data_type large_stringclasses 3
values | source large_stringclasses 29
values | code large_stringlengths 98 49.4M | filepath large_stringlengths 5 161 ⌀ | message large_stringclasses 234
values | commit large_stringclasses 234
values | subject large_stringclasses 418
values | critique large_stringlengths 101 1.26M ⌀ | metadata dict |
|---|---|---|---|---|---|---|---|---|
lkml_critique | lkml | From: Jan Kiszka <jan.kiszka@siemens.com>
This resolves the follow splat and lock-up when running with PREEMPT_RT
enabled on Hyper-V:
[ 415.140818] BUG: scheduling while atomic: stress-ng-iomix/1048/0x00000002
[ 415.140822] INFO: lockdep is turned off.
[ 415.140823] Modules linked in: intel_rapl_msr intel_rapl_common intel_uncore_frequency_common intel_pmc_core pmt_telemetry pmt_discovery pmt_class intel_pmc_ssram_telemetry intel_vsec ghash_clmulni_intel aesni_intel rapl binfmt_misc nls_ascii nls_cp437 vfat fat snd_pcm hyperv_drm snd_timer drm_client_lib drm_shmem_helper snd sg soundcore drm_kms_helper pcspkr hv_balloon hv_utils evdev joydev drm configfs efi_pstore nfnetlink vsock_loopback vmw_vsock_virtio_transport_common hv_sock vmw_vsock_vmci_transport vsock vmw_vmci efivarfs autofs4 ext4 crc16 mbcache jbd2 sr_mod sd_mod cdrom hv_storvsc serio_raw hid_generic scsi_transport_fc hid_hyperv scsi_mod hid hv_netvsc hyperv_keyboard scsi_common
[ 415.140846] Preemption disabled at:
[ 415.140847] [<ffffffffc0656171>] storvsc_queuecommand+0x2e1/0xbe0 [hv_storvsc]
[ 415.140854] CPU: 8 UID: 0 PID: 1048 Comm: stress-ng-iomix Not tainted 6.19.0-rc7 #30 PREEMPT_{RT,(full)}
[ 415.140856] Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/04/2024
[ 415.140857] Call Trace:
[ 415.140861] <TASK>
[ 415.140861] ? storvsc_queuecommand+0x2e1/0xbe0 [hv_storvsc]
[ 415.140863] dump_stack_lvl+0x91/0xb0
[ 415.140870] __schedule_bug+0x9c/0xc0
[ 415.140875] __schedule+0xdf6/0x1300
[ 415.140877] ? rtlock_slowlock_locked+0x56c/0x1980
[ 415.140879] ? rcu_is_watching+0x12/0x60
[ 415.140883] schedule_rtlock+0x21/0x40
[ 415.140885] rtlock_slowlock_locked+0x502/0x1980
[ 415.140891] rt_spin_lock+0x89/0x1e0
[ 415.140893] hv_ringbuffer_write+0x87/0x2a0
[ 415.140899] vmbus_sendpacket_mpb_desc+0xb6/0xe0
[ 415.140900] ? rcu_is_watching+0x12/0x60
[ 415.140902] storvsc_queuecommand+0x669/0xbe0 [hv_storvsc]
[ 415.140904] ? HARDIRQ_verbose+0x10/0x10
[ 415.140908] ? __rq_qos_issue+0x28/0x40
[ 415.140911] scsi_queue_rq+0x760/0xd80 [scsi_mod]
[ 415.140926] __blk_mq_issue_directly+0x4a/0xc0
[ 415.140928] blk_mq_issue_direct+0x87/0x2b0
[ 415.140931] blk_mq_dispatch_queue_requests+0x120/0x440
[ 415.140933] blk_mq_flush_plug_list+0x7a/0x1a0
[ 415.140935] __blk_flush_plug+0xf4/0x150
[ 415.140940] __submit_bio+0x2b2/0x5c0
[ 415.140944] ? submit_bio_noacct_nocheck+0x272/0x360
[ 415.140946] submit_bio_noacct_nocheck+0x272/0x360
[ 415.140951] ext4_read_bh_lock+0x3e/0x60 [ext4]
[ 415.140995] ext4_block_write_begin+0x396/0x650 [ext4]
[ 415.141018] ? __pfx_ext4_da_get_block_prep+0x10/0x10 [ext4]
[ 415.141038] ext4_da_write_begin+0x1c4/0x350 [ext4]
[ 415.141060] generic_perform_write+0x14e/0x2c0
[ 415.141065] ext4_buffered_write_iter+0x6b/0x120 [ext4]
[ 415.141083] vfs_write+0x2ca/0x570
[ 415.141087] ksys_write+0x76/0xf0
[ 415.141089] do_syscall_64+0x99/0x1490
[ 415.141093] ? rcu_is_watching+0x12/0x60
[ 415.141095] ? finish_task_switch.isra.0+0xdf/0x3d0
[ 415.141097] ? rcu_is_watching+0x12/0x60
[ 415.141098] ? lock_release+0x1f0/0x2a0
[ 415.141100] ? rcu_is_watching+0x12/0x60
[ 415.141101] ? finish_task_switch.isra.0+0xe4/0x3d0
[ 415.141103] ? rcu_is_watching+0x12/0x60
[ 415.141104] ? __schedule+0xb34/0x1300
[ 415.141106] ? hrtimer_try_to_cancel+0x1d/0x170
[ 415.141109] ? do_nanosleep+0x8b/0x160
[ 415.141111] ? hrtimer_nanosleep+0x89/0x100
[ 415.141114] ? __pfx_hrtimer_wakeup+0x10/0x10
[ 415.141116] ? xfd_validate_state+0x26/0x90
[ 415.141118] ? rcu_is_watching+0x12/0x60
[ 415.141120] ? do_syscall_64+0x1e0/0x1490
[ 415.141121] ? do_syscall_64+0x1e0/0x1490
[ 415.141123] ? rcu_is_watching+0x12/0x60
[ 415.141124] ? do_syscall_64+0x1e0/0x1490
[ 415.141125] ? do_syscall_64+0x1e0/0x1490
[ 415.141127] ? irqentry_exit+0x140/0x7e0
[ 415.141129] entry_SYSCALL_64_after_hwframe+0x76/0x7e
get_cpu() disables preemption while the spinlock hv_ringbuffer_write is
using is converted to an rt-mutex under PREEMPT_RT.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---
This is likely just the tip of an iceberg, see specifically [1], but if
you never start addressing it, it will continue to crash ships, even if
those are only on test cruises (we are fully aware that Hyper-V provides
no RT guarantees for guests). A pragmatic alternative to that would be a
simple
config HYPERV
depends on !PREEMPT_RT
Please share your thoughts if this fix is worth it, or if we should
better stop looking at the next splats that show up after it. We are
currently considering to thread some of the hv platform IRQs under
PREEMPT_RT as potential next step.
TIA!
[1] https://lore.kernel.org/all/20230809-b4-rt_preempt-fix-v1-0-7283bbdc8b14@gmail.com/
drivers/scsi/storvsc_drv.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
index b43d876747b7..68c837146b9e 100644
--- a/drivers/scsi/storvsc_drv.c
+++ b/drivers/scsi/storvsc_drv.c
@@ -1855,8 +1855,9 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
cmd_request->payload_sz = payload_sz;
/* Invokes the vsc to start an IO */
- ret = storvsc_do_io(dev, cmd_request, get_cpu());
- put_cpu();
+ migrate_disable();
+ ret = storvsc_do_io(dev, cmd_request, smp_processor_id());
+ migrate_enable();
if (ret)
scsi_dma_unmap(scmnd);
--
2.51.0
| null | null | null | [PATCH] scsi: storvsc: Fix scheduling while atomic on PREEMPT_RT | Hi Jan,
It's interesting to know the use-case of running a RT kernel over Hyper-V.
Can you give an example?
As far as I know, Hyper-V makes no RT guarantees of scheduling VPs for a VM.
Thanks,
Long | {
"author": "Long Li <longli@microsoft.com>",
"date": "Mon, 2 Feb 2026 23:47:31 +0000",
"is_openbsd": false,
"thread_id": "898e9467-0c05-46b4-a3ed-518797b829c5@siemens.com.mbox.gz"
} |
lkml_critique | lkml | From: Jan Kiszka <jan.kiszka@siemens.com>
This resolves the follow splat and lock-up when running with PREEMPT_RT
enabled on Hyper-V:
[ 415.140818] BUG: scheduling while atomic: stress-ng-iomix/1048/0x00000002
[ 415.140822] INFO: lockdep is turned off.
[ 415.140823] Modules linked in: intel_rapl_msr intel_rapl_common intel_uncore_frequency_common intel_pmc_core pmt_telemetry pmt_discovery pmt_class intel_pmc_ssram_telemetry intel_vsec ghash_clmulni_intel aesni_intel rapl binfmt_misc nls_ascii nls_cp437 vfat fat snd_pcm hyperv_drm snd_timer drm_client_lib drm_shmem_helper snd sg soundcore drm_kms_helper pcspkr hv_balloon hv_utils evdev joydev drm configfs efi_pstore nfnetlink vsock_loopback vmw_vsock_virtio_transport_common hv_sock vmw_vsock_vmci_transport vsock vmw_vmci efivarfs autofs4 ext4 crc16 mbcache jbd2 sr_mod sd_mod cdrom hv_storvsc serio_raw hid_generic scsi_transport_fc hid_hyperv scsi_mod hid hv_netvsc hyperv_keyboard scsi_common
[ 415.140846] Preemption disabled at:
[ 415.140847] [<ffffffffc0656171>] storvsc_queuecommand+0x2e1/0xbe0 [hv_storvsc]
[ 415.140854] CPU: 8 UID: 0 PID: 1048 Comm: stress-ng-iomix Not tainted 6.19.0-rc7 #30 PREEMPT_{RT,(full)}
[ 415.140856] Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/04/2024
[ 415.140857] Call Trace:
[ 415.140861] <TASK>
[ 415.140861] ? storvsc_queuecommand+0x2e1/0xbe0 [hv_storvsc]
[ 415.140863] dump_stack_lvl+0x91/0xb0
[ 415.140870] __schedule_bug+0x9c/0xc0
[ 415.140875] __schedule+0xdf6/0x1300
[ 415.140877] ? rtlock_slowlock_locked+0x56c/0x1980
[ 415.140879] ? rcu_is_watching+0x12/0x60
[ 415.140883] schedule_rtlock+0x21/0x40
[ 415.140885] rtlock_slowlock_locked+0x502/0x1980
[ 415.140891] rt_spin_lock+0x89/0x1e0
[ 415.140893] hv_ringbuffer_write+0x87/0x2a0
[ 415.140899] vmbus_sendpacket_mpb_desc+0xb6/0xe0
[ 415.140900] ? rcu_is_watching+0x12/0x60
[ 415.140902] storvsc_queuecommand+0x669/0xbe0 [hv_storvsc]
[ 415.140904] ? HARDIRQ_verbose+0x10/0x10
[ 415.140908] ? __rq_qos_issue+0x28/0x40
[ 415.140911] scsi_queue_rq+0x760/0xd80 [scsi_mod]
[ 415.140926] __blk_mq_issue_directly+0x4a/0xc0
[ 415.140928] blk_mq_issue_direct+0x87/0x2b0
[ 415.140931] blk_mq_dispatch_queue_requests+0x120/0x440
[ 415.140933] blk_mq_flush_plug_list+0x7a/0x1a0
[ 415.140935] __blk_flush_plug+0xf4/0x150
[ 415.140940] __submit_bio+0x2b2/0x5c0
[ 415.140944] ? submit_bio_noacct_nocheck+0x272/0x360
[ 415.140946] submit_bio_noacct_nocheck+0x272/0x360
[ 415.140951] ext4_read_bh_lock+0x3e/0x60 [ext4]
[ 415.140995] ext4_block_write_begin+0x396/0x650 [ext4]
[ 415.141018] ? __pfx_ext4_da_get_block_prep+0x10/0x10 [ext4]
[ 415.141038] ext4_da_write_begin+0x1c4/0x350 [ext4]
[ 415.141060] generic_perform_write+0x14e/0x2c0
[ 415.141065] ext4_buffered_write_iter+0x6b/0x120 [ext4]
[ 415.141083] vfs_write+0x2ca/0x570
[ 415.141087] ksys_write+0x76/0xf0
[ 415.141089] do_syscall_64+0x99/0x1490
[ 415.141093] ? rcu_is_watching+0x12/0x60
[ 415.141095] ? finish_task_switch.isra.0+0xdf/0x3d0
[ 415.141097] ? rcu_is_watching+0x12/0x60
[ 415.141098] ? lock_release+0x1f0/0x2a0
[ 415.141100] ? rcu_is_watching+0x12/0x60
[ 415.141101] ? finish_task_switch.isra.0+0xe4/0x3d0
[ 415.141103] ? rcu_is_watching+0x12/0x60
[ 415.141104] ? __schedule+0xb34/0x1300
[ 415.141106] ? hrtimer_try_to_cancel+0x1d/0x170
[ 415.141109] ? do_nanosleep+0x8b/0x160
[ 415.141111] ? hrtimer_nanosleep+0x89/0x100
[ 415.141114] ? __pfx_hrtimer_wakeup+0x10/0x10
[ 415.141116] ? xfd_validate_state+0x26/0x90
[ 415.141118] ? rcu_is_watching+0x12/0x60
[ 415.141120] ? do_syscall_64+0x1e0/0x1490
[ 415.141121] ? do_syscall_64+0x1e0/0x1490
[ 415.141123] ? rcu_is_watching+0x12/0x60
[ 415.141124] ? do_syscall_64+0x1e0/0x1490
[ 415.141125] ? do_syscall_64+0x1e0/0x1490
[ 415.141127] ? irqentry_exit+0x140/0x7e0
[ 415.141129] entry_SYSCALL_64_after_hwframe+0x76/0x7e
get_cpu() disables preemption while the spinlock hv_ringbuffer_write is
using is converted to an rt-mutex under PREEMPT_RT.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---
This is likely just the tip of an iceberg, see specifically [1], but if
you never start addressing it, it will continue to crash ships, even if
those are only on test cruises (we are fully aware that Hyper-V provides
no RT guarantees for guests). A pragmatic alternative to that would be a
simple
config HYPERV
depends on !PREEMPT_RT
Please share your thoughts if this fix is worth it, or if we should
better stop looking at the next splats that show up after it. We are
currently considering to thread some of the hv platform IRQs under
PREEMPT_RT as potential next step.
TIA!
[1] https://lore.kernel.org/all/20230809-b4-rt_preempt-fix-v1-0-7283bbdc8b14@gmail.com/
drivers/scsi/storvsc_drv.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
index b43d876747b7..68c837146b9e 100644
--- a/drivers/scsi/storvsc_drv.c
+++ b/drivers/scsi/storvsc_drv.c
@@ -1855,8 +1855,9 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
cmd_request->payload_sz = payload_sz;
/* Invokes the vsc to start an IO */
- ret = storvsc_do_io(dev, cmd_request, get_cpu());
- put_cpu();
+ migrate_disable();
+ ret = storvsc_do_io(dev, cmd_request, smp_processor_id());
+ migrate_enable();
if (ret)
scsi_dma_unmap(scmnd);
--
2.51.0
| null | null | null | [PATCH] scsi: storvsc: Fix scheduling while atomic on PREEMPT_RT | On 03.02.26 00:47, Long Li wrote:
- functional testing of an RT base image over Hyper-V
- re-use of a common RT base image, without exploiting RT properties
This is well understood and not our goal. We only need the kernel to run
correctly over Hyper-V with PREEMPT-RT enabled, and that is not the case
right now.
Thanks,
Jan
PS: Who had to idea to drop a virtual UART from Gen 2 VMs? Early boot
guest debugging is true fun now...
--
Siemens AG, Foundational Technologies
Linux Expert Center | {
"author": "Jan Kiszka <jan.kiszka@siemens.com>",
"date": "Tue, 3 Feb 2026 06:57:46 +0100",
"is_openbsd": false,
"thread_id": "898e9467-0c05-46b4-a3ed-518797b829c5@siemens.com.mbox.gz"
} |
lkml_critique | lkml | From: Jan Kiszka <jan.kiszka@siemens.com>
This resolves the follow splat and lock-up when running with PREEMPT_RT
enabled on Hyper-V:
[ 415.140818] BUG: scheduling while atomic: stress-ng-iomix/1048/0x00000002
[ 415.140822] INFO: lockdep is turned off.
[ 415.140823] Modules linked in: intel_rapl_msr intel_rapl_common intel_uncore_frequency_common intel_pmc_core pmt_telemetry pmt_discovery pmt_class intel_pmc_ssram_telemetry intel_vsec ghash_clmulni_intel aesni_intel rapl binfmt_misc nls_ascii nls_cp437 vfat fat snd_pcm hyperv_drm snd_timer drm_client_lib drm_shmem_helper snd sg soundcore drm_kms_helper pcspkr hv_balloon hv_utils evdev joydev drm configfs efi_pstore nfnetlink vsock_loopback vmw_vsock_virtio_transport_common hv_sock vmw_vsock_vmci_transport vsock vmw_vmci efivarfs autofs4 ext4 crc16 mbcache jbd2 sr_mod sd_mod cdrom hv_storvsc serio_raw hid_generic scsi_transport_fc hid_hyperv scsi_mod hid hv_netvsc hyperv_keyboard scsi_common
[ 415.140846] Preemption disabled at:
[ 415.140847] [<ffffffffc0656171>] storvsc_queuecommand+0x2e1/0xbe0 [hv_storvsc]
[ 415.140854] CPU: 8 UID: 0 PID: 1048 Comm: stress-ng-iomix Not tainted 6.19.0-rc7 #30 PREEMPT_{RT,(full)}
[ 415.140856] Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/04/2024
[ 415.140857] Call Trace:
[ 415.140861] <TASK>
[ 415.140861] ? storvsc_queuecommand+0x2e1/0xbe0 [hv_storvsc]
[ 415.140863] dump_stack_lvl+0x91/0xb0
[ 415.140870] __schedule_bug+0x9c/0xc0
[ 415.140875] __schedule+0xdf6/0x1300
[ 415.140877] ? rtlock_slowlock_locked+0x56c/0x1980
[ 415.140879] ? rcu_is_watching+0x12/0x60
[ 415.140883] schedule_rtlock+0x21/0x40
[ 415.140885] rtlock_slowlock_locked+0x502/0x1980
[ 415.140891] rt_spin_lock+0x89/0x1e0
[ 415.140893] hv_ringbuffer_write+0x87/0x2a0
[ 415.140899] vmbus_sendpacket_mpb_desc+0xb6/0xe0
[ 415.140900] ? rcu_is_watching+0x12/0x60
[ 415.140902] storvsc_queuecommand+0x669/0xbe0 [hv_storvsc]
[ 415.140904] ? HARDIRQ_verbose+0x10/0x10
[ 415.140908] ? __rq_qos_issue+0x28/0x40
[ 415.140911] scsi_queue_rq+0x760/0xd80 [scsi_mod]
[ 415.140926] __blk_mq_issue_directly+0x4a/0xc0
[ 415.140928] blk_mq_issue_direct+0x87/0x2b0
[ 415.140931] blk_mq_dispatch_queue_requests+0x120/0x440
[ 415.140933] blk_mq_flush_plug_list+0x7a/0x1a0
[ 415.140935] __blk_flush_plug+0xf4/0x150
[ 415.140940] __submit_bio+0x2b2/0x5c0
[ 415.140944] ? submit_bio_noacct_nocheck+0x272/0x360
[ 415.140946] submit_bio_noacct_nocheck+0x272/0x360
[ 415.140951] ext4_read_bh_lock+0x3e/0x60 [ext4]
[ 415.140995] ext4_block_write_begin+0x396/0x650 [ext4]
[ 415.141018] ? __pfx_ext4_da_get_block_prep+0x10/0x10 [ext4]
[ 415.141038] ext4_da_write_begin+0x1c4/0x350 [ext4]
[ 415.141060] generic_perform_write+0x14e/0x2c0
[ 415.141065] ext4_buffered_write_iter+0x6b/0x120 [ext4]
[ 415.141083] vfs_write+0x2ca/0x570
[ 415.141087] ksys_write+0x76/0xf0
[ 415.141089] do_syscall_64+0x99/0x1490
[ 415.141093] ? rcu_is_watching+0x12/0x60
[ 415.141095] ? finish_task_switch.isra.0+0xdf/0x3d0
[ 415.141097] ? rcu_is_watching+0x12/0x60
[ 415.141098] ? lock_release+0x1f0/0x2a0
[ 415.141100] ? rcu_is_watching+0x12/0x60
[ 415.141101] ? finish_task_switch.isra.0+0xe4/0x3d0
[ 415.141103] ? rcu_is_watching+0x12/0x60
[ 415.141104] ? __schedule+0xb34/0x1300
[ 415.141106] ? hrtimer_try_to_cancel+0x1d/0x170
[ 415.141109] ? do_nanosleep+0x8b/0x160
[ 415.141111] ? hrtimer_nanosleep+0x89/0x100
[ 415.141114] ? __pfx_hrtimer_wakeup+0x10/0x10
[ 415.141116] ? xfd_validate_state+0x26/0x90
[ 415.141118] ? rcu_is_watching+0x12/0x60
[ 415.141120] ? do_syscall_64+0x1e0/0x1490
[ 415.141121] ? do_syscall_64+0x1e0/0x1490
[ 415.141123] ? rcu_is_watching+0x12/0x60
[ 415.141124] ? do_syscall_64+0x1e0/0x1490
[ 415.141125] ? do_syscall_64+0x1e0/0x1490
[ 415.141127] ? irqentry_exit+0x140/0x7e0
[ 415.141129] entry_SYSCALL_64_after_hwframe+0x76/0x7e
get_cpu() disables preemption while the spinlock hv_ringbuffer_write is
using is converted to an rt-mutex under PREEMPT_RT.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---
This is likely just the tip of an iceberg, see specifically [1], but if
you never start addressing it, it will continue to crash ships, even if
those are only on test cruises (we are fully aware that Hyper-V provides
no RT guarantees for guests). A pragmatic alternative to that would be a
simple
config HYPERV
depends on !PREEMPT_RT
Please share your thoughts if this fix is worth it, or if we should
better stop looking at the next splats that show up after it. We are
currently considering to thread some of the hv platform IRQs under
PREEMPT_RT as potential next step.
TIA!
[1] https://lore.kernel.org/all/20230809-b4-rt_preempt-fix-v1-0-7283bbdc8b14@gmail.com/
drivers/scsi/storvsc_drv.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
index b43d876747b7..68c837146b9e 100644
--- a/drivers/scsi/storvsc_drv.c
+++ b/drivers/scsi/storvsc_drv.c
@@ -1855,8 +1855,9 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
cmd_request->payload_sz = payload_sz;
/* Invokes the vsc to start an IO */
- ret = storvsc_do_io(dev, cmd_request, get_cpu());
- put_cpu();
+ migrate_disable();
+ ret = storvsc_do_io(dev, cmd_request, smp_processor_id());
+ migrate_enable();
if (ret)
scsi_dma_unmap(scmnd);
--
2.51.0
| null | null | null | [PATCH] scsi: storvsc: Fix scheduling while atomic on PREEMPT_RT | On 03.02.26 06:57, Jan Kiszka wrote:
OK, after some guessing, the patched kernel boots again. So I think I
also fixed the broken vmbus IRQ patch by threading it under RT.
Currently building a kernel inside the VM while lockdep is enabled.
Boot-up and first minutes of building didn't trigger any complaints.
Will share later on.
Jan
--
Siemens AG, Foundational Technologies
Linux Expert Center | {
"author": "Jan Kiszka <jan.kiszka@siemens.com>",
"date": "Tue, 3 Feb 2026 07:10:19 +0100",
"is_openbsd": false,
"thread_id": "898e9467-0c05-46b4-a3ed-518797b829c5@siemens.com.mbox.gz"
} |
lkml_critique | lkml | From: Jan Kiszka <jan.kiszka@siemens.com>
This resolves the follow splat and lock-up when running with PREEMPT_RT
enabled on Hyper-V:
[ 415.140818] BUG: scheduling while atomic: stress-ng-iomix/1048/0x00000002
[ 415.140822] INFO: lockdep is turned off.
[ 415.140823] Modules linked in: intel_rapl_msr intel_rapl_common intel_uncore_frequency_common intel_pmc_core pmt_telemetry pmt_discovery pmt_class intel_pmc_ssram_telemetry intel_vsec ghash_clmulni_intel aesni_intel rapl binfmt_misc nls_ascii nls_cp437 vfat fat snd_pcm hyperv_drm snd_timer drm_client_lib drm_shmem_helper snd sg soundcore drm_kms_helper pcspkr hv_balloon hv_utils evdev joydev drm configfs efi_pstore nfnetlink vsock_loopback vmw_vsock_virtio_transport_common hv_sock vmw_vsock_vmci_transport vsock vmw_vmci efivarfs autofs4 ext4 crc16 mbcache jbd2 sr_mod sd_mod cdrom hv_storvsc serio_raw hid_generic scsi_transport_fc hid_hyperv scsi_mod hid hv_netvsc hyperv_keyboard scsi_common
[ 415.140846] Preemption disabled at:
[ 415.140847] [<ffffffffc0656171>] storvsc_queuecommand+0x2e1/0xbe0 [hv_storvsc]
[ 415.140854] CPU: 8 UID: 0 PID: 1048 Comm: stress-ng-iomix Not tainted 6.19.0-rc7 #30 PREEMPT_{RT,(full)}
[ 415.140856] Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/04/2024
[ 415.140857] Call Trace:
[ 415.140861] <TASK>
[ 415.140861] ? storvsc_queuecommand+0x2e1/0xbe0 [hv_storvsc]
[ 415.140863] dump_stack_lvl+0x91/0xb0
[ 415.140870] __schedule_bug+0x9c/0xc0
[ 415.140875] __schedule+0xdf6/0x1300
[ 415.140877] ? rtlock_slowlock_locked+0x56c/0x1980
[ 415.140879] ? rcu_is_watching+0x12/0x60
[ 415.140883] schedule_rtlock+0x21/0x40
[ 415.140885] rtlock_slowlock_locked+0x502/0x1980
[ 415.140891] rt_spin_lock+0x89/0x1e0
[ 415.140893] hv_ringbuffer_write+0x87/0x2a0
[ 415.140899] vmbus_sendpacket_mpb_desc+0xb6/0xe0
[ 415.140900] ? rcu_is_watching+0x12/0x60
[ 415.140902] storvsc_queuecommand+0x669/0xbe0 [hv_storvsc]
[ 415.140904] ? HARDIRQ_verbose+0x10/0x10
[ 415.140908] ? __rq_qos_issue+0x28/0x40
[ 415.140911] scsi_queue_rq+0x760/0xd80 [scsi_mod]
[ 415.140926] __blk_mq_issue_directly+0x4a/0xc0
[ 415.140928] blk_mq_issue_direct+0x87/0x2b0
[ 415.140931] blk_mq_dispatch_queue_requests+0x120/0x440
[ 415.140933] blk_mq_flush_plug_list+0x7a/0x1a0
[ 415.140935] __blk_flush_plug+0xf4/0x150
[ 415.140940] __submit_bio+0x2b2/0x5c0
[ 415.140944] ? submit_bio_noacct_nocheck+0x272/0x360
[ 415.140946] submit_bio_noacct_nocheck+0x272/0x360
[ 415.140951] ext4_read_bh_lock+0x3e/0x60 [ext4]
[ 415.140995] ext4_block_write_begin+0x396/0x650 [ext4]
[ 415.141018] ? __pfx_ext4_da_get_block_prep+0x10/0x10 [ext4]
[ 415.141038] ext4_da_write_begin+0x1c4/0x350 [ext4]
[ 415.141060] generic_perform_write+0x14e/0x2c0
[ 415.141065] ext4_buffered_write_iter+0x6b/0x120 [ext4]
[ 415.141083] vfs_write+0x2ca/0x570
[ 415.141087] ksys_write+0x76/0xf0
[ 415.141089] do_syscall_64+0x99/0x1490
[ 415.141093] ? rcu_is_watching+0x12/0x60
[ 415.141095] ? finish_task_switch.isra.0+0xdf/0x3d0
[ 415.141097] ? rcu_is_watching+0x12/0x60
[ 415.141098] ? lock_release+0x1f0/0x2a0
[ 415.141100] ? rcu_is_watching+0x12/0x60
[ 415.141101] ? finish_task_switch.isra.0+0xe4/0x3d0
[ 415.141103] ? rcu_is_watching+0x12/0x60
[ 415.141104] ? __schedule+0xb34/0x1300
[ 415.141106] ? hrtimer_try_to_cancel+0x1d/0x170
[ 415.141109] ? do_nanosleep+0x8b/0x160
[ 415.141111] ? hrtimer_nanosleep+0x89/0x100
[ 415.141114] ? __pfx_hrtimer_wakeup+0x10/0x10
[ 415.141116] ? xfd_validate_state+0x26/0x90
[ 415.141118] ? rcu_is_watching+0x12/0x60
[ 415.141120] ? do_syscall_64+0x1e0/0x1490
[ 415.141121] ? do_syscall_64+0x1e0/0x1490
[ 415.141123] ? rcu_is_watching+0x12/0x60
[ 415.141124] ? do_syscall_64+0x1e0/0x1490
[ 415.141125] ? do_syscall_64+0x1e0/0x1490
[ 415.141127] ? irqentry_exit+0x140/0x7e0
[ 415.141129] entry_SYSCALL_64_after_hwframe+0x76/0x7e
get_cpu() disables preemption while the spinlock hv_ringbuffer_write is
using is converted to an rt-mutex under PREEMPT_RT.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---
This is likely just the tip of an iceberg, see specifically [1], but if
you never start addressing it, it will continue to crash ships, even if
those are only on test cruises (we are fully aware that Hyper-V provides
no RT guarantees for guests). A pragmatic alternative to that would be a
simple
config HYPERV
depends on !PREEMPT_RT
Please share your thoughts if this fix is worth it, or if we should
better stop looking at the next splats that show up after it. We are
currently considering to thread some of the hv platform IRQs under
PREEMPT_RT as potential next step.
TIA!
[1] https://lore.kernel.org/all/20230809-b4-rt_preempt-fix-v1-0-7283bbdc8b14@gmail.com/
drivers/scsi/storvsc_drv.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
index b43d876747b7..68c837146b9e 100644
--- a/drivers/scsi/storvsc_drv.c
+++ b/drivers/scsi/storvsc_drv.c
@@ -1855,8 +1855,9 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
cmd_request->payload_sz = payload_sz;
/* Invokes the vsc to start an IO */
- ret = storvsc_do_io(dev, cmd_request, get_cpu());
- put_cpu();
+ migrate_disable();
+ ret = storvsc_do_io(dev, cmd_request, smp_processor_id());
+ migrate_enable();
if (ret)
scsi_dma_unmap(scmnd);
--
2.51.0
| null | null | null | [PATCH] scsi: storvsc: Fix scheduling while atomic on PREEMPT_RT | From: Jan Kiszka <jan.kiszka@siemens.com> Sent: Monday, February 2, 2026 9:58 PM
Hmmm. I often do printk()-based debugging via a virtual UART in a Gen 2
VM. The Linux serial console outputs to that virtual UART and I see the
printk() output in PuTTY on the Windows host. What specifically are you
trying to do? I'm trying to remember if there's any unique setup required
on a Gen 2 VM vs. a Gen 1 VM, and nothing immediately comes to mind.
Though maybe it's just so baked into my process that I don't remember it!
Michael | {
"author": "Michael Kelley <mhklinux@outlook.com>",
"date": "Thu, 5 Feb 2026 05:42:02 +0000",
"is_openbsd": false,
"thread_id": "898e9467-0c05-46b4-a3ed-518797b829c5@siemens.com.mbox.gz"
} |
lkml_critique | lkml | From: Jan Kiszka <jan.kiszka@siemens.com>
This resolves the follow splat and lock-up when running with PREEMPT_RT
enabled on Hyper-V:
[ 415.140818] BUG: scheduling while atomic: stress-ng-iomix/1048/0x00000002
[ 415.140822] INFO: lockdep is turned off.
[ 415.140823] Modules linked in: intel_rapl_msr intel_rapl_common intel_uncore_frequency_common intel_pmc_core pmt_telemetry pmt_discovery pmt_class intel_pmc_ssram_telemetry intel_vsec ghash_clmulni_intel aesni_intel rapl binfmt_misc nls_ascii nls_cp437 vfat fat snd_pcm hyperv_drm snd_timer drm_client_lib drm_shmem_helper snd sg soundcore drm_kms_helper pcspkr hv_balloon hv_utils evdev joydev drm configfs efi_pstore nfnetlink vsock_loopback vmw_vsock_virtio_transport_common hv_sock vmw_vsock_vmci_transport vsock vmw_vmci efivarfs autofs4 ext4 crc16 mbcache jbd2 sr_mod sd_mod cdrom hv_storvsc serio_raw hid_generic scsi_transport_fc hid_hyperv scsi_mod hid hv_netvsc hyperv_keyboard scsi_common
[ 415.140846] Preemption disabled at:
[ 415.140847] [<ffffffffc0656171>] storvsc_queuecommand+0x2e1/0xbe0 [hv_storvsc]
[ 415.140854] CPU: 8 UID: 0 PID: 1048 Comm: stress-ng-iomix Not tainted 6.19.0-rc7 #30 PREEMPT_{RT,(full)}
[ 415.140856] Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/04/2024
[ 415.140857] Call Trace:
[ 415.140861] <TASK>
[ 415.140861] ? storvsc_queuecommand+0x2e1/0xbe0 [hv_storvsc]
[ 415.140863] dump_stack_lvl+0x91/0xb0
[ 415.140870] __schedule_bug+0x9c/0xc0
[ 415.140875] __schedule+0xdf6/0x1300
[ 415.140877] ? rtlock_slowlock_locked+0x56c/0x1980
[ 415.140879] ? rcu_is_watching+0x12/0x60
[ 415.140883] schedule_rtlock+0x21/0x40
[ 415.140885] rtlock_slowlock_locked+0x502/0x1980
[ 415.140891] rt_spin_lock+0x89/0x1e0
[ 415.140893] hv_ringbuffer_write+0x87/0x2a0
[ 415.140899] vmbus_sendpacket_mpb_desc+0xb6/0xe0
[ 415.140900] ? rcu_is_watching+0x12/0x60
[ 415.140902] storvsc_queuecommand+0x669/0xbe0 [hv_storvsc]
[ 415.140904] ? HARDIRQ_verbose+0x10/0x10
[ 415.140908] ? __rq_qos_issue+0x28/0x40
[ 415.140911] scsi_queue_rq+0x760/0xd80 [scsi_mod]
[ 415.140926] __blk_mq_issue_directly+0x4a/0xc0
[ 415.140928] blk_mq_issue_direct+0x87/0x2b0
[ 415.140931] blk_mq_dispatch_queue_requests+0x120/0x440
[ 415.140933] blk_mq_flush_plug_list+0x7a/0x1a0
[ 415.140935] __blk_flush_plug+0xf4/0x150
[ 415.140940] __submit_bio+0x2b2/0x5c0
[ 415.140944] ? submit_bio_noacct_nocheck+0x272/0x360
[ 415.140946] submit_bio_noacct_nocheck+0x272/0x360
[ 415.140951] ext4_read_bh_lock+0x3e/0x60 [ext4]
[ 415.140995] ext4_block_write_begin+0x396/0x650 [ext4]
[ 415.141018] ? __pfx_ext4_da_get_block_prep+0x10/0x10 [ext4]
[ 415.141038] ext4_da_write_begin+0x1c4/0x350 [ext4]
[ 415.141060] generic_perform_write+0x14e/0x2c0
[ 415.141065] ext4_buffered_write_iter+0x6b/0x120 [ext4]
[ 415.141083] vfs_write+0x2ca/0x570
[ 415.141087] ksys_write+0x76/0xf0
[ 415.141089] do_syscall_64+0x99/0x1490
[ 415.141093] ? rcu_is_watching+0x12/0x60
[ 415.141095] ? finish_task_switch.isra.0+0xdf/0x3d0
[ 415.141097] ? rcu_is_watching+0x12/0x60
[ 415.141098] ? lock_release+0x1f0/0x2a0
[ 415.141100] ? rcu_is_watching+0x12/0x60
[ 415.141101] ? finish_task_switch.isra.0+0xe4/0x3d0
[ 415.141103] ? rcu_is_watching+0x12/0x60
[ 415.141104] ? __schedule+0xb34/0x1300
[ 415.141106] ? hrtimer_try_to_cancel+0x1d/0x170
[ 415.141109] ? do_nanosleep+0x8b/0x160
[ 415.141111] ? hrtimer_nanosleep+0x89/0x100
[ 415.141114] ? __pfx_hrtimer_wakeup+0x10/0x10
[ 415.141116] ? xfd_validate_state+0x26/0x90
[ 415.141118] ? rcu_is_watching+0x12/0x60
[ 415.141120] ? do_syscall_64+0x1e0/0x1490
[ 415.141121] ? do_syscall_64+0x1e0/0x1490
[ 415.141123] ? rcu_is_watching+0x12/0x60
[ 415.141124] ? do_syscall_64+0x1e0/0x1490
[ 415.141125] ? do_syscall_64+0x1e0/0x1490
[ 415.141127] ? irqentry_exit+0x140/0x7e0
[ 415.141129] entry_SYSCALL_64_after_hwframe+0x76/0x7e
get_cpu() disables preemption while the spinlock hv_ringbuffer_write is
using is converted to an rt-mutex under PREEMPT_RT.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---
This is likely just the tip of an iceberg, see specifically [1], but if
you never start addressing it, it will continue to crash ships, even if
those are only on test cruises (we are fully aware that Hyper-V provides
no RT guarantees for guests). A pragmatic alternative to that would be a
simple
config HYPERV
depends on !PREEMPT_RT
Please share your thoughts if this fix is worth it, or if we should
better stop looking at the next splats that show up after it. We are
currently considering to thread some of the hv platform IRQs under
PREEMPT_RT as potential next step.
TIA!
[1] https://lore.kernel.org/all/20230809-b4-rt_preempt-fix-v1-0-7283bbdc8b14@gmail.com/
drivers/scsi/storvsc_drv.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
index b43d876747b7..68c837146b9e 100644
--- a/drivers/scsi/storvsc_drv.c
+++ b/drivers/scsi/storvsc_drv.c
@@ -1855,8 +1855,9 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
cmd_request->payload_sz = payload_sz;
/* Invokes the vsc to start an IO */
- ret = storvsc_do_io(dev, cmd_request, get_cpu());
- put_cpu();
+ migrate_disable();
+ ret = storvsc_do_io(dev, cmd_request, smp_processor_id());
+ migrate_enable();
if (ret)
scsi_dma_unmap(scmnd);
--
2.51.0
| null | null | null | [PATCH] scsi: storvsc: Fix scheduling while atomic on PREEMPT_RT | On 05.02.26 06:42, Michael Kelley wrote:
Indeed:
Powershell> Set-VMComPort -VMName "Debian 13" 1 \\.\pipe\comport
<Start VM>
Powershell> putty -serial \\.\pipe\comport
Well hidden...
Jan
--
Siemens AG, Foundational Technologies
Linux Expert Center | {
"author": "Jan Kiszka <jan.kiszka@siemens.com>",
"date": "Thu, 5 Feb 2026 07:37:35 +0100",
"is_openbsd": false,
"thread_id": "898e9467-0c05-46b4-a3ed-518797b829c5@siemens.com.mbox.gz"
} |
lkml_critique | lkml | From: Jan Kiszka <jan.kiszka@siemens.com>
This resolves the follow splat and lock-up when running with PREEMPT_RT
enabled on Hyper-V:
[ 415.140818] BUG: scheduling while atomic: stress-ng-iomix/1048/0x00000002
[ 415.140822] INFO: lockdep is turned off.
[ 415.140823] Modules linked in: intel_rapl_msr intel_rapl_common intel_uncore_frequency_common intel_pmc_core pmt_telemetry pmt_discovery pmt_class intel_pmc_ssram_telemetry intel_vsec ghash_clmulni_intel aesni_intel rapl binfmt_misc nls_ascii nls_cp437 vfat fat snd_pcm hyperv_drm snd_timer drm_client_lib drm_shmem_helper snd sg soundcore drm_kms_helper pcspkr hv_balloon hv_utils evdev joydev drm configfs efi_pstore nfnetlink vsock_loopback vmw_vsock_virtio_transport_common hv_sock vmw_vsock_vmci_transport vsock vmw_vmci efivarfs autofs4 ext4 crc16 mbcache jbd2 sr_mod sd_mod cdrom hv_storvsc serio_raw hid_generic scsi_transport_fc hid_hyperv scsi_mod hid hv_netvsc hyperv_keyboard scsi_common
[ 415.140846] Preemption disabled at:
[ 415.140847] [<ffffffffc0656171>] storvsc_queuecommand+0x2e1/0xbe0 [hv_storvsc]
[ 415.140854] CPU: 8 UID: 0 PID: 1048 Comm: stress-ng-iomix Not tainted 6.19.0-rc7 #30 PREEMPT_{RT,(full)}
[ 415.140856] Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/04/2024
[ 415.140857] Call Trace:
[ 415.140861] <TASK>
[ 415.140861] ? storvsc_queuecommand+0x2e1/0xbe0 [hv_storvsc]
[ 415.140863] dump_stack_lvl+0x91/0xb0
[ 415.140870] __schedule_bug+0x9c/0xc0
[ 415.140875] __schedule+0xdf6/0x1300
[ 415.140877] ? rtlock_slowlock_locked+0x56c/0x1980
[ 415.140879] ? rcu_is_watching+0x12/0x60
[ 415.140883] schedule_rtlock+0x21/0x40
[ 415.140885] rtlock_slowlock_locked+0x502/0x1980
[ 415.140891] rt_spin_lock+0x89/0x1e0
[ 415.140893] hv_ringbuffer_write+0x87/0x2a0
[ 415.140899] vmbus_sendpacket_mpb_desc+0xb6/0xe0
[ 415.140900] ? rcu_is_watching+0x12/0x60
[ 415.140902] storvsc_queuecommand+0x669/0xbe0 [hv_storvsc]
[ 415.140904] ? HARDIRQ_verbose+0x10/0x10
[ 415.140908] ? __rq_qos_issue+0x28/0x40
[ 415.140911] scsi_queue_rq+0x760/0xd80 [scsi_mod]
[ 415.140926] __blk_mq_issue_directly+0x4a/0xc0
[ 415.140928] blk_mq_issue_direct+0x87/0x2b0
[ 415.140931] blk_mq_dispatch_queue_requests+0x120/0x440
[ 415.140933] blk_mq_flush_plug_list+0x7a/0x1a0
[ 415.140935] __blk_flush_plug+0xf4/0x150
[ 415.140940] __submit_bio+0x2b2/0x5c0
[ 415.140944] ? submit_bio_noacct_nocheck+0x272/0x360
[ 415.140946] submit_bio_noacct_nocheck+0x272/0x360
[ 415.140951] ext4_read_bh_lock+0x3e/0x60 [ext4]
[ 415.140995] ext4_block_write_begin+0x396/0x650 [ext4]
[ 415.141018] ? __pfx_ext4_da_get_block_prep+0x10/0x10 [ext4]
[ 415.141038] ext4_da_write_begin+0x1c4/0x350 [ext4]
[ 415.141060] generic_perform_write+0x14e/0x2c0
[ 415.141065] ext4_buffered_write_iter+0x6b/0x120 [ext4]
[ 415.141083] vfs_write+0x2ca/0x570
[ 415.141087] ksys_write+0x76/0xf0
[ 415.141089] do_syscall_64+0x99/0x1490
[ 415.141093] ? rcu_is_watching+0x12/0x60
[ 415.141095] ? finish_task_switch.isra.0+0xdf/0x3d0
[ 415.141097] ? rcu_is_watching+0x12/0x60
[ 415.141098] ? lock_release+0x1f0/0x2a0
[ 415.141100] ? rcu_is_watching+0x12/0x60
[ 415.141101] ? finish_task_switch.isra.0+0xe4/0x3d0
[ 415.141103] ? rcu_is_watching+0x12/0x60
[ 415.141104] ? __schedule+0xb34/0x1300
[ 415.141106] ? hrtimer_try_to_cancel+0x1d/0x170
[ 415.141109] ? do_nanosleep+0x8b/0x160
[ 415.141111] ? hrtimer_nanosleep+0x89/0x100
[ 415.141114] ? __pfx_hrtimer_wakeup+0x10/0x10
[ 415.141116] ? xfd_validate_state+0x26/0x90
[ 415.141118] ? rcu_is_watching+0x12/0x60
[ 415.141120] ? do_syscall_64+0x1e0/0x1490
[ 415.141121] ? do_syscall_64+0x1e0/0x1490
[ 415.141123] ? rcu_is_watching+0x12/0x60
[ 415.141124] ? do_syscall_64+0x1e0/0x1490
[ 415.141125] ? do_syscall_64+0x1e0/0x1490
[ 415.141127] ? irqentry_exit+0x140/0x7e0
[ 415.141129] entry_SYSCALL_64_after_hwframe+0x76/0x7e
get_cpu() disables preemption while the spinlock hv_ringbuffer_write is
using is converted to an rt-mutex under PREEMPT_RT.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---
This is likely just the tip of an iceberg, see specifically [1], but if
you never start addressing it, it will continue to crash ships, even if
those are only on test cruises (we are fully aware that Hyper-V provides
no RT guarantees for guests). A pragmatic alternative to that would be a
simple
config HYPERV
depends on !PREEMPT_RT
Please share your thoughts if this fix is worth it, or if we should
better stop looking at the next splats that show up after it. We are
currently considering to thread some of the hv platform IRQs under
PREEMPT_RT as potential next step.
TIA!
[1] https://lore.kernel.org/all/20230809-b4-rt_preempt-fix-v1-0-7283bbdc8b14@gmail.com/
drivers/scsi/storvsc_drv.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
index b43d876747b7..68c837146b9e 100644
--- a/drivers/scsi/storvsc_drv.c
+++ b/drivers/scsi/storvsc_drv.c
@@ -1855,8 +1855,9 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
cmd_request->payload_sz = payload_sz;
/* Invokes the vsc to start an IO */
- ret = storvsc_do_io(dev, cmd_request, get_cpu());
- put_cpu();
+ migrate_disable();
+ ret = storvsc_do_io(dev, cmd_request, smp_processor_id());
+ migrate_enable();
if (ret)
scsi_dma_unmap(scmnd);
--
2.51.0
| null | null | null | [PATCH] scsi: storvsc: Fix scheduling while atomic on PREEMPT_RT | On Thu, 2026-01-29 at 15:30 +0100, Jan Kiszka wrote:
Tested-by: Florian Bezdeka <florian.bezdeka@siemens.com>
This patch survived a 24h stress test with CONFIG_PREEMPT_RT enabled and
heavy load applied to the system.
Without this patch - and very same system configuration - the system
will lock up within 2 minutes.
Best regards,
Florian
--
Siemens AG, Foundational Technologies
Linux Expert Center | {
"author": "\"Bezdeka, Florian\" <florian.bezdeka@siemens.com>",
"date": "Thu, 5 Feb 2026 14:09:51 +0000",
"is_openbsd": false,
"thread_id": "898e9467-0c05-46b4-a3ed-518797b829c5@siemens.com.mbox.gz"
} |
lkml_critique | lkml | From: Jan Kiszka <jan.kiszka@siemens.com>
This resolves the follow splat and lock-up when running with PREEMPT_RT
enabled on Hyper-V:
[ 415.140818] BUG: scheduling while atomic: stress-ng-iomix/1048/0x00000002
[ 415.140822] INFO: lockdep is turned off.
[ 415.140823] Modules linked in: intel_rapl_msr intel_rapl_common intel_uncore_frequency_common intel_pmc_core pmt_telemetry pmt_discovery pmt_class intel_pmc_ssram_telemetry intel_vsec ghash_clmulni_intel aesni_intel rapl binfmt_misc nls_ascii nls_cp437 vfat fat snd_pcm hyperv_drm snd_timer drm_client_lib drm_shmem_helper snd sg soundcore drm_kms_helper pcspkr hv_balloon hv_utils evdev joydev drm configfs efi_pstore nfnetlink vsock_loopback vmw_vsock_virtio_transport_common hv_sock vmw_vsock_vmci_transport vsock vmw_vmci efivarfs autofs4 ext4 crc16 mbcache jbd2 sr_mod sd_mod cdrom hv_storvsc serio_raw hid_generic scsi_transport_fc hid_hyperv scsi_mod hid hv_netvsc hyperv_keyboard scsi_common
[ 415.140846] Preemption disabled at:
[ 415.140847] [<ffffffffc0656171>] storvsc_queuecommand+0x2e1/0xbe0 [hv_storvsc]
[ 415.140854] CPU: 8 UID: 0 PID: 1048 Comm: stress-ng-iomix Not tainted 6.19.0-rc7 #30 PREEMPT_{RT,(full)}
[ 415.140856] Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/04/2024
[ 415.140857] Call Trace:
[ 415.140861] <TASK>
[ 415.140861] ? storvsc_queuecommand+0x2e1/0xbe0 [hv_storvsc]
[ 415.140863] dump_stack_lvl+0x91/0xb0
[ 415.140870] __schedule_bug+0x9c/0xc0
[ 415.140875] __schedule+0xdf6/0x1300
[ 415.140877] ? rtlock_slowlock_locked+0x56c/0x1980
[ 415.140879] ? rcu_is_watching+0x12/0x60
[ 415.140883] schedule_rtlock+0x21/0x40
[ 415.140885] rtlock_slowlock_locked+0x502/0x1980
[ 415.140891] rt_spin_lock+0x89/0x1e0
[ 415.140893] hv_ringbuffer_write+0x87/0x2a0
[ 415.140899] vmbus_sendpacket_mpb_desc+0xb6/0xe0
[ 415.140900] ? rcu_is_watching+0x12/0x60
[ 415.140902] storvsc_queuecommand+0x669/0xbe0 [hv_storvsc]
[ 415.140904] ? HARDIRQ_verbose+0x10/0x10
[ 415.140908] ? __rq_qos_issue+0x28/0x40
[ 415.140911] scsi_queue_rq+0x760/0xd80 [scsi_mod]
[ 415.140926] __blk_mq_issue_directly+0x4a/0xc0
[ 415.140928] blk_mq_issue_direct+0x87/0x2b0
[ 415.140931] blk_mq_dispatch_queue_requests+0x120/0x440
[ 415.140933] blk_mq_flush_plug_list+0x7a/0x1a0
[ 415.140935] __blk_flush_plug+0xf4/0x150
[ 415.140940] __submit_bio+0x2b2/0x5c0
[ 415.140944] ? submit_bio_noacct_nocheck+0x272/0x360
[ 415.140946] submit_bio_noacct_nocheck+0x272/0x360
[ 415.140951] ext4_read_bh_lock+0x3e/0x60 [ext4]
[ 415.140995] ext4_block_write_begin+0x396/0x650 [ext4]
[ 415.141018] ? __pfx_ext4_da_get_block_prep+0x10/0x10 [ext4]
[ 415.141038] ext4_da_write_begin+0x1c4/0x350 [ext4]
[ 415.141060] generic_perform_write+0x14e/0x2c0
[ 415.141065] ext4_buffered_write_iter+0x6b/0x120 [ext4]
[ 415.141083] vfs_write+0x2ca/0x570
[ 415.141087] ksys_write+0x76/0xf0
[ 415.141089] do_syscall_64+0x99/0x1490
[ 415.141093] ? rcu_is_watching+0x12/0x60
[ 415.141095] ? finish_task_switch.isra.0+0xdf/0x3d0
[ 415.141097] ? rcu_is_watching+0x12/0x60
[ 415.141098] ? lock_release+0x1f0/0x2a0
[ 415.141100] ? rcu_is_watching+0x12/0x60
[ 415.141101] ? finish_task_switch.isra.0+0xe4/0x3d0
[ 415.141103] ? rcu_is_watching+0x12/0x60
[ 415.141104] ? __schedule+0xb34/0x1300
[ 415.141106] ? hrtimer_try_to_cancel+0x1d/0x170
[ 415.141109] ? do_nanosleep+0x8b/0x160
[ 415.141111] ? hrtimer_nanosleep+0x89/0x100
[ 415.141114] ? __pfx_hrtimer_wakeup+0x10/0x10
[ 415.141116] ? xfd_validate_state+0x26/0x90
[ 415.141118] ? rcu_is_watching+0x12/0x60
[ 415.141120] ? do_syscall_64+0x1e0/0x1490
[ 415.141121] ? do_syscall_64+0x1e0/0x1490
[ 415.141123] ? rcu_is_watching+0x12/0x60
[ 415.141124] ? do_syscall_64+0x1e0/0x1490
[ 415.141125] ? do_syscall_64+0x1e0/0x1490
[ 415.141127] ? irqentry_exit+0x140/0x7e0
[ 415.141129] entry_SYSCALL_64_after_hwframe+0x76/0x7e
get_cpu() disables preemption while the spinlock hv_ringbuffer_write is
using is converted to an rt-mutex under PREEMPT_RT.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---
This is likely just the tip of an iceberg, see specifically [1], but if
you never start addressing it, it will continue to crash ships, even if
those are only on test cruises (we are fully aware that Hyper-V provides
no RT guarantees for guests). A pragmatic alternative to that would be a
simple
config HYPERV
depends on !PREEMPT_RT
Please share your thoughts if this fix is worth it, or if we should
better stop looking at the next splats that show up after it. We are
currently considering to thread some of the hv platform IRQs under
PREEMPT_RT as potential next step.
TIA!
[1] https://lore.kernel.org/all/20230809-b4-rt_preempt-fix-v1-0-7283bbdc8b14@gmail.com/
drivers/scsi/storvsc_drv.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
index b43d876747b7..68c837146b9e 100644
--- a/drivers/scsi/storvsc_drv.c
+++ b/drivers/scsi/storvsc_drv.c
@@ -1855,8 +1855,9 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
cmd_request->payload_sz = payload_sz;
/* Invokes the vsc to start an IO */
- ret = storvsc_do_io(dev, cmd_request, get_cpu());
- put_cpu();
+ migrate_disable();
+ ret = storvsc_do_io(dev, cmd_request, smp_processor_id());
+ migrate_enable();
if (ret)
scsi_dma_unmap(scmnd);
--
2.51.0
| null | null | null | [PATCH] scsi: storvsc: Fix scheduling while atomic on PREEMPT_RT | From: Jan Kiszka <jan.kiszka@siemens.com> Sent: Wednesday, February 4, 2026 10:38 PM
I just realized that the Hyper-V "Settings" UI for a VM shows COM1 and COM2
only for Gen1 VMs. I don't know why it's not shown for Gen2 VMs. The
Powershell command you found is what I have always used.
Michael | {
"author": "Michael Kelley <mhklinux@outlook.com>",
"date": "Thu, 5 Feb 2026 19:09:01 +0000",
"is_openbsd": false,
"thread_id": "898e9467-0c05-46b4-a3ed-518797b829c5@siemens.com.mbox.gz"
} |
lkml_critique | lkml | From: Jan Kiszka <jan.kiszka@siemens.com>
This resolves the follow splat and lock-up when running with PREEMPT_RT
enabled on Hyper-V:
[ 415.140818] BUG: scheduling while atomic: stress-ng-iomix/1048/0x00000002
[ 415.140822] INFO: lockdep is turned off.
[ 415.140823] Modules linked in: intel_rapl_msr intel_rapl_common intel_uncore_frequency_common intel_pmc_core pmt_telemetry pmt_discovery pmt_class intel_pmc_ssram_telemetry intel_vsec ghash_clmulni_intel aesni_intel rapl binfmt_misc nls_ascii nls_cp437 vfat fat snd_pcm hyperv_drm snd_timer drm_client_lib drm_shmem_helper snd sg soundcore drm_kms_helper pcspkr hv_balloon hv_utils evdev joydev drm configfs efi_pstore nfnetlink vsock_loopback vmw_vsock_virtio_transport_common hv_sock vmw_vsock_vmci_transport vsock vmw_vmci efivarfs autofs4 ext4 crc16 mbcache jbd2 sr_mod sd_mod cdrom hv_storvsc serio_raw hid_generic scsi_transport_fc hid_hyperv scsi_mod hid hv_netvsc hyperv_keyboard scsi_common
[ 415.140846] Preemption disabled at:
[ 415.140847] [<ffffffffc0656171>] storvsc_queuecommand+0x2e1/0xbe0 [hv_storvsc]
[ 415.140854] CPU: 8 UID: 0 PID: 1048 Comm: stress-ng-iomix Not tainted 6.19.0-rc7 #30 PREEMPT_{RT,(full)}
[ 415.140856] Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/04/2024
[ 415.140857] Call Trace:
[ 415.140861] <TASK>
[ 415.140861] ? storvsc_queuecommand+0x2e1/0xbe0 [hv_storvsc]
[ 415.140863] dump_stack_lvl+0x91/0xb0
[ 415.140870] __schedule_bug+0x9c/0xc0
[ 415.140875] __schedule+0xdf6/0x1300
[ 415.140877] ? rtlock_slowlock_locked+0x56c/0x1980
[ 415.140879] ? rcu_is_watching+0x12/0x60
[ 415.140883] schedule_rtlock+0x21/0x40
[ 415.140885] rtlock_slowlock_locked+0x502/0x1980
[ 415.140891] rt_spin_lock+0x89/0x1e0
[ 415.140893] hv_ringbuffer_write+0x87/0x2a0
[ 415.140899] vmbus_sendpacket_mpb_desc+0xb6/0xe0
[ 415.140900] ? rcu_is_watching+0x12/0x60
[ 415.140902] storvsc_queuecommand+0x669/0xbe0 [hv_storvsc]
[ 415.140904] ? HARDIRQ_verbose+0x10/0x10
[ 415.140908] ? __rq_qos_issue+0x28/0x40
[ 415.140911] scsi_queue_rq+0x760/0xd80 [scsi_mod]
[ 415.140926] __blk_mq_issue_directly+0x4a/0xc0
[ 415.140928] blk_mq_issue_direct+0x87/0x2b0
[ 415.140931] blk_mq_dispatch_queue_requests+0x120/0x440
[ 415.140933] blk_mq_flush_plug_list+0x7a/0x1a0
[ 415.140935] __blk_flush_plug+0xf4/0x150
[ 415.140940] __submit_bio+0x2b2/0x5c0
[ 415.140944] ? submit_bio_noacct_nocheck+0x272/0x360
[ 415.140946] submit_bio_noacct_nocheck+0x272/0x360
[ 415.140951] ext4_read_bh_lock+0x3e/0x60 [ext4]
[ 415.140995] ext4_block_write_begin+0x396/0x650 [ext4]
[ 415.141018] ? __pfx_ext4_da_get_block_prep+0x10/0x10 [ext4]
[ 415.141038] ext4_da_write_begin+0x1c4/0x350 [ext4]
[ 415.141060] generic_perform_write+0x14e/0x2c0
[ 415.141065] ext4_buffered_write_iter+0x6b/0x120 [ext4]
[ 415.141083] vfs_write+0x2ca/0x570
[ 415.141087] ksys_write+0x76/0xf0
[ 415.141089] do_syscall_64+0x99/0x1490
[ 415.141093] ? rcu_is_watching+0x12/0x60
[ 415.141095] ? finish_task_switch.isra.0+0xdf/0x3d0
[ 415.141097] ? rcu_is_watching+0x12/0x60
[ 415.141098] ? lock_release+0x1f0/0x2a0
[ 415.141100] ? rcu_is_watching+0x12/0x60
[ 415.141101] ? finish_task_switch.isra.0+0xe4/0x3d0
[ 415.141103] ? rcu_is_watching+0x12/0x60
[ 415.141104] ? __schedule+0xb34/0x1300
[ 415.141106] ? hrtimer_try_to_cancel+0x1d/0x170
[ 415.141109] ? do_nanosleep+0x8b/0x160
[ 415.141111] ? hrtimer_nanosleep+0x89/0x100
[ 415.141114] ? __pfx_hrtimer_wakeup+0x10/0x10
[ 415.141116] ? xfd_validate_state+0x26/0x90
[ 415.141118] ? rcu_is_watching+0x12/0x60
[ 415.141120] ? do_syscall_64+0x1e0/0x1490
[ 415.141121] ? do_syscall_64+0x1e0/0x1490
[ 415.141123] ? rcu_is_watching+0x12/0x60
[ 415.141124] ? do_syscall_64+0x1e0/0x1490
[ 415.141125] ? do_syscall_64+0x1e0/0x1490
[ 415.141127] ? irqentry_exit+0x140/0x7e0
[ 415.141129] entry_SYSCALL_64_after_hwframe+0x76/0x7e
get_cpu() disables preemption while the spinlock hv_ringbuffer_write is
using is converted to an rt-mutex under PREEMPT_RT.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---
This is likely just the tip of an iceberg, see specifically [1], but if
you never start addressing it, it will continue to crash ships, even if
those are only on test cruises (we are fully aware that Hyper-V provides
no RT guarantees for guests). A pragmatic alternative to that would be a
simple
config HYPERV
depends on !PREEMPT_RT
Please share your thoughts if this fix is worth it, or if we should
better stop looking at the next splats that show up after it. We are
currently considering to thread some of the hv platform IRQs under
PREEMPT_RT as potential next step.
TIA!
[1] https://lore.kernel.org/all/20230809-b4-rt_preempt-fix-v1-0-7283bbdc8b14@gmail.com/
drivers/scsi/storvsc_drv.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
index b43d876747b7..68c837146b9e 100644
--- a/drivers/scsi/storvsc_drv.c
+++ b/drivers/scsi/storvsc_drv.c
@@ -1855,8 +1855,9 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
cmd_request->payload_sz = payload_sz;
/* Invokes the vsc to start an IO */
- ret = storvsc_do_io(dev, cmd_request, get_cpu());
- put_cpu();
+ migrate_disable();
+ ret = storvsc_do_io(dev, cmd_request, smp_processor_id());
+ migrate_enable();
if (ret)
scsi_dma_unmap(scmnd);
--
2.51.0
| null | null | null | [PATCH] scsi: storvsc: Fix scheduling while atomic on PREEMPT_RT | From: Jan Kiszka <jan.kiszka@siemens.com> Sent: Thursday, January 29, 2026 6:31 AM
Reviewed-by: Michael Kelley <mhklinux@outlook.com>
Tested-by: Michael Kelley <mhklinux@outlook.com> | {
"author": "Michael Kelley <mhklinux@outlook.com>",
"date": "Tue, 17 Feb 2026 15:47:19 +0000",
"is_openbsd": false,
"thread_id": "898e9467-0c05-46b4-a3ed-518797b829c5@siemens.com.mbox.gz"
} |
lkml_critique | lkml | From: Jan Kiszka <jan.kiszka@siemens.com>
This resolves the follow splat and lock-up when running with PREEMPT_RT
enabled on Hyper-V:
[ 415.140818] BUG: scheduling while atomic: stress-ng-iomix/1048/0x00000002
[ 415.140822] INFO: lockdep is turned off.
[ 415.140823] Modules linked in: intel_rapl_msr intel_rapl_common intel_uncore_frequency_common intel_pmc_core pmt_telemetry pmt_discovery pmt_class intel_pmc_ssram_telemetry intel_vsec ghash_clmulni_intel aesni_intel rapl binfmt_misc nls_ascii nls_cp437 vfat fat snd_pcm hyperv_drm snd_timer drm_client_lib drm_shmem_helper snd sg soundcore drm_kms_helper pcspkr hv_balloon hv_utils evdev joydev drm configfs efi_pstore nfnetlink vsock_loopback vmw_vsock_virtio_transport_common hv_sock vmw_vsock_vmci_transport vsock vmw_vmci efivarfs autofs4 ext4 crc16 mbcache jbd2 sr_mod sd_mod cdrom hv_storvsc serio_raw hid_generic scsi_transport_fc hid_hyperv scsi_mod hid hv_netvsc hyperv_keyboard scsi_common
[ 415.140846] Preemption disabled at:
[ 415.140847] [<ffffffffc0656171>] storvsc_queuecommand+0x2e1/0xbe0 [hv_storvsc]
[ 415.140854] CPU: 8 UID: 0 PID: 1048 Comm: stress-ng-iomix Not tainted 6.19.0-rc7 #30 PREEMPT_{RT,(full)}
[ 415.140856] Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/04/2024
[ 415.140857] Call Trace:
[ 415.140861] <TASK>
[ 415.140861] ? storvsc_queuecommand+0x2e1/0xbe0 [hv_storvsc]
[ 415.140863] dump_stack_lvl+0x91/0xb0
[ 415.140870] __schedule_bug+0x9c/0xc0
[ 415.140875] __schedule+0xdf6/0x1300
[ 415.140877] ? rtlock_slowlock_locked+0x56c/0x1980
[ 415.140879] ? rcu_is_watching+0x12/0x60
[ 415.140883] schedule_rtlock+0x21/0x40
[ 415.140885] rtlock_slowlock_locked+0x502/0x1980
[ 415.140891] rt_spin_lock+0x89/0x1e0
[ 415.140893] hv_ringbuffer_write+0x87/0x2a0
[ 415.140899] vmbus_sendpacket_mpb_desc+0xb6/0xe0
[ 415.140900] ? rcu_is_watching+0x12/0x60
[ 415.140902] storvsc_queuecommand+0x669/0xbe0 [hv_storvsc]
[ 415.140904] ? HARDIRQ_verbose+0x10/0x10
[ 415.140908] ? __rq_qos_issue+0x28/0x40
[ 415.140911] scsi_queue_rq+0x760/0xd80 [scsi_mod]
[ 415.140926] __blk_mq_issue_directly+0x4a/0xc0
[ 415.140928] blk_mq_issue_direct+0x87/0x2b0
[ 415.140931] blk_mq_dispatch_queue_requests+0x120/0x440
[ 415.140933] blk_mq_flush_plug_list+0x7a/0x1a0
[ 415.140935] __blk_flush_plug+0xf4/0x150
[ 415.140940] __submit_bio+0x2b2/0x5c0
[ 415.140944] ? submit_bio_noacct_nocheck+0x272/0x360
[ 415.140946] submit_bio_noacct_nocheck+0x272/0x360
[ 415.140951] ext4_read_bh_lock+0x3e/0x60 [ext4]
[ 415.140995] ext4_block_write_begin+0x396/0x650 [ext4]
[ 415.141018] ? __pfx_ext4_da_get_block_prep+0x10/0x10 [ext4]
[ 415.141038] ext4_da_write_begin+0x1c4/0x350 [ext4]
[ 415.141060] generic_perform_write+0x14e/0x2c0
[ 415.141065] ext4_buffered_write_iter+0x6b/0x120 [ext4]
[ 415.141083] vfs_write+0x2ca/0x570
[ 415.141087] ksys_write+0x76/0xf0
[ 415.141089] do_syscall_64+0x99/0x1490
[ 415.141093] ? rcu_is_watching+0x12/0x60
[ 415.141095] ? finish_task_switch.isra.0+0xdf/0x3d0
[ 415.141097] ? rcu_is_watching+0x12/0x60
[ 415.141098] ? lock_release+0x1f0/0x2a0
[ 415.141100] ? rcu_is_watching+0x12/0x60
[ 415.141101] ? finish_task_switch.isra.0+0xe4/0x3d0
[ 415.141103] ? rcu_is_watching+0x12/0x60
[ 415.141104] ? __schedule+0xb34/0x1300
[ 415.141106] ? hrtimer_try_to_cancel+0x1d/0x170
[ 415.141109] ? do_nanosleep+0x8b/0x160
[ 415.141111] ? hrtimer_nanosleep+0x89/0x100
[ 415.141114] ? __pfx_hrtimer_wakeup+0x10/0x10
[ 415.141116] ? xfd_validate_state+0x26/0x90
[ 415.141118] ? rcu_is_watching+0x12/0x60
[ 415.141120] ? do_syscall_64+0x1e0/0x1490
[ 415.141121] ? do_syscall_64+0x1e0/0x1490
[ 415.141123] ? rcu_is_watching+0x12/0x60
[ 415.141124] ? do_syscall_64+0x1e0/0x1490
[ 415.141125] ? do_syscall_64+0x1e0/0x1490
[ 415.141127] ? irqentry_exit+0x140/0x7e0
[ 415.141129] entry_SYSCALL_64_after_hwframe+0x76/0x7e
get_cpu() disables preemption while the spinlock hv_ringbuffer_write is
using is converted to an rt-mutex under PREEMPT_RT.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---
This is likely just the tip of an iceberg, see specifically [1], but if
you never start addressing it, it will continue to crash ships, even if
those are only on test cruises (we are fully aware that Hyper-V provides
no RT guarantees for guests). A pragmatic alternative to that would be a
simple
config HYPERV
depends on !PREEMPT_RT
Please share your thoughts if this fix is worth it, or if we should
better stop looking at the next splats that show up after it. We are
currently considering to thread some of the hv platform IRQs under
PREEMPT_RT as potential next step.
TIA!
[1] https://lore.kernel.org/all/20230809-b4-rt_preempt-fix-v1-0-7283bbdc8b14@gmail.com/
drivers/scsi/storvsc_drv.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
index b43d876747b7..68c837146b9e 100644
--- a/drivers/scsi/storvsc_drv.c
+++ b/drivers/scsi/storvsc_drv.c
@@ -1855,8 +1855,9 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
cmd_request->payload_sz = payload_sz;
/* Invokes the vsc to start an IO */
- ret = storvsc_do_io(dev, cmd_request, get_cpu());
- put_cpu();
+ migrate_disable();
+ ret = storvsc_do_io(dev, cmd_request, smp_processor_id());
+ migrate_enable();
if (ret)
scsi_dma_unmap(scmnd);
--
2.51.0
| null | null | null | [PATCH] scsi: storvsc: Fix scheduling while atomic on PREEMPT_RT | On Thu, 29 Jan 2026 15:30:39 +0100, Jan Kiszka wrote:
Applied to 7.0/scsi-fixes, thanks!
[1/1] scsi: storvsc: Fix scheduling while atomic on PREEMPT_RT
https://git.kernel.org/mkp/scsi/c/57297736c082
--
Martin K. Petersen | {
"author": "\"Martin K. Petersen\" <martin.petersen@oracle.com>",
"date": "Tue, 24 Feb 2026 11:47:42 -0500",
"is_openbsd": false,
"thread_id": "898e9467-0c05-46b4-a3ed-518797b829c5@siemens.com.mbox.gz"
} |
lkml_critique | lkml | From: Jan Kiszka <jan.kiszka@siemens.com>
This resolves the follow splat and lock-up when running with PREEMPT_RT
enabled on Hyper-V:
[ 415.140818] BUG: scheduling while atomic: stress-ng-iomix/1048/0x00000002
[ 415.140822] INFO: lockdep is turned off.
[ 415.140823] Modules linked in: intel_rapl_msr intel_rapl_common intel_uncore_frequency_common intel_pmc_core pmt_telemetry pmt_discovery pmt_class intel_pmc_ssram_telemetry intel_vsec ghash_clmulni_intel aesni_intel rapl binfmt_misc nls_ascii nls_cp437 vfat fat snd_pcm hyperv_drm snd_timer drm_client_lib drm_shmem_helper snd sg soundcore drm_kms_helper pcspkr hv_balloon hv_utils evdev joydev drm configfs efi_pstore nfnetlink vsock_loopback vmw_vsock_virtio_transport_common hv_sock vmw_vsock_vmci_transport vsock vmw_vmci efivarfs autofs4 ext4 crc16 mbcache jbd2 sr_mod sd_mod cdrom hv_storvsc serio_raw hid_generic scsi_transport_fc hid_hyperv scsi_mod hid hv_netvsc hyperv_keyboard scsi_common
[ 415.140846] Preemption disabled at:
[ 415.140847] [<ffffffffc0656171>] storvsc_queuecommand+0x2e1/0xbe0 [hv_storvsc]
[ 415.140854] CPU: 8 UID: 0 PID: 1048 Comm: stress-ng-iomix Not tainted 6.19.0-rc7 #30 PREEMPT_{RT,(full)}
[ 415.140856] Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/04/2024
[ 415.140857] Call Trace:
[ 415.140861] <TASK>
[ 415.140861] ? storvsc_queuecommand+0x2e1/0xbe0 [hv_storvsc]
[ 415.140863] dump_stack_lvl+0x91/0xb0
[ 415.140870] __schedule_bug+0x9c/0xc0
[ 415.140875] __schedule+0xdf6/0x1300
[ 415.140877] ? rtlock_slowlock_locked+0x56c/0x1980
[ 415.140879] ? rcu_is_watching+0x12/0x60
[ 415.140883] schedule_rtlock+0x21/0x40
[ 415.140885] rtlock_slowlock_locked+0x502/0x1980
[ 415.140891] rt_spin_lock+0x89/0x1e0
[ 415.140893] hv_ringbuffer_write+0x87/0x2a0
[ 415.140899] vmbus_sendpacket_mpb_desc+0xb6/0xe0
[ 415.140900] ? rcu_is_watching+0x12/0x60
[ 415.140902] storvsc_queuecommand+0x669/0xbe0 [hv_storvsc]
[ 415.140904] ? HARDIRQ_verbose+0x10/0x10
[ 415.140908] ? __rq_qos_issue+0x28/0x40
[ 415.140911] scsi_queue_rq+0x760/0xd80 [scsi_mod]
[ 415.140926] __blk_mq_issue_directly+0x4a/0xc0
[ 415.140928] blk_mq_issue_direct+0x87/0x2b0
[ 415.140931] blk_mq_dispatch_queue_requests+0x120/0x440
[ 415.140933] blk_mq_flush_plug_list+0x7a/0x1a0
[ 415.140935] __blk_flush_plug+0xf4/0x150
[ 415.140940] __submit_bio+0x2b2/0x5c0
[ 415.140944] ? submit_bio_noacct_nocheck+0x272/0x360
[ 415.140946] submit_bio_noacct_nocheck+0x272/0x360
[ 415.140951] ext4_read_bh_lock+0x3e/0x60 [ext4]
[ 415.140995] ext4_block_write_begin+0x396/0x650 [ext4]
[ 415.141018] ? __pfx_ext4_da_get_block_prep+0x10/0x10 [ext4]
[ 415.141038] ext4_da_write_begin+0x1c4/0x350 [ext4]
[ 415.141060] generic_perform_write+0x14e/0x2c0
[ 415.141065] ext4_buffered_write_iter+0x6b/0x120 [ext4]
[ 415.141083] vfs_write+0x2ca/0x570
[ 415.141087] ksys_write+0x76/0xf0
[ 415.141089] do_syscall_64+0x99/0x1490
[ 415.141093] ? rcu_is_watching+0x12/0x60
[ 415.141095] ? finish_task_switch.isra.0+0xdf/0x3d0
[ 415.141097] ? rcu_is_watching+0x12/0x60
[ 415.141098] ? lock_release+0x1f0/0x2a0
[ 415.141100] ? rcu_is_watching+0x12/0x60
[ 415.141101] ? finish_task_switch.isra.0+0xe4/0x3d0
[ 415.141103] ? rcu_is_watching+0x12/0x60
[ 415.141104] ? __schedule+0xb34/0x1300
[ 415.141106] ? hrtimer_try_to_cancel+0x1d/0x170
[ 415.141109] ? do_nanosleep+0x8b/0x160
[ 415.141111] ? hrtimer_nanosleep+0x89/0x100
[ 415.141114] ? __pfx_hrtimer_wakeup+0x10/0x10
[ 415.141116] ? xfd_validate_state+0x26/0x90
[ 415.141118] ? rcu_is_watching+0x12/0x60
[ 415.141120] ? do_syscall_64+0x1e0/0x1490
[ 415.141121] ? do_syscall_64+0x1e0/0x1490
[ 415.141123] ? rcu_is_watching+0x12/0x60
[ 415.141124] ? do_syscall_64+0x1e0/0x1490
[ 415.141125] ? do_syscall_64+0x1e0/0x1490
[ 415.141127] ? irqentry_exit+0x140/0x7e0
[ 415.141129] entry_SYSCALL_64_after_hwframe+0x76/0x7e
get_cpu() disables preemption while the spinlock hv_ringbuffer_write is
using is converted to an rt-mutex under PREEMPT_RT.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---
This is likely just the tip of an iceberg, see specifically [1], but if
you never start addressing it, it will continue to crash ships, even if
those are only on test cruises (we are fully aware that Hyper-V provides
no RT guarantees for guests). A pragmatic alternative to that would be a
simple
config HYPERV
depends on !PREEMPT_RT
Please share your thoughts if this fix is worth it, or if we should
better stop looking at the next splats that show up after it. We are
currently considering to thread some of the hv platform IRQs under
PREEMPT_RT as potential next step.
TIA!
[1] https://lore.kernel.org/all/20230809-b4-rt_preempt-fix-v1-0-7283bbdc8b14@gmail.com/
drivers/scsi/storvsc_drv.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
index b43d876747b7..68c837146b9e 100644
--- a/drivers/scsi/storvsc_drv.c
+++ b/drivers/scsi/storvsc_drv.c
@@ -1855,8 +1855,9 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
cmd_request->payload_sz = payload_sz;
/* Invokes the vsc to start an IO */
- ret = storvsc_do_io(dev, cmd_request, get_cpu());
- put_cpu();
+ migrate_disable();
+ ret = storvsc_do_io(dev, cmd_request, smp_processor_id());
+ migrate_enable();
if (ret)
scsi_dma_unmap(scmnd);
--
2.51.0
| null | null | null | [PATCH] scsi: storvsc: Fix scheduling while atomic on PREEMPT_RT | On 24.02.26 17:47, Martin K. Petersen wrote:
Should it be here then already?
https://git.kernel.org/pub/scm/linux/kernel/git/mkp/scsi.git/log/?h=7.0/scsi-fixes
Sorry, just trying to understand the process.
Jan
--
Siemens AG, Foundational Technologies
Linux Expert Center | {
"author": "Jan Kiszka <jan.kiszka@siemens.com>",
"date": "Fri, 27 Feb 2026 16:55:07 +0100",
"is_openbsd": false,
"thread_id": "898e9467-0c05-46b4-a3ed-518797b829c5@siemens.com.mbox.gz"
} |
lkml_critique | lkml | From: Xianwei Zhao <xianwei.zhao@amlogic.com>
Add Amlogic DMA controller entry to MAINTAINERS to clarify
the maintainers.
Signed-off-by: Xianwei Zhao <xianwei.zhao@amlogic.com>
---
MAINTAINERS | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 5b11839cba9d..9b471d580b32 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1305,6 +1305,13 @@ F: Documentation/devicetree/bindings/perf/amlogic,g12-ddr-pmu.yaml
F: drivers/perf/amlogic/
F: include/soc/amlogic/
+AMLOGIC DMA DRIVER
+M: Xianwei Zhao <xianwei.zhao@amlogic.com>
+L: linux-amlogic@lists.infradead.org
+S: Maintained
+F: Documentation/devicetree/bindings/dma/amlogic,a9-dma.yaml
+F: drivers/dma/amlogic-dma.c
+
AMLOGIC ISP DRIVER
M: Keke Li <keke.li@amlogic.com>
L: linux-media@vger.kernel.org
--
2.52.0
| null | null | null | [PATCH v3 3/3] MAINTAINERS: Add an entry for Amlogic DMA driver | Add DMA driver and bindigns for the Amlogic SoCs.
Signed-off-by: Xianwei Zhao <xianwei.zhao@amlogic.com>
---
Changes in v3:
- Adjust the format of binding according to Frank's suggestion.
- Some code format modified according to Frank's suggestion.
- Support one prep_sg and one submit, drop multi prep_sg and one submit.
- Keep pre state when resume from pause status.
- Link to v2: https://lore.kernel.org/r/20260127-amlogic-dma-v2-0-4525d327d74d@amlogic.com
Changes in v2:
- Introduce what the DMA is used for in the A9 SoC.
- Some minor modifications were made according to Krzysztof's suggestions.
- Some modifications were made according to Neil's suggestions.
- Fix a build error.
- Link to v1: https://lore.kernel.org/r/20251216-amlogic-dma-v1-0-e289e57e96a7@amlogic.com
---
Xianwei Zhao (3):
dt-bindings: dma: Add Amlogic A9 SoC DMA
dma: amlogic: Add general DMA driver for A9
MAINTAINERS: Add an entry for Amlogic DMA driver
.../devicetree/bindings/dma/amlogic,a9-dma.yaml | 66 +++
MAINTAINERS | 7 +
drivers/dma/Kconfig | 9 +
drivers/dma/Makefile | 1 +
drivers/dma/amlogic-dma.c | 561 +++++++++++++++++++++
5 files changed, 644 insertions(+)
---
base-commit: 3c8a86ed002ab8fb287ee4ec92f0fd6ac5b291d2
change-id: 20251215-amlogic-dma-79477d5cd264
Best regards,
--
Xianwei Zhao <xianwei.zhao@amlogic.com> | {
"author": "Xianwei Zhao via B4 Relay <devnull+xianwei.zhao.amlogic.com@kernel.org>",
"date": "Fri, 06 Feb 2026 09:02:31 +0000",
"is_openbsd": false,
"thread_id": "20260206-amlogic-dma-v3-0-56fb9f59ed22@amlogic.com.mbox.gz"
} |
lkml_critique | lkml | From: Xianwei Zhao <xianwei.zhao@amlogic.com>
Add Amlogic DMA controller entry to MAINTAINERS to clarify
the maintainers.
Signed-off-by: Xianwei Zhao <xianwei.zhao@amlogic.com>
---
MAINTAINERS | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 5b11839cba9d..9b471d580b32 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1305,6 +1305,13 @@ F: Documentation/devicetree/bindings/perf/amlogic,g12-ddr-pmu.yaml
F: drivers/perf/amlogic/
F: include/soc/amlogic/
+AMLOGIC DMA DRIVER
+M: Xianwei Zhao <xianwei.zhao@amlogic.com>
+L: linux-amlogic@lists.infradead.org
+S: Maintained
+F: Documentation/devicetree/bindings/dma/amlogic,a9-dma.yaml
+F: drivers/dma/amlogic-dma.c
+
AMLOGIC ISP DRIVER
M: Keke Li <keke.li@amlogic.com>
L: linux-media@vger.kernel.org
--
2.52.0
| null | null | null | [PATCH v3 3/3] MAINTAINERS: Add an entry for Amlogic DMA driver | From: Xianwei Zhao <xianwei.zhao@amlogic.com>
Add documentation describing the Amlogic A9 SoC DMA.
Signed-off-by: Xianwei Zhao <xianwei.zhao@amlogic.com>
---
.../devicetree/bindings/dma/amlogic,a9-dma.yaml | 66 ++++++++++++++++++++++
1 file changed, 66 insertions(+)
diff --git a/Documentation/devicetree/bindings/dma/amlogic,a9-dma.yaml b/Documentation/devicetree/bindings/dma/amlogic,a9-dma.yaml
new file mode 100644
index 000000000000..3158d99a3195
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/amlogic,a9-dma.yaml
@@ -0,0 +1,66 @@
+# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/dma/amlogic,a9-dma.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Amlogic general DMA controller
+
+description:
+ This is a general-purpose peripheral DMA controller. It currently supports
+ major peripherals including I2C, I3C, PIO, and CAN-BUS. Transmit and receive
+ for the same peripheral use two separate channels, controlled by different
+ register sets. I2C and I3C transfer data in 1-byte units, while PIO and
+ CAN-BUS transfer data in 4-byte units. From the controller’s perspective,
+ there is no significant difference.
+
+maintainers:
+ - Xianwei Zhao <xianwei.zhao@amlogic.com>
+
+properties:
+ compatible:
+ const: amlogic,a9-dma
+
+ reg:
+ maxItems: 1
+
+ interrupts:
+ maxItems: 1
+
+ clocks:
+ maxItems: 1
+
+ clock-names:
+ const: sys
+
+ '#dma-cells':
+ const: 2
+
+ dma-channels:
+ $ref: /schemas/types.yaml#/definitions/uint32
+ maximum: 64
+
+required:
+ - compatible
+ - reg
+ - interrupts
+ - clocks
+ - '#dma-cells'
+ - dma-channels
+
+allOf:
+ - $ref: dma-controller.yaml#
+
+unevaluatedProperties: false
+
+examples:
+ - |
+ #include <dt-bindings/interrupt-controller/arm-gic.h>
+ dma-controller@fe400000{
+ compatible = "amlogic,a9-dma";
+ reg = <0xfe400000 0x4000>;
+ interrupts = <GIC_SPI 35 IRQ_TYPE_EDGE_RISING>;
+ clocks = <&clkc 45>;
+ #dma-cells = <2>;
+ dma-channels = <28>;
+ };
--
2.52.0 | {
"author": "Xianwei Zhao via B4 Relay <devnull+xianwei.zhao.amlogic.com@kernel.org>",
"date": "Fri, 06 Feb 2026 09:02:32 +0000",
"is_openbsd": false,
"thread_id": "20260206-amlogic-dma-v3-0-56fb9f59ed22@amlogic.com.mbox.gz"
} |
lkml_critique | lkml | From: Xianwei Zhao <xianwei.zhao@amlogic.com>
Add Amlogic DMA controller entry to MAINTAINERS to clarify
the maintainers.
Signed-off-by: Xianwei Zhao <xianwei.zhao@amlogic.com>
---
MAINTAINERS | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 5b11839cba9d..9b471d580b32 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1305,6 +1305,13 @@ F: Documentation/devicetree/bindings/perf/amlogic,g12-ddr-pmu.yaml
F: drivers/perf/amlogic/
F: include/soc/amlogic/
+AMLOGIC DMA DRIVER
+M: Xianwei Zhao <xianwei.zhao@amlogic.com>
+L: linux-amlogic@lists.infradead.org
+S: Maintained
+F: Documentation/devicetree/bindings/dma/amlogic,a9-dma.yaml
+F: drivers/dma/amlogic-dma.c
+
AMLOGIC ISP DRIVER
M: Keke Li <keke.li@amlogic.com>
L: linux-media@vger.kernel.org
--
2.52.0
| null | null | null | [PATCH v3 3/3] MAINTAINERS: Add an entry for Amlogic DMA driver | From: Xianwei Zhao <xianwei.zhao@amlogic.com>
Amlogic A9 SoCs include a general-purpose DMA controller that can be used
by multiple peripherals, such as I2C PIO and I3C. Each peripheral group
is associated with a dedicated DMA channel in hardware.
Signed-off-by: Xianwei Zhao <xianwei.zhao@amlogic.com>
---
drivers/dma/Kconfig | 9 +
drivers/dma/Makefile | 1 +
drivers/dma/amlogic-dma.c | 561 ++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 571 insertions(+)
diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index 66cda7cc9f7a..8d4578513acf 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -85,6 +85,15 @@ config AMCC_PPC440SPE_ADMA
help
Enable support for the AMCC PPC440SPe RAID engines.
+config AMLOGIC_DMA
+ tristate "Amlogic general DMA support"
+ depends on ARCH_MESON || COMPILE_TEST
+ select DMA_ENGINE
+ select REGMAP_MMIO
+ help
+ Enable support for the Amlogic general DMA engines. THis DMA
+ controller is used some Amlogic SoCs, such as A9.
+
config APPLE_ADMAC
tristate "Apple ADMAC support"
depends on ARCH_APPLE || COMPILE_TEST
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index a54d7688392b..fc28dade5b69 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -16,6 +16,7 @@ obj-$(CONFIG_DMATEST) += dmatest.o
obj-$(CONFIG_ALTERA_MSGDMA) += altera-msgdma.o
obj-$(CONFIG_AMBA_PL08X) += amba-pl08x.o
obj-$(CONFIG_AMCC_PPC440SPE_ADMA) += ppc4xx/
+obj-$(CONFIG_AMLOGIC_DMA) += amlogic-dma.o
obj-$(CONFIG_APPLE_ADMAC) += apple-admac.o
obj-$(CONFIG_ARM_DMA350) += arm-dma350.o
obj-$(CONFIG_AT_HDMAC) += at_hdmac.o
diff --git a/drivers/dma/amlogic-dma.c b/drivers/dma/amlogic-dma.c
new file mode 100644
index 000000000000..cbecbde7857b
--- /dev/null
+++ b/drivers/dma/amlogic-dma.c
@@ -0,0 +1,561 @@
+// SPDX-License-Identifier: (GPL-2.0-only OR MIT)
+/*
+ * Copyright (C) 2025 Amlogic, Inc. All rights reserved
+ * Author: Xianwei Zhao <xianwei.zhao@amlogic.com>
+ */
+
+#include <asm/irq.h>
+#include <linux/bitfield.h>
+#include <linux/clk.h>
+#include <linux/device.h>
+#include <linux/dma-mapping.h>
+#include <linux/dmaengine.h>
+#include <linux/interrupt.h>
+#include <linux/init.h>
+#include <linux/list.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_dma.h>
+#include <linux/platform_device.h>
+#include <linux/regmap.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+
+#include "dmaengine.h"
+
+#define RCH_REG_BASE 0x0
+#define WCH_REG_BASE 0x2000
+/*
+ * Each rch (read from memory) REG offset Rch_offset 0x0 each channel total 0x40
+ * rch addr = DMA_base + Rch_offset+ chan_id * 0x40 + reg_offset
+ */
+#define RCH_READY 0x0
+#define RCH_STATUS 0x4
+#define RCH_CFG 0x8
+#define CFG_CLEAR BIT(25)
+#define CFG_PAUSE BIT(26)
+#define CFG_ENABLE BIT(27)
+#define CFG_DONE BIT(28)
+#define RCH_ADDR 0xc
+#define RCH_LEN 0x10
+#define RCH_RD_LEN 0x14
+#define RCH_PRT 0x18
+#define RCH_SYCN_STAT 0x1c
+#define RCH_ADDR_LOW 0x20
+#define RCH_ADDR_HIGH 0x24
+/* if work on 64, it work with RCH_PRT */
+#define RCH_PTR_HIGH 0x28
+
+/*
+ * Each wch (write to memory) REG offset Wch_offset 0x2000 each channel total 0x40
+ * wch addr = DMA_base + Wch_offset+ chan_id * 0x40 + reg_offset
+ */
+#define WCH_READY 0x0
+#define WCH_TOTAL_LEN 0x4
+#define WCH_CFG 0x8
+#define WCH_ADDR 0xc
+#define WCH_LEN 0x10
+#define WCH_RD_LEN 0x14
+#define WCH_PRT 0x18
+#define WCH_CMD_CNT 0x1c
+#define WCH_ADDR_LOW 0x20
+#define WCH_ADDR_HIGH 0x24
+/* if work on 64, it work with RCH_PRT */
+#define WCH_PTR_HIGH 0x28
+
+/* DMA controller reg */
+#define RCH_INT_MASK 0x1000
+#define WCH_INT_MASK 0x1004
+#define CLEAR_W_BATCH 0x1014
+#define CLEAR_RCH 0x1024
+#define CLEAR_WCH 0x1028
+#define RCH_ACTIVE 0x1038
+#define WCH_ACTIVE 0x103c
+#define RCH_DONE 0x104c
+#define WCH_DONE 0x1050
+#define RCH_ERR 0x1060
+#define RCH_LEN_ERR 0x1064
+#define WCH_ERR 0x1068
+#define DMA_BATCH_END 0x1078
+#define WCH_EOC_DONE 0x1088
+#define WDMA_RESP_ERR 0x1098
+#define UPT_PKT_SYNC 0x10a8
+#define RCHN_CFG 0x10ac
+#define WCHN_CFG 0x10b0
+#define MEM_PD_CFG 0x10b4
+#define MEM_BUS_CFG 0x10b8
+#define DMA_GMV_CFG 0x10bc
+#define DMA_GMR_CFG 0x10c0
+
+#define AML_DMA_TYPE_TX 0
+#define AML_DMA_TYPE_RX 1
+#define DMA_MAX_LINK 8
+#define MAX_CHAN_ID 32
+#define SG_MAX_LEN GENMASK(26, 0)
+
+struct aml_dma_sg_link {
+#define LINK_LEN GENMASK(26, 0)
+#define LINK_IRQ BIT(27)
+#define LINK_EOC BIT(28)
+#define LINK_LOOP BIT(29)
+#define LINK_ERR BIT(30)
+#define LINK_OWNER BIT(31)
+ u32 ctl;
+ u64 address;
+ u32 revered;
+} __packed;
+
+struct aml_dma_chan {
+ struct dma_chan chan;
+ struct dma_async_tx_descriptor desc;
+ struct aml_dma_dev *aml_dma;
+ struct aml_dma_sg_link *sg_link;
+ dma_addr_t sg_link_phys;
+ int sg_link_cnt;
+ int data_len;
+ enum dma_status pre_status;
+ enum dma_status status;
+ enum dma_transfer_direction direction;
+ int chan_id;
+ /* reg_base (direction + chan_id) */
+ int reg_offs;
+};
+
+struct aml_dma_dev {
+ struct dma_device dma_device;
+ void __iomem *base;
+ struct regmap *regmap;
+ struct clk *clk;
+ int irq;
+ struct platform_device *pdev;
+ struct aml_dma_chan *aml_rch[MAX_CHAN_ID];
+ struct aml_dma_chan *aml_wch[MAX_CHAN_ID];
+ unsigned int chan_nr;
+ unsigned int chan_used;
+ struct aml_dma_chan aml_chans[]__counted_by(chan_nr);
+};
+
+static struct aml_dma_chan *to_aml_dma_chan(struct dma_chan *chan)
+{
+ return container_of(chan, struct aml_dma_chan, chan);
+}
+
+static dma_cookie_t aml_dma_tx_submit(struct dma_async_tx_descriptor *tx)
+{
+ return dma_cookie_assign(tx);
+}
+
+static int aml_dma_alloc_chan_resources(struct dma_chan *chan)
+{
+ struct aml_dma_chan *aml_chan = to_aml_dma_chan(chan);
+ struct aml_dma_dev *aml_dma = aml_chan->aml_dma;
+ size_t size = size_mul(sizeof(struct aml_dma_sg_link), DMA_MAX_LINK);
+
+ aml_chan->sg_link = dma_alloc_coherent(aml_dma->dma_device.dev, size,
+ &aml_chan->sg_link_phys, GFP_KERNEL);
+ if (!aml_chan->sg_link)
+ return -ENOMEM;
+
+ /* offset is the same RCH_CFG and WCH_CFG */
+ regmap_update_bits(aml_dma->regmap, aml_chan->reg_offs + RCH_CFG, CFG_CLEAR, CFG_CLEAR);
+ aml_chan->status = DMA_COMPLETE;
+ dma_async_tx_descriptor_init(&aml_chan->desc, chan);
+ aml_chan->desc.tx_submit = aml_dma_tx_submit;
+ regmap_update_bits(aml_dma->regmap, aml_chan->reg_offs + RCH_CFG, CFG_CLEAR, 0);
+
+ return 0;
+}
+
+static void aml_dma_free_chan_resources(struct dma_chan *chan)
+{
+ struct aml_dma_chan *aml_chan = to_aml_dma_chan(chan);
+ struct aml_dma_dev *aml_dma = aml_chan->aml_dma;
+
+ aml_chan->status = DMA_COMPLETE;
+ dma_free_coherent(aml_dma->dma_device.dev,
+ sizeof(struct aml_dma_sg_link) * DMA_MAX_LINK,
+ aml_chan->sg_link, aml_chan->sg_link_phys);
+}
+
+/* DMA transfer state update how many data reside it */
+static enum dma_status aml_dma_tx_status(struct dma_chan *chan,
+ dma_cookie_t cookie,
+ struct dma_tx_state *txstate)
+{
+ struct aml_dma_chan *aml_chan = to_aml_dma_chan(chan);
+ struct aml_dma_dev *aml_dma = aml_chan->aml_dma;
+ u32 residue, done;
+
+ regmap_read(aml_dma->regmap, aml_chan->reg_offs + RCH_RD_LEN, &done);
+ residue = aml_chan->data_len - done;
+ dma_set_tx_state(txstate, chan->completed_cookie, chan->cookie,
+ residue);
+
+ return aml_chan->status;
+}
+
+static struct dma_async_tx_descriptor *aml_dma_prep_slave_sg
+ (struct dma_chan *chan, struct scatterlist *sgl,
+ unsigned int sg_len, enum dma_transfer_direction direction,
+ unsigned long flags, void *context)
+{
+ struct aml_dma_chan *aml_chan = to_aml_dma_chan(chan);
+ struct aml_dma_dev *aml_dma = aml_chan->aml_dma;
+ struct aml_dma_sg_link *sg_link;
+ struct scatterlist *sg;
+ int idx = 0;
+ u32 reg, chan_id;
+ u32 i;
+
+ if (aml_chan->direction != direction) {
+ dev_err(aml_dma->dma_device.dev, "direction not support\n");
+ return NULL;
+ }
+
+ switch (aml_chan->status) {
+ case DMA_IN_PROGRESS:
+ dev_err(aml_dma->dma_device.dev, "not support multi tx_desciptor\n");
+ return NULL;
+
+ case DMA_COMPLETE:
+ aml_chan->data_len = 0;
+ chan_id = aml_chan->chan_id;
+ reg = (direction == DMA_DEV_TO_MEM) ? WCH_INT_MASK : RCH_INT_MASK;
+ regmap_update_bits(aml_dma->regmap, reg, BIT(chan_id), BIT(chan_id));
+
+ break;
+ default:
+ dev_err(aml_dma->dma_device.dev, "status error\n");
+ return NULL;
+ }
+
+ if (sg_len > DMA_MAX_LINK) {
+ dev_err(aml_dma->dma_device.dev,
+ "maximum number of sg exceeded: %d > %d\n",
+ sg_len, DMA_MAX_LINK);
+ aml_chan->status = DMA_ERROR;
+ return NULL;
+ }
+
+ aml_chan->status = DMA_IN_PROGRESS;
+
+ for_each_sg(sgl, sg, sg_len, i) {
+ if (sg_dma_len(sg) > SG_MAX_LEN) {
+ dev_err(aml_dma->dma_device.dev,
+ "maximum bytes exceeded: %u > %lu\n",
+ sg_dma_len(sg), SG_MAX_LEN);
+ aml_chan->status = DMA_ERROR;
+ return NULL;
+ }
+ sg_link = &aml_chan->sg_link[idx++];
+ /* set dma address and len to sglink*/
+ sg_link->address = sg->dma_address;
+ sg_link->ctl = FIELD_PREP(LINK_LEN, sg_dma_len(sg));
+
+ aml_chan->data_len += sg_dma_len(sg);
+ }
+ aml_chan->sg_link_cnt = idx;
+
+ return &aml_chan->desc;
+}
+
+static int aml_dma_pause_chan(struct dma_chan *chan)
+{
+ struct aml_dma_chan *aml_chan = to_aml_dma_chan(chan);
+ struct aml_dma_dev *aml_dma = aml_chan->aml_dma;
+
+ regmap_update_bits(aml_dma->regmap, aml_chan->reg_offs + RCH_CFG, CFG_PAUSE, CFG_PAUSE);
+ aml_chan->pre_status = aml_chan->status;
+ aml_chan->status = DMA_PAUSED;
+
+ return 0;
+}
+
+static int aml_dma_resume_chan(struct dma_chan *chan)
+{
+ struct aml_dma_chan *aml_chan = to_aml_dma_chan(chan);
+ struct aml_dma_dev *aml_dma = aml_chan->aml_dma;
+
+ regmap_update_bits(aml_dma->regmap, aml_chan->reg_offs + RCH_CFG, CFG_PAUSE, 0);
+ aml_chan->status = aml_chan->pre_status;
+
+ return 0;
+}
+
+static int aml_dma_terminate_all(struct dma_chan *chan)
+{
+ struct aml_dma_chan *aml_chan = to_aml_dma_chan(chan);
+ struct aml_dma_dev *aml_dma = aml_chan->aml_dma;
+ int chan_id = aml_chan->chan_id;
+
+ aml_dma_pause_chan(chan);
+ regmap_update_bits(aml_dma->regmap, aml_chan->reg_offs + RCH_CFG, CFG_CLEAR, CFG_CLEAR);
+
+ if (aml_chan->direction == DMA_MEM_TO_DEV)
+ regmap_update_bits(aml_dma->regmap, RCH_INT_MASK, BIT(chan_id), BIT(chan_id));
+ else if (aml_chan->direction == DMA_DEV_TO_MEM)
+ regmap_update_bits(aml_dma->regmap, WCH_INT_MASK, BIT(chan_id), BIT(chan_id));
+
+ aml_chan->status = DMA_COMPLETE;
+
+ return 0;
+}
+
+static void aml_dma_enable_chan(struct dma_chan *chan)
+{
+ struct aml_dma_chan *aml_chan = to_aml_dma_chan(chan);
+ struct aml_dma_dev *aml_dma = aml_chan->aml_dma;
+ struct aml_dma_sg_link *sg_link;
+ int chan_id = aml_chan->chan_id;
+ int idx = aml_chan->sg_link_cnt - 1;
+
+ /* the last sg set eoc flag */
+ sg_link = &aml_chan->sg_link[idx];
+ sg_link->ctl |= LINK_EOC;
+ if (aml_chan->direction == DMA_MEM_TO_DEV) {
+ regmap_write(aml_dma->regmap, aml_chan->reg_offs + RCH_ADDR,
+ aml_chan->sg_link_phys);
+ regmap_write(aml_dma->regmap, aml_chan->reg_offs + RCH_LEN, aml_chan->data_len);
+ regmap_update_bits(aml_dma->regmap, RCH_INT_MASK, BIT(chan_id), 0);
+ /* for rch (tx) need set cfg 0 to trigger start */
+ regmap_write(aml_dma->regmap, aml_chan->reg_offs + RCH_CFG, 0);
+ } else if (aml_chan->direction == DMA_DEV_TO_MEM) {
+ regmap_write(aml_dma->regmap, aml_chan->reg_offs + WCH_ADDR,
+ aml_chan->sg_link_phys);
+ regmap_write(aml_dma->regmap, aml_chan->reg_offs + WCH_LEN, aml_chan->data_len);
+ regmap_update_bits(aml_dma->regmap, WCH_INT_MASK, BIT(chan_id), 0);
+ }
+}
+
+static irqreturn_t aml_dma_interrupt_handler(int irq, void *dev_id)
+{
+ struct aml_dma_dev *aml_dma = dev_id;
+ struct aml_dma_chan *aml_chan;
+ u32 done, eoc_done, err, err_l, end;
+ int i = 0;
+
+ /* deal with rch normal complete and error */
+ regmap_read(aml_dma->regmap, RCH_DONE, &done);
+ regmap_read(aml_dma->regmap, RCH_ERR, &err);
+ regmap_read(aml_dma->regmap, RCH_LEN_ERR, &err_l);
+ err = err | err_l;
+
+ done = done | err;
+
+ while (done) {
+ i = ffs(done) - 1;
+ aml_chan = aml_dma->aml_rch[i];
+ regmap_write(aml_dma->regmap, CLEAR_RCH, BIT(aml_chan->chan_id));
+ if (!aml_chan) {
+ dev_err(aml_dma->dma_device.dev, "idx %d rch not initialized\n", i);
+ done &= ~BIT(i);
+ continue;
+ }
+ aml_chan->status = (err & (1 << i)) ? DMA_ERROR : DMA_COMPLETE;
+ dma_cookie_complete(&aml_chan->desc);
+ dmaengine_desc_get_callback_invoke(&aml_chan->desc, NULL);
+ done &= ~BIT(i);
+ }
+
+ /* deal with wch normal complete and error */
+ regmap_read(aml_dma->regmap, DMA_BATCH_END, &end);
+ if (end)
+ regmap_write(aml_dma->regmap, CLEAR_W_BATCH, end);
+
+ regmap_read(aml_dma->regmap, WCH_DONE, &done);
+ regmap_read(aml_dma->regmap, WCH_EOC_DONE, &eoc_done);
+ done = done | eoc_done;
+
+ regmap_read(aml_dma->regmap, WCH_ERR, &err);
+ regmap_read(aml_dma->regmap, WDMA_RESP_ERR, &err_l);
+ err = err | err_l;
+
+ done = done | err;
+ i = 0;
+ while (done) {
+ i = ffs(done) - 1;
+ aml_chan = aml_dma->aml_wch[i];
+ regmap_write(aml_dma->regmap, CLEAR_WCH, BIT(aml_chan->chan_id));
+ if (!aml_chan) {
+ dev_err(aml_dma->dma_device.dev, "idx %d wch not initialized\n", i);
+ done &= ~BIT(i);
+ continue;
+ }
+ aml_chan->status = (err & (1 << i)) ? DMA_ERROR : DMA_COMPLETE;
+ dma_cookie_complete(&aml_chan->desc);
+ dmaengine_desc_get_callback_invoke(&aml_chan->desc, NULL);
+ done &= ~BIT(i);
+ }
+
+ return IRQ_HANDLED;
+}
+
+static struct dma_chan *aml_of_dma_xlate(struct of_phandle_args *dma_spec, struct of_dma *ofdma)
+{
+ struct aml_dma_dev *aml_dma = (struct aml_dma_dev *)ofdma->of_dma_data;
+ struct aml_dma_chan *aml_chan = NULL;
+ u32 type;
+ u32 phy_chan_id;
+
+ if (dma_spec->args_count != 2)
+ return NULL;
+
+ type = dma_spec->args[0];
+ phy_chan_id = dma_spec->args[1];
+
+ if (phy_chan_id >= MAX_CHAN_ID)
+ return NULL;
+
+ if (type == AML_DMA_TYPE_TX) {
+ aml_chan = aml_dma->aml_rch[phy_chan_id];
+ if (!aml_chan) {
+ if (aml_dma->chan_used >= aml_dma->chan_nr) {
+ dev_err(aml_dma->dma_device.dev, "some dma clients err used\n");
+ return NULL;
+ }
+ aml_chan = &aml_dma->aml_chans[aml_dma->chan_used];
+ aml_dma->chan_used++;
+ aml_chan->direction = DMA_MEM_TO_DEV;
+ aml_chan->chan_id = phy_chan_id;
+ aml_chan->reg_offs = RCH_REG_BASE + 0x40 * aml_chan->chan_id;
+ aml_dma->aml_rch[phy_chan_id] = aml_chan;
+ }
+ } else if (type == AML_DMA_TYPE_RX) {
+ aml_chan = aml_dma->aml_wch[phy_chan_id];
+ if (!aml_chan) {
+ if (aml_dma->chan_used >= aml_dma->chan_nr) {
+ dev_err(aml_dma->dma_device.dev, "some dma clients err used\n");
+ return NULL;
+ }
+ aml_chan = &aml_dma->aml_chans[aml_dma->chan_used];
+ aml_dma->chan_used++;
+ aml_chan->direction = DMA_DEV_TO_MEM;
+ aml_chan->chan_id = phy_chan_id;
+ aml_chan->reg_offs = WCH_REG_BASE + 0x40 * aml_chan->chan_id;
+ aml_dma->aml_wch[phy_chan_id] = aml_chan;
+ }
+ } else {
+ dev_err(aml_dma->dma_device.dev, "type %d not supported\n", type);
+ return NULL;
+ }
+
+ return dma_get_slave_channel(&aml_chan->chan);
+}
+
+static int aml_dma_probe(struct platform_device *pdev)
+{
+ struct device_node *np = pdev->dev.of_node;
+ struct dma_device *dma_dev;
+ struct aml_dma_dev *aml_dma;
+ int ret, i, len;
+ u32 chan_nr;
+
+ const struct regmap_config aml_regmap_config = {
+ .reg_bits = 32,
+ .val_bits = 32,
+ .reg_stride = 4,
+ .max_register = 0x3000,
+ };
+
+ ret = of_property_read_u32(np, "dma-channels", &chan_nr);
+ if (ret)
+ return dev_err_probe(&pdev->dev, ret, "failed to read dma-channels\n");
+
+ len = sizeof(*aml_dma) + sizeof(struct aml_dma_chan) * chan_nr;
+ aml_dma = devm_kzalloc(&pdev->dev, len, GFP_KERNEL);
+ if (!aml_dma)
+ return -ENOMEM;
+
+ aml_dma->chan_nr = chan_nr;
+
+ aml_dma->base = devm_platform_ioremap_resource(pdev, 0);
+ if (IS_ERR(aml_dma->base))
+ return PTR_ERR(aml_dma->base);
+
+ aml_dma->regmap = devm_regmap_init_mmio(&pdev->dev, aml_dma->base,
+ &aml_regmap_config);
+ if (IS_ERR_OR_NULL(aml_dma->regmap))
+ return PTR_ERR(aml_dma->regmap);
+
+ aml_dma->clk = devm_clk_get_enabled(&pdev->dev, NULL);
+ if (IS_ERR(aml_dma->clk))
+ return PTR_ERR(aml_dma->clk);
+
+ aml_dma->irq = platform_get_irq(pdev, 0);
+
+ aml_dma->pdev = pdev;
+ aml_dma->dma_device.dev = &pdev->dev;
+
+ dma_dev = &aml_dma->dma_device;
+ INIT_LIST_HEAD(&dma_dev->channels);
+
+ /* Initialize channel parameters */
+ for (i = 0; i < chan_nr; i++) {
+ struct aml_dma_chan *aml_chan = &aml_dma->aml_chans[i];
+
+ aml_chan->aml_dma = aml_dma;
+ aml_chan->chan.device = &aml_dma->dma_device;
+ dma_cookie_init(&aml_chan->chan);
+
+ /* Add the channel to aml_chan list */
+ list_add_tail(&aml_chan->chan.device_node,
+ &aml_dma->dma_device.channels);
+ }
+ aml_dma->chan_used = 0;
+
+ dma_set_max_seg_size(dma_dev->dev, SG_MAX_LEN);
+
+ dma_cap_set(DMA_SLAVE, dma_dev->cap_mask);
+ dma_dev->device_alloc_chan_resources = aml_dma_alloc_chan_resources;
+ dma_dev->device_free_chan_resources = aml_dma_free_chan_resources;
+ dma_dev->device_tx_status = aml_dma_tx_status;
+ dma_dev->device_prep_slave_sg = aml_dma_prep_slave_sg;
+
+ dma_dev->device_pause = aml_dma_pause_chan;
+ dma_dev->device_resume = aml_dma_resume_chan;
+ dma_dev->device_terminate_all = aml_dma_terminate_all;
+ dma_dev->device_issue_pending = aml_dma_enable_chan;
+ /* PIO 4 bytes and I2C 1 byte */
+ dma_dev->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES | DMA_SLAVE_BUSWIDTH_1_BYTE);
+ dma_dev->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+ dma_dev->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
+
+ ret = dmaenginem_async_device_register(dma_dev);
+ if (ret)
+ return dev_err_probe(&pdev->dev, ret, "failed to register dmaenginem\n");
+
+ ret = of_dma_controller_register(np, aml_of_dma_xlate, aml_dma);
+ if (ret)
+ return ret;
+
+ regmap_write(aml_dma->regmap, RCH_INT_MASK, 0xffffffff);
+ regmap_write(aml_dma->regmap, WCH_INT_MASK, 0xffffffff);
+
+ ret = devm_request_irq(&pdev->dev, aml_dma->irq, aml_dma_interrupt_handler,
+ IRQF_SHARED, dev_name(&pdev->dev), aml_dma);
+ if (ret)
+ return dev_err_probe(&pdev->dev, ret, "failed to reqest_irq\n");
+
+ return 0;
+}
+
+static const struct of_device_id aml_dma_ids[] = {
+ { .compatible = "amlogic,a9-dma", },
+ {},
+};
+MODULE_DEVICE_TABLE(of, aml_dma_ids);
+
+static struct platform_driver aml_dma_driver = {
+ .probe = aml_dma_probe,
+ .driver = {
+ .name = "aml-dma",
+ .of_match_table = aml_dma_ids,
+ },
+};
+
+module_platform_driver(aml_dma_driver);
+
+MODULE_DESCRIPTION("GENERAL DMA driver for Amlogic");
+MODULE_AUTHOR("Xianwei Zhao <xianwei.zhao@amlogic.com>");
+MODULE_LICENSE("GPL");
--
2.52.0 | {
"author": "Xianwei Zhao via B4 Relay <devnull+xianwei.zhao.amlogic.com@kernel.org>",
"date": "Fri, 06 Feb 2026 09:02:33 +0000",
"is_openbsd": false,
"thread_id": "20260206-amlogic-dma-v3-0-56fb9f59ed22@amlogic.com.mbox.gz"
} |
lkml_critique | lkml | From: Xianwei Zhao <xianwei.zhao@amlogic.com>
Add Amlogic DMA controller entry to MAINTAINERS to clarify
the maintainers.
Signed-off-by: Xianwei Zhao <xianwei.zhao@amlogic.com>
---
MAINTAINERS | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 5b11839cba9d..9b471d580b32 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1305,6 +1305,13 @@ F: Documentation/devicetree/bindings/perf/amlogic,g12-ddr-pmu.yaml
F: drivers/perf/amlogic/
F: include/soc/amlogic/
+AMLOGIC DMA DRIVER
+M: Xianwei Zhao <xianwei.zhao@amlogic.com>
+L: linux-amlogic@lists.infradead.org
+S: Maintained
+F: Documentation/devicetree/bindings/dma/amlogic,a9-dma.yaml
+F: drivers/dma/amlogic-dma.c
+
AMLOGIC ISP DRIVER
M: Keke Li <keke.li@amlogic.com>
L: linux-media@vger.kernel.org
--
2.52.0
| null | null | null | [PATCH v3 3/3] MAINTAINERS: Add an entry for Amlogic DMA driver | Hi Xianwei,
kernel test robot noticed the following build warnings:
[auto build test WARNING on 3c8a86ed002ab8fb287ee4ec92f0fd6ac5b291d2]
url: https://github.com/intel-lab-lkp/linux/commits/Xianwei-Zhao-via-B4-Relay/dt-bindings-dma-Add-Amlogic-A9-SoC-DMA/20260206-170903
base: 3c8a86ed002ab8fb287ee4ec92f0fd6ac5b291d2
patch link: https://lore.kernel.org/r/20260206-amlogic-dma-v3-2-56fb9f59ed22%40amlogic.com
patch subject: [PATCH v3 2/3] dma: amlogic: Add general DMA driver for A9
config: i386-allmodconfig (https://download.01.org/0day-ci/archive/20260207/202602070253.hZ9PqUeB-lkp@intel.com/config)
compiler: gcc-14 (Debian 14.2.0-19) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260207/202602070253.hZ9PqUeB-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202602070253.hZ9PqUeB-lkp@intel.com/
All warnings (new ones prefixed by >>):
In file included from drivers/dma/amlogic-dma.c:7:
39 | extern void __handle_irq(struct irq_desc *desc, struct pt_regs *regs);
| ^~~~~~~
44 | void arch_trigger_cpumask_backtrace(const struct cpumask *mask,
| ^~~~~~~
vim +39 arch/x86/include/asm/irq.h
a782a7e46bb508 arch/x86/include/asm/irq.h Thomas Gleixner 2015-08-02 38
7c2a57364cae0f arch/x86/include/asm/irq.h Thomas Gleixner 2020-05-21 @39 extern void __handle_irq(struct irq_desc *desc, struct pt_regs *regs);
22067d4501bfb4 include/asm-x86/irq.h Thomas Gleixner 2008-05-02 40
d9112f43021554 arch/x86/include/asm/irq.h Thomas Gleixner 2009-08-20 41 extern void init_ISA_irqs(void);
d9112f43021554 arch/x86/include/asm/irq.h Thomas Gleixner 2009-08-20 42
b52e0a7c4e4100 arch/x86/include/asm/irq.h Michel Lespinasse 2013-06-06 43 #ifdef CONFIG_X86_LOCAL_APIC
9a01c3ed5cdb35 arch/x86/include/asm/irq.h Chris Metcalf 2016-10-07 @44 void arch_trigger_cpumask_backtrace(const struct cpumask *mask,
8d539b84f1e347 arch/x86/include/asm/irq.h Douglas Anderson 2023-08-04 45 int exclude_cpu);
89f579ce99f7e0 arch/x86/include/asm/irq.h Yi Wang 2018-11-22 46
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki | {
"author": "kernel test robot <lkp@intel.com>",
"date": "Sat, 7 Feb 2026 03:08:01 +0800",
"is_openbsd": false,
"thread_id": "20260206-amlogic-dma-v3-0-56fb9f59ed22@amlogic.com.mbox.gz"
} |
lkml_critique | lkml | From: Xianwei Zhao <xianwei.zhao@amlogic.com>
Add Amlogic DMA controller entry to MAINTAINERS to clarify
the maintainers.
Signed-off-by: Xianwei Zhao <xianwei.zhao@amlogic.com>
---
MAINTAINERS | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 5b11839cba9d..9b471d580b32 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1305,6 +1305,13 @@ F: Documentation/devicetree/bindings/perf/amlogic,g12-ddr-pmu.yaml
F: drivers/perf/amlogic/
F: include/soc/amlogic/
+AMLOGIC DMA DRIVER
+M: Xianwei Zhao <xianwei.zhao@amlogic.com>
+L: linux-amlogic@lists.infradead.org
+S: Maintained
+F: Documentation/devicetree/bindings/dma/amlogic,a9-dma.yaml
+F: drivers/dma/amlogic-dma.c
+
AMLOGIC ISP DRIVER
M: Keke Li <keke.li@amlogic.com>
L: linux-media@vger.kernel.org
--
2.52.0
| null | null | null | [PATCH v3 3/3] MAINTAINERS: Add an entry for Amlogic DMA driver | On Fri, Feb 06, 2026 at 09:02:32AM +0000, Xianwei Zhao wrote:
Needn't it, which is standard proptery.
Frank | {
"author": "Frank Li <Frank.li@nxp.com>",
"date": "Fri, 6 Feb 2026 14:33:42 -0500",
"is_openbsd": false,
"thread_id": "20260206-amlogic-dma-v3-0-56fb9f59ed22@amlogic.com.mbox.gz"
} |
lkml_critique | lkml | From: Xianwei Zhao <xianwei.zhao@amlogic.com>
Add Amlogic DMA controller entry to MAINTAINERS to clarify
the maintainers.
Signed-off-by: Xianwei Zhao <xianwei.zhao@amlogic.com>
---
MAINTAINERS | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 5b11839cba9d..9b471d580b32 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1305,6 +1305,13 @@ F: Documentation/devicetree/bindings/perf/amlogic,g12-ddr-pmu.yaml
F: drivers/perf/amlogic/
F: include/soc/amlogic/
+AMLOGIC DMA DRIVER
+M: Xianwei Zhao <xianwei.zhao@amlogic.com>
+L: linux-amlogic@lists.infradead.org
+S: Maintained
+F: Documentation/devicetree/bindings/dma/amlogic,a9-dma.yaml
+F: drivers/dma/amlogic-dma.c
+
AMLOGIC ISP DRIVER
M: Keke Li <keke.li@amlogic.com>
L: linux-media@vger.kernel.org
--
2.52.0
| null | null | null | [PATCH v3 3/3] MAINTAINERS: Add an entry for Amlogic DMA driver | On Fri, Feb 06, 2026 at 09:02:33AM +0000, Xianwei Zhao wrote:
Leave to Vinod Koul to do decide. This is not preferred implement to prep
tx descriptior.
why not split it and use mult sg_link to transfer it?
there are help functions sg_nents_for_dma()
This is DT ABI, should create header file in include/binding/dma
use struct_size
where call of_dma_controller_free() ?
Frank | {
"author": "Frank Li <Frank.li@nxp.com>",
"date": "Fri, 6 Feb 2026 14:48:09 -0500",
"is_openbsd": false,
"thread_id": "20260206-amlogic-dma-v3-0-56fb9f59ed22@amlogic.com.mbox.gz"
} |
lkml_critique | lkml | From: Xianwei Zhao <xianwei.zhao@amlogic.com>
Add Amlogic DMA controller entry to MAINTAINERS to clarify
the maintainers.
Signed-off-by: Xianwei Zhao <xianwei.zhao@amlogic.com>
---
MAINTAINERS | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 5b11839cba9d..9b471d580b32 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1305,6 +1305,13 @@ F: Documentation/devicetree/bindings/perf/amlogic,g12-ddr-pmu.yaml
F: drivers/perf/amlogic/
F: include/soc/amlogic/
+AMLOGIC DMA DRIVER
+M: Xianwei Zhao <xianwei.zhao@amlogic.com>
+L: linux-amlogic@lists.infradead.org
+S: Maintained
+F: Documentation/devicetree/bindings/dma/amlogic,a9-dma.yaml
+F: drivers/dma/amlogic-dma.c
+
AMLOGIC ISP DRIVER
M: Keke Li <keke.li@amlogic.com>
L: linux-media@vger.kernel.org
--
2.52.0
| null | null | null | [PATCH v3 3/3] MAINTAINERS: Add an entry for Amlogic DMA driver | Hi Xianwei,
kernel test robot noticed the following build warnings:
[auto build test WARNING on 3c8a86ed002ab8fb287ee4ec92f0fd6ac5b291d2]
url: https://github.com/intel-lab-lkp/linux/commits/Xianwei-Zhao-via-B4-Relay/dt-bindings-dma-Add-Amlogic-A9-SoC-DMA/20260206-170903
base: 3c8a86ed002ab8fb287ee4ec92f0fd6ac5b291d2
patch link: https://lore.kernel.org/r/20260206-amlogic-dma-v3-2-56fb9f59ed22%40amlogic.com
patch subject: [PATCH v3 2/3] dma: amlogic: Add general DMA driver for A9
config: x86_64-allyesconfig (https://download.01.org/0day-ci/archive/20260207/202602070404.wKMJf0YW-lkp@intel.com/config)
compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260207/202602070404.wKMJf0YW-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202602070404.wKMJf0YW-lkp@intel.com/
All warnings (new ones prefixed by >>):
In file included from drivers/dma/amlogic-dma.c:7:
39 | extern void __handle_irq(struct irq_desc *desc, struct pt_regs *regs);
| ^
44 | void arch_trigger_cpumask_backtrace(const struct cpumask *mask,
| ^
2 warnings generated.
vim +39 arch/x86/include/asm/irq.h
a782a7e46bb508 arch/x86/include/asm/irq.h Thomas Gleixner 2015-08-02 38
7c2a57364cae0f arch/x86/include/asm/irq.h Thomas Gleixner 2020-05-21 @39 extern void __handle_irq(struct irq_desc *desc, struct pt_regs *regs);
22067d4501bfb4 include/asm-x86/irq.h Thomas Gleixner 2008-05-02 40
d9112f43021554 arch/x86/include/asm/irq.h Thomas Gleixner 2009-08-20 41 extern void init_ISA_irqs(void);
d9112f43021554 arch/x86/include/asm/irq.h Thomas Gleixner 2009-08-20 42
b52e0a7c4e4100 arch/x86/include/asm/irq.h Michel Lespinasse 2013-06-06 43 #ifdef CONFIG_X86_LOCAL_APIC
9a01c3ed5cdb35 arch/x86/include/asm/irq.h Chris Metcalf 2016-10-07 @44 void arch_trigger_cpumask_backtrace(const struct cpumask *mask,
8d539b84f1e347 arch/x86/include/asm/irq.h Douglas Anderson 2023-08-04 45 int exclude_cpu);
89f579ce99f7e0 arch/x86/include/asm/irq.h Yi Wang 2018-11-22 46
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki | {
"author": "kernel test robot <lkp@intel.com>",
"date": "Sat, 7 Feb 2026 04:33:26 +0800",
"is_openbsd": false,
"thread_id": "20260206-amlogic-dma-v3-0-56fb9f59ed22@amlogic.com.mbox.gz"
} |
lkml_critique | lkml | From: Xianwei Zhao <xianwei.zhao@amlogic.com>
Add Amlogic DMA controller entry to MAINTAINERS to clarify
the maintainers.
Signed-off-by: Xianwei Zhao <xianwei.zhao@amlogic.com>
---
MAINTAINERS | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 5b11839cba9d..9b471d580b32 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1305,6 +1305,13 @@ F: Documentation/devicetree/bindings/perf/amlogic,g12-ddr-pmu.yaml
F: drivers/perf/amlogic/
F: include/soc/amlogic/
+AMLOGIC DMA DRIVER
+M: Xianwei Zhao <xianwei.zhao@amlogic.com>
+L: linux-amlogic@lists.infradead.org
+S: Maintained
+F: Documentation/devicetree/bindings/dma/amlogic,a9-dma.yaml
+F: drivers/dma/amlogic-dma.c
+
AMLOGIC ISP DRIVER
M: Keke Li <keke.li@amlogic.com>
L: linux-media@vger.kernel.org
--
2.52.0
| null | null | null | [PATCH v3 3/3] MAINTAINERS: Add an entry for Amlogic DMA driver | Hi Xianwei,
kernel test robot noticed the following build errors:
[auto build test ERROR on 3c8a86ed002ab8fb287ee4ec92f0fd6ac5b291d2]
url: https://github.com/intel-lab-lkp/linux/commits/Xianwei-Zhao-via-B4-Relay/dt-bindings-dma-Add-Amlogic-A9-SoC-DMA/20260206-170903
base: 3c8a86ed002ab8fb287ee4ec92f0fd6ac5b291d2
patch link: https://lore.kernel.org/r/20260206-amlogic-dma-v3-2-56fb9f59ed22%40amlogic.com
patch subject: [PATCH v3 2/3] dma: amlogic: Add general DMA driver for A9
config: mips-allyesconfig (https://download.01.org/0day-ci/archive/20260207/202602070410.F1U5kBFE-lkp@intel.com/config)
compiler: mips-linux-gcc (GCC) 15.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260207/202602070410.F1U5kBFE-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202602070410.F1U5kBFE-lkp@intel.com/
All errors (new ones prefixed by >>):
In file included from include/linux/thread_info.h:60,
from include/asm-generic/preempt.h:5,
from ./arch/mips/include/generated/asm/preempt.h:1,
from include/linux/preempt.h:79,
from include/linux/smp.h:116,
from arch/mips/include/asm/irq.h:13,
from drivers/dma/amlogic-dma.c:7:
arch/mips/include/asm/irq.h: In function 'on_irq_stack':
98 | #define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER)
| ^~~~~~~~~
arch/mips/include/asm/irq.h:19:41: note: in expansion of macro 'THREAD_SIZE'
19 | #define IRQ_STACK_SIZE THREAD_SIZE
| ^~~~~~~~~~~
arch/mips/include/asm/irq.h:41:36: note: in expansion of macro 'IRQ_STACK_SIZE'
41 | unsigned long high = low + IRQ_STACK_SIZE;
| ^~~~~~~~~~~~~~
arch/mips/include/asm/thread_info.h:98:22: note: each undeclared identifier is reported only once for each function it appears in
98 | #define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER)
| ^~~~~~~~~
arch/mips/include/asm/irq.h:19:41: note: in expansion of macro 'THREAD_SIZE'
19 | #define IRQ_STACK_SIZE THREAD_SIZE
| ^~~~~~~~~~~
arch/mips/include/asm/irq.h:41:36: note: in expansion of macro 'IRQ_STACK_SIZE'
41 | unsigned long high = low + IRQ_STACK_SIZE;
| ^~~~~~~~~~~~~~
vim +98 arch/mips/include/asm/thread_info.h
^1da177e4c3f41 include/asm-mips/thread_info.h Linus Torvalds 2005-04-16 97
^1da177e4c3f41 include/asm-mips/thread_info.h Linus Torvalds 2005-04-16 @98 #define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER)
^1da177e4c3f41 include/asm-mips/thread_info.h Linus Torvalds 2005-04-16 99 #define THREAD_MASK (THREAD_SIZE - 1UL)
^1da177e4c3f41 include/asm-mips/thread_info.h Linus Torvalds 2005-04-16 100
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki | {
"author": "kernel test robot <lkp@intel.com>",
"date": "Sat, 7 Feb 2026 04:54:53 +0800",
"is_openbsd": false,
"thread_id": "20260206-amlogic-dma-v3-0-56fb9f59ed22@amlogic.com.mbox.gz"
} |
lkml_critique | lkml | From: Xianwei Zhao <xianwei.zhao@amlogic.com>
Add Amlogic DMA controller entry to MAINTAINERS to clarify
the maintainers.
Signed-off-by: Xianwei Zhao <xianwei.zhao@amlogic.com>
---
MAINTAINERS | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 5b11839cba9d..9b471d580b32 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1305,6 +1305,13 @@ F: Documentation/devicetree/bindings/perf/amlogic,g12-ddr-pmu.yaml
F: drivers/perf/amlogic/
F: include/soc/amlogic/
+AMLOGIC DMA DRIVER
+M: Xianwei Zhao <xianwei.zhao@amlogic.com>
+L: linux-amlogic@lists.infradead.org
+S: Maintained
+F: Documentation/devicetree/bindings/dma/amlogic,a9-dma.yaml
+F: drivers/dma/amlogic-dma.c
+
AMLOGIC ISP DRIVER
M: Keke Li <keke.li@amlogic.com>
L: linux-media@vger.kernel.org
--
2.52.0
| null | null | null | [PATCH v3 3/3] MAINTAINERS: Add an entry for Amlogic DMA driver | Hi Xianwei Zhao,
On Fri, Feb 6, 2026 at 10:03 AM Xianwei Zhao via B4 Relay
<devnull+xianwei.zhao.amlogic.com@kernel.org> wrote:
[...]
I have not seen this way of writing two bits before.
Should this be:
BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) | BIT(DMA_SLAVE_BUSWIDTH_1_BYTE)
instead (similar to the line below)?
Best regards,
Martin | {
"author": "Martin Blumenstingl <martin.blumenstingl@googlemail.com>",
"date": "Mon, 9 Feb 2026 22:28:34 +0100",
"is_openbsd": false,
"thread_id": "20260206-amlogic-dma-v3-0-56fb9f59ed22@amlogic.com.mbox.gz"
} |
lkml_critique | lkml | From: Xianwei Zhao <xianwei.zhao@amlogic.com>
Add Amlogic DMA controller entry to MAINTAINERS to clarify
the maintainers.
Signed-off-by: Xianwei Zhao <xianwei.zhao@amlogic.com>
---
MAINTAINERS | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 5b11839cba9d..9b471d580b32 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1305,6 +1305,13 @@ F: Documentation/devicetree/bindings/perf/amlogic,g12-ddr-pmu.yaml
F: drivers/perf/amlogic/
F: include/soc/amlogic/
+AMLOGIC DMA DRIVER
+M: Xianwei Zhao <xianwei.zhao@amlogic.com>
+L: linux-amlogic@lists.infradead.org
+S: Maintained
+F: Documentation/devicetree/bindings/dma/amlogic,a9-dma.yaml
+F: drivers/dma/amlogic-dma.c
+
AMLOGIC ISP DRIVER
M: Keke Li <keke.li@amlogic.com>
L: linux-media@vger.kernel.org
--
2.52.0
| null | null | null | [PATCH v3 3/3] MAINTAINERS: Add an entry for Amlogic DMA driver | On Fri, Feb 06, 2026 at 09:02:33AM +0000, Xianwei Zhao via B4 Relay wrote:
subject should dmaegine: amlogic: ...
Frank | {
"author": "Frank Li <Frank.li@nxp.com>",
"date": "Fri, 27 Feb 2026 10:54:58 -0500",
"is_openbsd": false,
"thread_id": "20260206-amlogic-dma-v3-0-56fb9f59ed22@amlogic.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | From: Peter Zijlstra <peterz@infradead.org>
The nominal duration for an EEVDF task to run is until its deadline. At
which point the deadline is moved ahead and a new task selection is done.
Try and predict the time 'lost' to higher scheduling classes. Since this is
an estimate, the timer can be both early or late. In case it is early
task_tick_fair() will take the !need_resched() path and restarts the timer.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
Reviewed-by: Juri Lelli <juri.lelli@redhat.com>
---
kernel/sched/fair.c | 43 ++++++++++++++++++++++++++++---------------
1 file changed, 28 insertions(+), 15 deletions(-)
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6735,21 +6735,37 @@ static inline void sched_fair_update_sto
static void hrtick_start_fair(struct rq *rq, struct task_struct *p)
{
struct sched_entity *se = &p->se;
+ unsigned long scale = 1024;
+ unsigned long util = 0;
+ u64 vdelta;
+ u64 delta;
WARN_ON_ONCE(task_rq(p) != rq);
- if (rq->cfs.h_nr_queued > 1) {
- u64 ran = se->sum_exec_runtime - se->prev_sum_exec_runtime;
- u64 slice = se->slice;
- s64 delta = slice - ran;
-
- if (delta < 0) {
- if (task_current_donor(rq, p))
- resched_curr(rq);
- return;
- }
- hrtick_start(rq, delta);
+ if (rq->cfs.h_nr_queued <= 1)
+ return;
+
+ /*
+ * Compute time until virtual deadline
+ */
+ vdelta = se->deadline - se->vruntime;
+ if ((s64)vdelta < 0) {
+ if (task_current_donor(rq, p))
+ resched_curr(rq);
+ return;
}
+ delta = (se->load.weight * vdelta) / NICE_0_LOAD;
+
+ /*
+ * Correct for instantaneous load of other classes.
+ */
+ util += cpu_util_irq(rq);
+ if (util && util < 1024) {
+ scale *= 1024;
+ scale /= (1024 - util);
+ }
+
+ hrtick_start(rq, (scale * delta) / 1024);
}
/*
@@ -13365,11 +13381,8 @@ static void task_tick_fair(struct rq *rq
entity_tick(cfs_rq, se, queued);
}
- if (queued) {
- if (!need_resched())
- hrtick_start_fair(rq, curr);
+ if (queued)
return;
- }
if (static_branch_unlikely(&sched_numa_balancing))
task_tick_numa(rq, curr); | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:35:17 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | From: Peter Zijlstra (Intel) <peterz@infradead.org>
hrtick_update() was needed when the slice depended on nr_running, all that
code is gone. All that remains is starting the hrtick when nr_running
becomes more than 1.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
---
kernel/sched/fair.c | 12 ++++--------
kernel/sched/sched.h | 4 ++++
2 files changed, 8 insertions(+), 8 deletions(-)
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6769,9 +6769,7 @@ static void hrtick_start_fair(struct rq
}
/*
- * called from enqueue/dequeue and updates the hrtick when the
- * current task is from our class and nr_running is low enough
- * to matter.
+ * Called on enqueue to start the hrtick when h_nr_queued becomes more than 1.
*/
static void hrtick_update(struct rq *rq)
{
@@ -6780,6 +6778,9 @@ static void hrtick_update(struct rq *rq)
if (!hrtick_enabled_fair(rq) || donor->sched_class != &fair_sched_class)
return;
+ if (hrtick_active(rq))
+ return;
+
hrtick_start_fair(rq, donor);
}
#else /* !CONFIG_SCHED_HRTICK: */
@@ -7102,9 +7103,6 @@ static int dequeue_entities(struct rq *r
WARN_ON_ONCE(!task_sleep);
WARN_ON_ONCE(p->on_rq != 1);
- /* Fix-up what dequeue_task_fair() skipped */
- hrtick_update(rq);
-
/*
* Fix-up what block_task() skipped.
*
@@ -7138,8 +7136,6 @@ static bool dequeue_task_fair(struct rq
/*
* Must not reference @p after dequeue_entities(DEQUEUE_DELAYED).
*/
-
- hrtick_update(rq);
return true;
}
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3041,6 +3041,10 @@ static inline int hrtick_enabled_dl(stru
}
extern void hrtick_start(struct rq *rq, u64 delay);
+static inline bool hrtick_active(struct rq *rq)
+{
+ return hrtimer_active(&rq->hrtick_timer);
+}
#else /* !CONFIG_SCHED_HRTICK: */ | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:35:22 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | From: Peter Zijlstra (Intel) <peterz@infradead.org>
Since the tick causes hard preemption, the hrtick should too.
Letting the hrtick do lazy preemption completely defeats the purpose, since
it will then still be delayed until a old tick and be dependent on
CONFIG_HZ.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5530,7 +5530,7 @@ entity_tick(struct cfs_rq *cfs_rq, struc
* validating it and just reschedule.
*/
if (queued) {
- resched_curr_lazy(rq_of(cfs_rq));
+ resched_curr(rq_of(cfs_rq));
return;
}
#endif | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:35:27 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | The clock of the hrtick and deadline timers is known to be CLOCK_MONOTONIC.
No point in looking it up via hrtimer_cb_get_time().
Just use ktime_get() directly.
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
kernel/sched/core.c | 3 +--
kernel/sched/deadline.c | 2 +-
2 files changed, 2 insertions(+), 3 deletions(-)
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -925,7 +925,6 @@ static void __hrtick_start(void *arg)
*/
void hrtick_start(struct rq *rq, u64 delay)
{
- struct hrtimer *timer = &rq->hrtick_timer;
s64 delta;
/*
@@ -933,7 +932,7 @@ void hrtick_start(struct rq *rq, u64 del
* doesn't make sense and can cause timer DoS.
*/
delta = max_t(s64, delay, 10000LL);
- rq->hrtick_time = ktime_add_ns(hrtimer_cb_get_time(timer), delta);
+ rq->hrtick_time = ktime_add_ns(ktime_get(), delta);
if (rq == this_rq())
__hrtick_restart(rq);
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1097,7 +1097,7 @@ static int start_dl_timer(struct sched_d
act = ns_to_ktime(dl_next_period(dl_se));
}
- now = hrtimer_cb_get_time(timer);
+ now = ktime_get();
delta = ktime_to_ns(now) - rq_clock(rq);
act = ktime_add_ns(act, delta); | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:35:32 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | From: Peter Zijlstra <peterz@infradead.org>
Much like hrtimer_reprogram(), skip programming if the cpu_base is running
the hrtimer interrupt.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
Reviewed-by: Juri Lelli <juri.lelli@redhat.com>
Reviewed-by: Thomas Gleixner <tglx@kernel.org>
---
kernel/time/hrtimer.c | 8 ++++++++
1 file changed, 8 insertions(+)
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -1269,6 +1269,14 @@ static int __hrtimer_start_range_ns(stru
}
first = enqueue_hrtimer(timer, new_base, mode);
+
+ /*
+ * If the hrtimer interrupt is running, then it will reevaluate the
+ * clock bases and reprogram the clock event device.
+ */
+ if (new_base->cpu_base->in_hrtirq)
+ return false;
+
if (!force_local) {
/*
* If the current CPU base is online, then the timer is | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:35:37 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | The scheduler evaluates this via hrtimer_is_hres_active() every time it has
to update HRTICK. This needs to follow three pointers, which is expensive.
Provide a static branch based mechanism to avoid that.
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
include/linux/hrtimer.h | 13 +++++++++----
kernel/time/hrtimer.c | 28 +++++++++++++++++++++++++---
2 files changed, 34 insertions(+), 7 deletions(-)
--- a/include/linux/hrtimer.h
+++ b/include/linux/hrtimer.h
@@ -153,17 +153,22 @@ static inline int hrtimer_is_hres_active
}
#ifdef CONFIG_HIGH_RES_TIMERS
+extern unsigned int hrtimer_resolution;
struct clock_event_device;
extern void hrtimer_interrupt(struct clock_event_device *dev);
-extern unsigned int hrtimer_resolution;
+extern struct static_key_false hrtimer_highres_enabled_key;
-#else
+static inline bool hrtimer_highres_enabled(void)
+{
+ return static_branch_likely(&hrtimer_highres_enabled_key);
+}
+#else /* CONFIG_HIGH_RES_TIMERS */
#define hrtimer_resolution (unsigned int)LOW_RES_NSEC
-
-#endif
+static inline bool hrtimer_highres_enabled(void) { return false; }
+#endif /* !CONFIG_HIGH_RES_TIMERS */
static inline ktime_t
__hrtimer_expires_remaining_adjusted(const struct hrtimer *timer, ktime_t now)
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -126,6 +126,25 @@ static inline bool hrtimer_base_is_onlin
return likely(base->online);
}
+#ifdef CONFIG_HIGH_RES_TIMERS
+DEFINE_STATIC_KEY_FALSE(hrtimer_highres_enabled_key);
+
+static void hrtimer_hres_workfn(struct work_struct *work)
+{
+ static_branch_enable(&hrtimer_highres_enabled_key);
+}
+
+static DECLARE_WORK(hrtimer_hres_work, hrtimer_hres_workfn);
+
+static inline void hrtimer_schedule_hres_work(void)
+{
+ if (!hrtimer_highres_enabled())
+ schedule_work(&hrtimer_hres_work);
+}
+#else
+static inline void hrtimer_schedule_hres_work(void) { }
+#endif
+
/*
* Functions and macros which are different for UP/SMP systems are kept in a
* single place
@@ -649,7 +668,9 @@ static inline ktime_t hrtimer_update_bas
}
/*
- * Is the high resolution mode active ?
+ * Is the high resolution mode active in the CPU base. This cannot use the
+ * static key as the CPUs are switched to high resolution mode
+ * asynchronously.
*/
static inline int hrtimer_hres_active(struct hrtimer_cpu_base *cpu_base)
{
@@ -750,6 +771,7 @@ static void hrtimer_switch_to_hres(void)
tick_setup_sched_timer(true);
/* "Retrigger" the interrupt to get things going */
retrigger_next_event(NULL);
+ hrtimer_schedule_hres_work();
}
#else
@@ -947,11 +969,10 @@ static bool update_needs_ipi(struct hrti
*/
void clock_was_set(unsigned int bases)
{
- struct hrtimer_cpu_base *cpu_base = raw_cpu_ptr(&hrtimer_bases);
cpumask_var_t mask;
int cpu;
- if (!hrtimer_hres_active(cpu_base) && !tick_nohz_is_active())
+ if (!hrtimer_highres_enabled() && !tick_nohz_is_active())
goto out_timerfd;
if (!zalloc_cpumask_var(&mask, GFP_KERNEL)) {
@@ -962,6 +983,7 @@ void clock_was_set(unsigned int bases)
/* Avoid interrupting CPUs if possible */
cpus_read_lock();
for_each_online_cpu(cpu) {
+ struct hrtimer_cpu_base *cpu_base;
unsigned long flags;
cpu_base = &per_cpu(hrtimer_bases, cpu); | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:35:42 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | Use the static branch based variant and thereby avoid following three
pointers.
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
include/linux/hrtimer.h | 6 ------
kernel/sched/sched.h | 37 +++++++++----------------------------
2 files changed, 9 insertions(+), 34 deletions(-)
--- a/include/linux/hrtimer.h
+++ b/include/linux/hrtimer.h
@@ -146,12 +146,6 @@ static inline ktime_t hrtimer_expires_re
return ktime_sub(timer->node.expires, hrtimer_cb_get_time(timer));
}
-static inline int hrtimer_is_hres_active(struct hrtimer *timer)
-{
- return IS_ENABLED(CONFIG_HIGH_RES_TIMERS) ?
- timer->base->cpu_base->hres_active : 0;
-}
-
#ifdef CONFIG_HIGH_RES_TIMERS
extern unsigned int hrtimer_resolution;
struct clock_event_device;
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3019,25 +3019,19 @@ extern unsigned int sysctl_numa_balancin
* - enabled by features
* - hrtimer is actually high res
*/
-static inline int hrtick_enabled(struct rq *rq)
+static inline bool hrtick_enabled(struct rq *rq)
{
- if (!cpu_active(cpu_of(rq)))
- return 0;
- return hrtimer_is_hres_active(&rq->hrtick_timer);
+ return cpu_active(cpu_of(rq)) && hrtimer_highres_enabled();
}
-static inline int hrtick_enabled_fair(struct rq *rq)
+static inline bool hrtick_enabled_fair(struct rq *rq)
{
- if (!sched_feat(HRTICK))
- return 0;
- return hrtick_enabled(rq);
+ return sched_feat(HRTICK) && hrtick_enabled(rq);
}
-static inline int hrtick_enabled_dl(struct rq *rq)
+static inline bool hrtick_enabled_dl(struct rq *rq)
{
- if (!sched_feat(HRTICK_DL))
- return 0;
- return hrtick_enabled(rq);
+ return sched_feat(HRTICK_DL) && hrtick_enabled(rq);
}
extern void hrtick_start(struct rq *rq, u64 delay);
@@ -3047,22 +3041,9 @@ static inline bool hrtick_active(struct
}
#else /* !CONFIG_SCHED_HRTICK: */
-
-static inline int hrtick_enabled_fair(struct rq *rq)
-{
- return 0;
-}
-
-static inline int hrtick_enabled_dl(struct rq *rq)
-{
- return 0;
-}
-
-static inline int hrtick_enabled(struct rq *rq)
-{
- return 0;
-}
-
+static inline bool hrtick_enabled_fair(struct rq *rq) { return false; }
+static inline bool hrtick_enabled_dl(struct rq *rq) { return false; }
+static inline bool hrtick_enabled(struct rq *rq) { return false; }
#endif /* !CONFIG_SCHED_HRTICK */
#ifndef arch_scale_freq_tick | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:35:47 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | schedule() provides several mechanisms to update the hrtick timer:
1) When the next task is picked
2) When the balance callbacks are invoked before rq::lock is released
Each of them can result in a first expiring timer and cause a reprogram of
the clock event device.
Solve this by deferring the rearm to the end of schedule() right before
releasing rq::lock by setting a flag on entry which tells hrtick_start() to
cache the runtime constraint in rq::hrtick_delay without touching the timer
itself.
Right before releasing rq::lock evaluate the flags and either rearm or
cancel the hrtick timer.
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
kernel/sched/core.c | 57 ++++++++++++++++++++++++++++++++++++++++++---------
kernel/sched/sched.h | 2 +
2 files changed, 50 insertions(+), 9 deletions(-)
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -872,6 +872,12 @@ void update_rq_clock(struct rq *rq)
* Use HR-timers to deliver accurate preemption points.
*/
+enum {
+ HRTICK_SCHED_NONE = 0,
+ HRTICK_SCHED_DEFER = BIT(1),
+ HRTICK_SCHED_START = BIT(2),
+};
+
static void hrtick_clear(struct rq *rq)
{
if (hrtimer_active(&rq->hrtick_timer))
@@ -932,6 +938,17 @@ void hrtick_start(struct rq *rq, u64 del
* doesn't make sense and can cause timer DoS.
*/
delta = max_t(s64, delay, 10000LL);
+
+ /*
+ * If this is in the middle of schedule() only note the delay
+ * and let hrtick_schedule_exit() deal with it.
+ */
+ if (rq->hrtick_sched) {
+ rq->hrtick_sched |= HRTICK_SCHED_START;
+ rq->hrtick_delay = delta;
+ return;
+ }
+
rq->hrtick_time = ktime_add_ns(ktime_get(), delta);
if (rq == this_rq())
@@ -940,19 +957,40 @@ void hrtick_start(struct rq *rq, u64 del
smp_call_function_single_async(cpu_of(rq), &rq->hrtick_csd);
}
-static void hrtick_rq_init(struct rq *rq)
+static inline void hrtick_schedule_enter(struct rq *rq)
{
- INIT_CSD(&rq->hrtick_csd, __hrtick_start, rq);
- hrtimer_setup(&rq->hrtick_timer, hrtick, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
+ rq->hrtick_sched = HRTICK_SCHED_DEFER;
}
-#else /* !CONFIG_SCHED_HRTICK: */
-static inline void hrtick_clear(struct rq *rq)
+
+static inline void hrtick_schedule_exit(struct rq *rq)
{
+ if (rq->hrtick_sched & HRTICK_SCHED_START) {
+ rq->hrtick_time = ktime_add_ns(ktime_get(), rq->hrtick_delay);
+ __hrtick_restart(rq);
+ } else if (idle_rq(rq)) {
+ /*
+ * No need for using hrtimer_is_active(). The timer is CPU local
+ * and interrupts are disabled, so the callback cannot be
+ * running and the queued state is valid.
+ */
+ if (hrtimer_is_queued(&rq->hrtick_timer))
+ hrtimer_cancel(&rq->hrtick_timer);
+ }
+
+ rq->hrtick_sched = HRTICK_SCHED_NONE;
}
-static inline void hrtick_rq_init(struct rq *rq)
+static void hrtick_rq_init(struct rq *rq)
{
+ INIT_CSD(&rq->hrtick_csd, __hrtick_start, rq);
+ rq->hrtick_sched = HRTICK_SCHED_NONE;
+ hrtimer_setup(&rq->hrtick_timer, hrtick, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
}
+#else /* !CONFIG_SCHED_HRTICK: */
+static inline void hrtick_clear(struct rq *rq) { }
+static inline void hrtick_rq_init(struct rq *rq) { }
+static inline void hrtick_schedule_enter(struct rq *rq) { }
+static inline void hrtick_schedule_exit(struct rq *rq) { }
#endif /* !CONFIG_SCHED_HRTICK */
/*
@@ -5028,6 +5066,7 @@ static inline void finish_lock_switch(st
*/
spin_acquire(&__rq_lockp(rq)->dep_map, 0, 0, _THIS_IP_);
__balance_callbacks(rq, NULL);
+ hrtick_schedule_exit(rq);
raw_spin_rq_unlock_irq(rq);
}
@@ -6781,9 +6820,6 @@ static void __sched notrace __schedule(i
schedule_debug(prev, preempt);
- if (sched_feat(HRTICK) || sched_feat(HRTICK_DL))
- hrtick_clear(rq);
-
klp_sched_try_switch(prev);
local_irq_disable();
@@ -6810,6 +6846,8 @@ static void __sched notrace __schedule(i
rq_lock(rq, &rf);
smp_mb__after_spinlock();
+ hrtick_schedule_enter(rq);
+
/* Promote REQ to ACT */
rq->clock_update_flags <<= 1;
update_rq_clock(rq);
@@ -6911,6 +6949,7 @@ static void __sched notrace __schedule(i
rq_unpin_lock(rq, &rf);
__balance_callbacks(rq, NULL);
+ hrtick_schedule_exit(rq);
raw_spin_rq_unlock_irq(rq);
}
trace_sched_exit_tp(is_switch);
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1285,6 +1285,8 @@ struct rq {
call_single_data_t hrtick_csd;
struct hrtimer hrtick_timer;
ktime_t hrtick_time;
+ ktime_t hrtick_delay;
+ unsigned int hrtick_sched;
#endif
#ifdef CONFIG_SCHEDSTATS | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:35:52 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | Tiny adjustments to the hrtick expiry time below 5 microseconds are just
causing extra work for no real value. Filter them out when restarting the
hrtick.
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
kernel/sched/core.c | 24 +++++++++++++++++++-----
1 file changed, 19 insertions(+), 5 deletions(-)
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -903,12 +903,24 @@ static enum hrtimer_restart hrtick(struc
return HRTIMER_NORESTART;
}
-static void __hrtick_restart(struct rq *rq)
+static inline bool hrtick_needs_rearm(struct hrtimer *timer, ktime_t expires)
+{
+ /*
+ * Queued is false when the timer is not started or currently
+ * running the callback. In both cases, restart. If queued check
+ * whether the expiry time actually changes substantially.
+ */
+ return !hrtimer_is_queued(timer) ||
+ abs(expires - hrtimer_get_expires(timer)) > 5000;
+}
+
+static void hrtick_cond_restart(struct rq *rq)
{
struct hrtimer *timer = &rq->hrtick_timer;
ktime_t time = rq->hrtick_time;
- hrtimer_start(timer, time, HRTIMER_MODE_ABS_PINNED_HARD);
+ if (hrtick_needs_rearm(timer, time))
+ hrtimer_start(timer, time, HRTIMER_MODE_ABS_PINNED_HARD);
}
/*
@@ -920,7 +932,7 @@ static void __hrtick_start(void *arg)
struct rq_flags rf;
rq_lock(rq, &rf);
- __hrtick_restart(rq);
+ hrtick_cond_restart(rq);
rq_unlock(rq, &rf);
}
@@ -950,9 +962,11 @@ void hrtick_start(struct rq *rq, u64 del
}
rq->hrtick_time = ktime_add_ns(ktime_get(), delta);
+ if (!hrtick_needs_rearm(&rq->hrtick_timer, rq->hrtick_time))
+ return;
if (rq == this_rq())
- __hrtick_restart(rq);
+ hrtimer_start(&rq->hrtick_timer, rq->hrtick_time, HRTIMER_MODE_ABS_PINNED_HARD);
else
smp_call_function_single_async(cpu_of(rq), &rq->hrtick_csd);
}
@@ -966,7 +980,7 @@ static inline void hrtick_schedule_exit(
{
if (rq->hrtick_sched & HRTICK_SCHED_START) {
rq->hrtick_time = ktime_add_ns(ktime_get(), rq->hrtick_delay);
- __hrtick_restart(rq);
+ hrtick_cond_restart(rq);
} else if (idle_rq(rq)) {
/*
* No need for using hrtimer_is_active(). The timer is CPU local | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:35:56 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | From: Peter Zijlstra <peterz@infradead.org>
The hrtick timer is frequently rearmed before expiry and most of the time
the new expiry is past the armed one. As this happens on every context
switch it becomes expensive with scheduling heavy work loads especially in
virtual machines as the "hardware" reprogamming implies a VM exit.
Add a lazy rearm mode flag which skips the reprogamming if:
1) The timer was the first expiring timer before the rearm
2) The new expiry time is farther out than the armed time
This avoids a massive amount of reprogramming operations of the hrtick
timer for the price of eventually taking the alredy armed interrupt for
nothing.
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
include/linux/hrtimer.h | 8 ++++++++
include/linux/hrtimer_types.h | 3 +++
kernel/time/hrtimer.c | 17 ++++++++++++++++-
3 files changed, 27 insertions(+), 1 deletion(-)
--- a/include/linux/hrtimer.h
+++ b/include/linux/hrtimer.h
@@ -31,6 +31,13 @@
* soft irq context
* HRTIMER_MODE_HARD - Timer callback function will be executed in
* hard irq context even on PREEMPT_RT.
+ * HRTIMER_MODE_LAZY_REARM - Avoid reprogramming if the timer was the
+ * first expiring timer and is moved into the
+ * future. Special mode for the HRTICK timer to
+ * avoid extensive reprogramming of the hardware,
+ * which is expensive in virtual machines. Risks
+ * a pointless expiry, but that's better than
+ * reprogramming on every context switch,
*/
enum hrtimer_mode {
HRTIMER_MODE_ABS = 0x00,
@@ -38,6 +45,7 @@ enum hrtimer_mode {
HRTIMER_MODE_PINNED = 0x02,
HRTIMER_MODE_SOFT = 0x04,
HRTIMER_MODE_HARD = 0x08,
+ HRTIMER_MODE_LAZY_REARM = 0x10,
HRTIMER_MODE_ABS_PINNED = HRTIMER_MODE_ABS | HRTIMER_MODE_PINNED,
HRTIMER_MODE_REL_PINNED = HRTIMER_MODE_REL | HRTIMER_MODE_PINNED,
--- a/include/linux/hrtimer_types.h
+++ b/include/linux/hrtimer_types.h
@@ -33,6 +33,8 @@ enum hrtimer_restart {
* @is_soft: Set if hrtimer will be expired in soft interrupt context.
* @is_hard: Set if hrtimer will be expired in hard interrupt context
* even on RT.
+ * @is_lazy: Set if the timer is frequently rearmed to avoid updates
+ * of the clock event device
*
* The hrtimer structure must be initialized by hrtimer_setup()
*/
@@ -45,6 +47,7 @@ struct hrtimer {
u8 is_rel;
u8 is_soft;
u8 is_hard;
+ u8 is_lazy;
};
#endif /* _LINUX_HRTIMER_TYPES_H */
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -1152,7 +1152,7 @@ static void __remove_hrtimer(struct hrti
* an superfluous call to hrtimer_force_reprogram() on the
* remote cpu later on if the same timer gets enqueued again.
*/
- if (reprogram && timer == cpu_base->next_timer)
+ if (reprogram && timer == cpu_base->next_timer && !timer->is_lazy)
hrtimer_force_reprogram(cpu_base, 1);
}
@@ -1322,6 +1322,20 @@ static int __hrtimer_start_range_ns(stru
}
/*
+ * Special case for the HRTICK timer. It is frequently rearmed and most
+ * of the time moves the expiry into the future. That's expensive in
+ * virtual machines and it's better to take the pointless already armed
+ * interrupt than reprogramming the hardware on every context switch.
+ *
+ * If the new expiry is before the armed time, then reprogramming is
+ * required.
+ */
+ if (timer->is_lazy) {
+ if (new_base->cpu_base->expires_next <= hrtimer_get_expires(timer))
+ return 0;
+ }
+
+ /*
* Timer was forced to stay on the current CPU to avoid
* reprogramming on removal and enqueue. Force reprogram the
* hardware by evaluating the new first expiring timer.
@@ -1675,6 +1689,7 @@ static void __hrtimer_setup(struct hrtim
base += hrtimer_clockid_to_base(clock_id);
timer->is_soft = softtimer;
timer->is_hard = !!(mode & HRTIMER_MODE_HARD);
+ timer->is_lazy = !!(mode & HRTIMER_MODE_LAZY_REARM);
timer->base = &cpu_base->clock_base[base];
timerqueue_init(&timer->node); | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:36:01 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | From: Peter Zijlstra <peterz@infradead.org>
The hrtick timer is frequently rearmed before expiry and most of the time
the new expiry is past the armed one. As this happens on every context
switch it becomes expensive with scheduling heavy work loads especially in
virtual machines as the "hardware" reprogamming implies a VM exit.
hrtimer now provide a lazy rearm mode flag which skips the reprogamming if:
1) The timer was the first expiring timer before the rearm
2) The new expiry time is farther out than the armed time
This avoids a massive amount of reprogramming operations of the hrtick
timer for the price of eventually taking the alredy armed interrupt for
nothing.
Mark the hrtick timer accordingly.
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
kernel/sched/core.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -998,7 +998,8 @@ static void hrtick_rq_init(struct rq *rq
{
INIT_CSD(&rq->hrtick_csd, __hrtick_start, rq);
rq->hrtick_sched = HRTICK_SCHED_NONE;
- hrtimer_setup(&rq->hrtick_timer, hrtick, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
+ hrtimer_setup(&rq->hrtick_timer, hrtick, CLOCK_MONOTONIC,
+ HRTIMER_MODE_REL_HARD | HRTIMER_MODE_LAZY_REARM);
}
#else /* !CONFIG_SCHED_HRTICK: */
static inline void hrtick_clear(struct rq *rq) { } | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:36:06 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | The sequence of cancel and start is inefficient. It has to do the timer
lock/unlock twice and in the worst case has to reprogram the underlying
clock event device twice.
The reason why it is done this way is the usage of hrtimer_forward_now(),
which requires the timer to be inactive.
But that can be completely avoided as the forward can be done on a variable
and does not need any of the overrun accounting provided by
hrtimer_forward_now().
Implement a trivial forwarding mechanism and replace the cancel/reprogram
sequence with hrtimer_start(..., new_expiry).
For the non high resolution case the timer is not actually armed, but used
for storage so that code checking for expiry times can unconditially look
it up in the timer. So it is safe for that case to set the new expiry time
directly.
Signed-off-by: Thomas Gleixner <tglx@kernel.org
Cc: Frederic Weisbecker <frederic@kernel.org>
---
kernel/time/tick-sched.c | 27 ++++++++++++++++++++-------
1 file changed, 20 insertions(+), 7 deletions(-)
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -864,19 +864,32 @@ u64 get_cpu_iowait_time_us(int cpu, u64
}
EXPORT_SYMBOL_GPL(get_cpu_iowait_time_us);
+/* Simplified variant of hrtimer_forward_now() */
+static ktime_t tick_forward_now(ktime_t expires, ktime_t now)
+{
+ ktime_t delta = now - expires;
+
+ if (likely(delta < TICK_NSEC))
+ return expires + TICK_NSEC;
+
+ expires += TICK_NSEC * ktime_divns(delta, TICK_NSEC);
+ if (expires > now)
+ return expires;
+ return expires + TICK_NSEC;
+}
+
static void tick_nohz_restart(struct tick_sched *ts, ktime_t now)
{
- hrtimer_cancel(&ts->sched_timer);
- hrtimer_set_expires(&ts->sched_timer, ts->last_tick);
+ ktime_t expires = ts->last_tick;
- /* Forward the time to expire in the future */
- hrtimer_forward(&ts->sched_timer, now, TICK_NSEC);
+ if (now >= expires)
+ expires = tick_forward_now(expires, now);
if (tick_sched_flag_test(ts, TS_FLAG_HIGHRES)) {
- hrtimer_start_expires(&ts->sched_timer,
- HRTIMER_MODE_ABS_PINNED_HARD);
+ hrtimer_start(&ts->sched_timer, expires, HRTIMER_MODE_ABS_PINNED_HARD);
} else {
- tick_program_event(hrtimer_get_expires(&ts->sched_timer), 1);
+ hrtimer_set_expires(&ts->sched_timer, expires);
+ tick_program_event(expires, 1);
}
/* | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:36:10 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | The only real usecase for this is the hrtimer based broadcast device.
No point in using two different feature flags for this.
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
include/linux/clockchips.h | 1 -
kernel/time/clockevents.c | 4 ++--
kernel/time/tick-broadcast-hrtimer.c | 1 -
3 files changed, 2 insertions(+), 4 deletions(-)
--- a/include/linux/clockchips.h
+++ b/include/linux/clockchips.h
@@ -45,7 +45,6 @@ enum clock_event_state {
*/
# define CLOCK_EVT_FEAT_PERIODIC 0x000001
# define CLOCK_EVT_FEAT_ONESHOT 0x000002
-# define CLOCK_EVT_FEAT_KTIME 0x000004
/*
* x86(64) specific (mis)features:
--- a/kernel/time/clockevents.c
+++ b/kernel/time/clockevents.c
@@ -319,8 +319,8 @@ int clockevents_program_event(struct clo
WARN_ONCE(!clockevent_state_oneshot(dev), "Current state: %d\n",
clockevent_get_state(dev));
- /* Shortcut for clockevent devices that can deal with ktime. */
- if (dev->features & CLOCK_EVT_FEAT_KTIME)
+ /* ktime_t based reprogramming for the broadcast hrtimer device */
+ if (unlikely(dev->features & CLOCK_EVT_FEAT_HRTIMER))
return dev->set_next_ktime(expires, dev);
delta = ktime_to_ns(ktime_sub(expires, ktime_get()));
--- a/kernel/time/tick-broadcast-hrtimer.c
+++ b/kernel/time/tick-broadcast-hrtimer.c
@@ -78,7 +78,6 @@ static struct clock_event_device ce_broa
.set_state_shutdown = bc_shutdown,
.set_next_ktime = bc_set_next,
.features = CLOCK_EVT_FEAT_ONESHOT |
- CLOCK_EVT_FEAT_KTIME |
CLOCK_EVT_FEAT_HRTIMER,
.rating = 0,
.bound_on = -1, | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:36:15 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | On some architectures clocksource::read() boils down to a single
instruction, so the indirect function call is just a massive overhead
especially with speculative execution mitigations in effect.
Allow architectures to enable conditional inlining of that read to avoid
that by:
- providing a static branch to switch to the inlined variant
- disabling the branch before clocksource changes
- enabling the branch after a clocksource change, when the clocksource
indicates in a feature flag that it is the one which provides the
inlined variant
This is intentionally not a static call as that would only remove the
indirect call, but not the rest of the overhead.
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
include/linux/clocksource.h | 2 +
kernel/time/Kconfig | 3 +
kernel/time/timekeeping.c | 74 ++++++++++++++++++++++++++++++++------------
3 files changed, 60 insertions(+), 19 deletions(-)
--- a/include/linux/clocksource.h
+++ b/include/linux/clocksource.h
@@ -149,6 +149,8 @@ struct clocksource {
#define CLOCK_SOURCE_SUSPEND_NONSTOP 0x80
#define CLOCK_SOURCE_RESELECT 0x100
#define CLOCK_SOURCE_VERIFY_PERCPU 0x200
+#define CLOCK_SOURCE_CAN_INLINE_READ 0x400
+
/* simplify initialization of mask field */
#define CLOCKSOURCE_MASK(bits) GENMASK_ULL((bits) - 1, 0)
--- a/kernel/time/Kconfig
+++ b/kernel/time/Kconfig
@@ -17,6 +17,9 @@ config ARCH_CLOCKSOURCE_DATA
config ARCH_CLOCKSOURCE_INIT
bool
+config ARCH_WANTS_CLOCKSOURCE_READ_INLINE
+ bool
+
# Timekeeping vsyscall support
config GENERIC_TIME_VSYSCALL
bool
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -3,34 +3,30 @@
* Kernel timekeeping code and accessor functions. Based on code from
* timer.c, moved in commit 8524070b7982.
*/
-#include <linux/timekeeper_internal.h>
-#include <linux/module.h>
-#include <linux/interrupt.h>
+#include <linux/audit.h>
+#include <linux/clocksource.h>
+#include <linux/compiler.h>
+#include <linux/jiffies.h>
#include <linux/kobject.h>
-#include <linux/percpu.h>
-#include <linux/init.h>
-#include <linux/mm.h>
+#include <linux/module.h>
#include <linux/nmi.h>
-#include <linux/sched.h>
-#include <linux/sched/loadavg.h>
+#include <linux/pvclock_gtod.h>
+#include <linux/random.h>
#include <linux/sched/clock.h>
+#include <linux/sched/loadavg.h>
+#include <linux/static_key.h>
+#include <linux/stop_machine.h>
#include <linux/syscore_ops.h>
-#include <linux/clocksource.h>
-#include <linux/jiffies.h>
+#include <linux/tick.h>
#include <linux/time.h>
#include <linux/timex.h>
-#include <linux/tick.h>
-#include <linux/stop_machine.h>
-#include <linux/pvclock_gtod.h>
-#include <linux/compiler.h>
-#include <linux/audit.h>
-#include <linux/random.h>
+#include <linux/timekeeper_internal.h>
#include <vdso/auxclock.h>
#include "tick-internal.h"
-#include "ntp_internal.h"
#include "timekeeping_internal.h"
+#include "ntp_internal.h"
#define TK_CLEAR_NTP (1 << 0)
#define TK_CLOCK_WAS_SET (1 << 1)
@@ -275,6 +271,11 @@ static inline void tk_update_sleep_time(
tk->monotonic_to_boot = ktime_to_timespec64(tk->offs_boot);
}
+#ifdef CONFIG_ARCH_WANTS_CLOCKSOURCE_READ_INLINE
+#include <asm/clock_inlined.h>
+
+static DEFINE_STATIC_KEY_FALSE(clocksource_read_inlined);
+
/*
* tk_clock_read - atomic clocksource read() helper
*
@@ -288,13 +289,36 @@ static inline void tk_update_sleep_time(
* a read of the fast-timekeeper tkrs (which is protected by its own locking
* and update logic).
*/
-static inline u64 tk_clock_read(const struct tk_read_base *tkr)
+static __always_inline u64 tk_clock_read(const struct tk_read_base *tkr)
{
struct clocksource *clock = READ_ONCE(tkr->clock);
+ if (static_branch_likely(&clocksource_read_inlined))
+ return arch_inlined_clocksource_read(clock);
+
return clock->read(clock);
}
+static inline void clocksource_disable_inline_read(void)
+{
+ static_branch_disable(&clocksource_read_inlined);
+}
+
+static inline void clocksource_enable_inline_read(void)
+{
+ static_branch_enable(&clocksource_read_inlined);
+}
+#else
+static __always_inline u64 tk_clock_read(const struct tk_read_base *tkr)
+{
+ struct clocksource *clock = READ_ONCE(tkr->clock);
+
+ return clock->read(clock);
+}
+static inline void clocksource_disable_inline_read(void) { }
+static inline void clocksource_enable_inline_read(void) { }
+#endif
+
/**
* tk_setup_internals - Set up internals to use clocksource clock.
*
@@ -375,7 +399,7 @@ static noinline u64 delta_to_ns_safe(con
return mul_u64_u32_add_u64_shr(delta, tkr->mult, tkr->xtime_nsec, tkr->shift);
}
-static inline u64 timekeeping_cycles_to_ns(const struct tk_read_base *tkr, u64 cycles)
+static __always_inline u64 timekeeping_cycles_to_ns(const struct tk_read_base *tkr, u64 cycles)
{
/* Calculate the delta since the last update_wall_time() */
u64 mask = tkr->mask, delta = (cycles - tkr->cycle_last) & mask;
@@ -1631,7 +1655,19 @@ int timekeeping_notify(struct clocksourc
if (tk->tkr_mono.clock == clock)
return 0;
+
+ /* Disable inlined reads accross the clocksource switch */
+ clocksource_disable_inline_read();
+
stop_machine(change_clocksource, clock, NULL);
+
+ /*
+ * If the clocksource has been selected and supports inlined reads
+ * enable the branch.
+ */
+ if (tk->tkr_mono.clock == clock && clock->flags & CLOCK_SOURCE_CAN_INLINE_READ)
+ clocksource_enable_inline_read();
+
tick_clock_notify();
return tk->tkr_mono.clock == clock ? 0 : -1;
} | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:36:20 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | Avoid the overhead of the indirect call for a single instruction to read
the TSC.
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
arch/x86/Kconfig | 1 +
arch/x86/include/asm/clock_inlined.h | 14 ++++++++++++++
arch/x86/kernel/tsc.c | 1 +
3 files changed, 16 insertions(+)
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -141,6 +141,7 @@ config X86
select ARCH_USE_SYM_ANNOTATIONS
select ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
select ARCH_WANT_DEFAULT_BPF_JIT if X86_64
+ select ARCH_WANTS_CLOCKSOURCE_READ_INLINE if X86_64
select ARCH_WANTS_DYNAMIC_TASK_STRUCT
select ARCH_WANTS_NO_INSTR
select ARCH_WANT_GENERAL_HUGETLB
--- /dev/null
+++ b/arch/x86/include/asm/clock_inlined.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_X86_CLOCK_INLINED_H
+#define _ASM_X86_CLOCK_INLINED_H
+
+#include <asm/tsc.h>
+
+struct clocksource;
+
+static __always_inline u64 arch_inlined_clocksource_read(struct clocksource *cs)
+{
+ return (u64)rdtsc_ordered();
+}
+
+#endif
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -1201,6 +1201,7 @@ static struct clocksource clocksource_ts
.mask = CLOCKSOURCE_MASK(64),
.flags = CLOCK_SOURCE_IS_CONTINUOUS |
CLOCK_SOURCE_VALID_FOR_HRES |
+ CLOCK_SOURCE_CAN_INLINE_READ |
CLOCK_SOURCE_MUST_VERIFY |
CLOCK_SOURCE_VERIFY_PERCPU,
.id = CSID_X86_TSC, | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:36:24 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | lapic_next_deadline() contains a fence before the TSC read and the write to
the TSC_DEADLINE MSR with a content free and therefore useless comment:
/* This MSR is special and need a special fence: */
The MSR is not really special. It is just not a serializing MSR, but that
does not matter at all in this context as all of these operations are
strictly CPU local.
The only thing the fence prevents is that the RDTSC is speculated ahead,
but that's not really relevant as the delta is calculated way before based
on a previous TSC read and therefore inaccurate by definition.
So removing the fence is just making it slightly more inaccurate in the
worst case, but that is irrelevant as it's way below the actual system
immanent latencies and variations.
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
arch/x86/kernel/apic/apic.c | 16 +++++++---------
1 file changed, 7 insertions(+), 9 deletions(-)
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -412,22 +412,20 @@ EXPORT_SYMBOL_GPL(setup_APIC_eilvt);
/*
* Program the next event, relative to now
*/
-static int lapic_next_event(unsigned long delta,
- struct clock_event_device *evt)
+static int lapic_next_event(unsigned long delta, struct clock_event_device *evt)
{
apic_write(APIC_TMICT, delta);
return 0;
}
-static int lapic_next_deadline(unsigned long delta,
- struct clock_event_device *evt)
+static int lapic_next_deadline(unsigned long delta, struct clock_event_device *evt)
{
- u64 tsc;
+ /*
+ * There is no weak_wrmsr_fence() required here as all of this is purely
+ * CPU local. Avoid the [ml]fence overhead.
+ */
+ u64 tsc = rdtsc();
- /* This MSR is special and need a special fence: */
- weak_wrmsr_fence();
-
- tsc = rdtsc();
wrmsrq(MSR_IA32_TSC_DEADLINE, tsc + (((u64) delta) * TSC_DIVISOR));
return 0;
} | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:36:29 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | XEN PV does not emulate the TSC deadline timer, so the PVOPS indirection
for writing the deadline MSR can be avoided completely.
Use native_wrmsrq() instead.
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
arch/x86/kernel/apic/apic.c | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -426,7 +426,7 @@ static int lapic_next_deadline(unsigned
*/
u64 tsc = rdtsc();
- wrmsrq(MSR_IA32_TSC_DEADLINE, tsc + (((u64) delta) * TSC_DIVISOR));
+ native_wrmsrq(MSR_IA32_TSC_DEADLINE, tsc + (((u64) delta) * TSC_DIVISOR));
return 0;
}
@@ -450,7 +450,7 @@ static int lapic_timer_shutdown(struct c
* the timer _and_ zero the counter registers:
*/
if (v & APIC_LVT_TIMER_TSCDEADLINE)
- wrmsrq(MSR_IA32_TSC_DEADLINE, 0);
+ native_wrmsrq(MSR_IA32_TSC_DEADLINE, 0);
else
apic_write(APIC_TMICT, 0);
@@ -547,6 +547,11 @@ static __init bool apic_validate_deadlin
if (!boot_cpu_has(X86_FEATURE_TSC_DEADLINE_TIMER))
return false;
+
+ /* XEN_PV does not support it, but be paranoia about it */
+ if (boot_cpu_has(X86_FEATURE_XENPV))
+ goto clear;
+
if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
return true;
@@ -559,9 +564,11 @@ static __init bool apic_validate_deadlin
if (boot_cpu_data.microcode >= rev)
return true;
- setup_clear_cpu_cap(X86_FEATURE_TSC_DEADLINE_TIMER);
pr_err(FW_BUG "TSC_DEADLINE disabled due to Errata; "
"please update microcode to version: 0x%x (or later)\n", rev);
+
+clear:
+ setup_clear_cpu_cap(X86_FEATURE_TSC_DEADLINE_TIMER);
return false;
} | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:36:34 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | Some architectures have clockevent devices which are coupled to the system
clocksource by implementing a less than or equal comparator which compares
the programmed absolute expiry time against the underlying time
counter. Well known examples are TSC/TSC deadline timer and the S390 TOD
clocksource/comparator.
While the concept is nice it has some downsides:
1) The clockevents core code is strictly based on relative expiry times
as that's the most common case for clockevent device hardware. That
requires to convert the absolute expiry time provided by the caller
(hrtimers, NOHZ code) to a relative expiry time by reading and
substracting the current time.
The clockevent::set_next_event() callback must then read the counter
again to convert the relative expiry back into a absolute one.
2) The conversion factors from nanoseconds to counter clock cycles are
set up when the clockevent is registered. When NTP applies corrections
then the clockevent conversion factors can deviate from the
clocksource conversion substantially which either results in timers
firing late or in the worst case early. The early expiry then needs to
do a reprogam with a short delta.
In most cases this is papered over by the fact that the read in the
set_next_event() callback happens after the read which is used to
calculate the delta. So the tendency is that timers expire mostly
late.
All of this can be avoided by providing support for these devices in the
core code:
1) The timekeeping core keeps track of the last update to the clocksource
by storing the base nanoseconds and the corresponding clocksource
counter value. That's used to keep the conversion math for reading the
time within 64-bit in the common case.
This information can be used to avoid both reads of the underlying
clocksource in the clockevents reprogramming path:
delta = expiry - base_ns;
cycles = base_cycles + ((delta * clockevent::mult) >> clockevent::shift);
The resulting cycles value can be directly used to program the
comparator.
2) As #1 does not longer provide the "compensation" through the second
read the deviation of the clocksource and clockevent conversions
caused by NTP become more prominent.
This can be cured by letting the timekeeping core compute and store
the reverse conversion factors when the clocksource cycles to
nanoseconds factors are modified by NTP:
CS::MULT (1 << NS_TO_CYC_SHIFT)
--------------- = ----------------------
(1 << CS:SHIFT) NS_TO_CYC_MULT
Ergo: NS_TO_CYC_MULT = (1 << (CS::SHIFT + NS_TO_CYC_SHIFT)) / CS::MULT
The NS_TO_CYC_SHIFT value is calculated when the clocksource is
installed so that it aims for a one hour maximum sleep time.
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
include/linux/clocksource.h | 1
include/linux/timekeeper_internal.h | 8 ++
kernel/time/Kconfig | 3
kernel/time/timekeeping.c | 110 ++++++++++++++++++++++++++++++++++++
kernel/time/timekeeping.h | 2
5 files changed, 124 insertions(+)
--- a/include/linux/clocksource.h
+++ b/include/linux/clocksource.h
@@ -150,6 +150,7 @@ struct clocksource {
#define CLOCK_SOURCE_RESELECT 0x100
#define CLOCK_SOURCE_VERIFY_PERCPU 0x200
#define CLOCK_SOURCE_CAN_INLINE_READ 0x400
+#define CLOCK_SOURCE_HAS_COUPLED_CLOCK_EVENT 0x800
/* simplify initialization of mask field */
#define CLOCKSOURCE_MASK(bits) GENMASK_ULL((bits) - 1, 0)
--- a/include/linux/timekeeper_internal.h
+++ b/include/linux/timekeeper_internal.h
@@ -72,6 +72,10 @@ struct tk_read_base {
* @id: The timekeeper ID
* @tkr_raw: The readout base structure for CLOCK_MONOTONIC_RAW
* @raw_sec: CLOCK_MONOTONIC_RAW time in seconds
+ * @cs_id: The ID of the current clocksource
+ * @cs_ns_to_cyc_mult: Multiplicator for nanoseconds to cycles conversion
+ * @cs_ns_to_cyc_shift: Shift value for nanoseconds to cycles conversion
+ * @cs_ns_to_cyc_maxns: Maximum nanoseconds to cyles conversion range
* @clock_was_set_seq: The sequence number of clock was set events
* @cs_was_changed_seq: The sequence number of clocksource change events
* @clock_valid: Indicator for valid clock
@@ -159,6 +163,10 @@ struct timekeeper {
u64 raw_sec;
/* Cachline 3 and 4 (timekeeping internal variables): */
+ enum clocksource_ids cs_id;
+ u32 cs_ns_to_cyc_mult;
+ u32 cs_ns_to_cyc_shift;
+ u64 cs_ns_to_cyc_maxns;
unsigned int clock_was_set_seq;
u8 cs_was_changed_seq;
u8 clock_valid;
--- a/kernel/time/Kconfig
+++ b/kernel/time/Kconfig
@@ -47,6 +47,9 @@ config GENERIC_CLOCKEVENTS_BROADCAST_IDL
config GENERIC_CLOCKEVENTS_MIN_ADJUST
bool
+config GENERIC_CLOCKEVENTS_COUPLED
+ bool
+
# Generic update of CMOS clock
config GENERIC_CMOS_UPDATE
bool
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -391,6 +391,20 @@ static void tk_setup_internals(struct ti
tk->tkr_raw.mult = clock->mult;
tk->ntp_err_mult = 0;
tk->skip_second_overflow = 0;
+
+ tk->cs_id = clock->id;
+
+ /* Coupled clockevent data */
+ if (IS_ENABLED(CONFIG_GENERIC_CLOCKEVENTS_COUPLED) &&
+ clock->flags & CLOCK_SOURCE_HAS_COUPLED_CLOCK_EVENT) {
+ /*
+ * Aim for an one hour maximum delta and use KHz to handle
+ * clocksources with a frequency above 4GHz correctly as
+ * the frequency argument of clocks_calc_mult_shift() is u32.
+ */
+ clocks_calc_mult_shift(&tk->cs_ns_to_cyc_mult, &tk->cs_ns_to_cyc_shift,
+ NSEC_PER_MSEC, clock->freq_khz, 3600 * 1000);
+ }
}
/* Timekeeper helper functions. */
@@ -720,6 +734,36 @@ static inline void tk_update_ktime_data(
tk->tkr_raw.base = ns_to_ktime(tk->raw_sec * NSEC_PER_SEC);
}
+static inline void tk_update_ns_to_cyc(struct timekeeper *tks, struct timekeeper *tkc)
+{
+ struct tk_read_base *tkrs = &tks->tkr_mono;
+ struct tk_read_base *tkrc = &tkc->tkr_mono;
+ unsigned int shift;
+
+ if (!IS_ENABLED(CONFIG_GENERIC_CLOCKEVENTS_COUPLED) ||
+ !(tkrs->clock->flags & CLOCK_SOURCE_HAS_COUPLED_CLOCK_EVENT))
+ return;
+
+ if (tkrs->mult == tkrc->mult && tkrs->shift == tkrc->shift)
+ return;
+ /*
+ * The conversion math is simple:
+ *
+ * CS::MULT (1 << NS_TO_CYC_SHIFT)
+ * --------------- = ----------------------
+ * (1 << CS:SHIFT) NS_TO_CYC_MULT
+ *
+ * Ergo:
+ *
+ * NS_TO_CYC_MULT = (1 << (CS::SHIFT + NS_TO_CYC_SHIFT)) / CS::MULT
+ *
+ * NS_TO_CYC_SHIFT has been set up in tk_setup_internals()
+ */
+ shift = tkrs->shift + tks->cs_ns_to_cyc_shift;
+ tks->cs_ns_to_cyc_mult = (u32)div_u64(1ULL << shift, tkrs->mult);
+ tks->cs_ns_to_cyc_maxns = div_u64(tkrs->clock->mask, tks->cs_ns_to_cyc_mult);
+}
+
/*
* Restore the shadow timekeeper from the real timekeeper.
*/
@@ -754,6 +798,7 @@ static void timekeeping_update_from_shad
tk->tkr_mono.base_real = tk->tkr_mono.base + tk->offs_real;
if (tk->id == TIMEKEEPER_CORE) {
+ tk_update_ns_to_cyc(tk, &tkd->timekeeper);
update_vsyscall(tk);
update_pvclock_gtod(tk, action & TK_CLOCK_WAS_SET);
@@ -808,6 +853,71 @@ static void timekeeping_forward_now(stru
tk_update_coarse_nsecs(tk);
}
+/*
+ * ktime_expiry_to_cycles - Convert a expiry time to clocksource cycles
+ * @id: Clocksource ID which is required for validity
+ * @expires_ns: Absolute CLOCK_MONOTONIC expiry time (nsecs) to be converted
+ * @cycles: Pointer to storage for corresponding absolute cycles value
+ *
+ * Convert a CLOCK_MONOTONIC based absolute expiry time to a cycles value
+ * based on the correlated clocksource of the clockevent device by using
+ * the base nanoseconds and cycles values of the last timekeeper update and
+ * converting the delta between @expires_ns and base nanoseconds to cycles.
+ *
+ * This only works for clockevent devices which are using a less than or
+ * equal comparator against the clocksource.
+ *
+ * Utilizing this avoids two clocksource reads for such devices, the
+ * ktime_get() in clockevents_program_event() to calculate the delta expiry
+ * value and the readout in the device::set_next_event() callback to
+ * convert the delta back to a absolute comparator value.
+ *
+ * Returns: True if @id matches the current clocksource ID, false otherwise
+ */
+bool ktime_expiry_to_cycles(enum clocksource_ids id, ktime_t expires_ns, u64 *cycles)
+{
+ struct timekeeper *tk = &tk_core.timekeeper;
+ struct tk_read_base *tkrm = &tk->tkr_mono;
+ ktime_t base_ns, delta_ns, max_ns;
+ u64 base_cycles, delta_cycles;
+ unsigned int seq;
+ u32 mult, shift;
+
+ /*
+ * Racy check to avoid the seqcount overhead when ID does not match. If
+ * the relevant clocksource is installed concurrently, then this will
+ * just delay the switch over to this mechanism until the next event is
+ * programmed. If the ID is not matching the clock events code will use
+ * the regular relative set_next_event() callback as before.
+ */
+ if (data_race(tk->cs_id) != id)
+ return false;
+
+ do {
+ seq = read_seqcount_begin(&tk_core.seq);
+
+ if (tk->cs_id != id)
+ return false;
+
+ base_cycles = tkrm->cycle_last;
+ base_ns = tkrm->base + (tkrm->xtime_nsec >> tkrm->shift);
+
+ mult = tk->cs_ns_to_cyc_mult;
+ shift = tk->cs_ns_to_cyc_shift;
+ max_ns = tk->cs_ns_to_cyc_maxns;
+
+ } while (read_seqcount_retry(&tk_core.seq, seq));
+
+ /* Prevent negative deltas and multiplication overflows */
+ delta_ns = min(expires_ns - base_ns, max_ns);
+ delta_ns = max(delta_ns, 0);
+
+ /* Convert to cycles */
+ delta_cycles = ((u64)delta_ns * mult) >> shift;
+ *cycles = base_cycles + delta_cycles;
+ return true;
+}
+
/**
* ktime_get_real_ts64 - Returns the time of day in a timespec64.
* @ts: pointer to the timespec to be set
--- a/kernel/time/timekeeping.h
+++ b/kernel/time/timekeeping.h
@@ -9,6 +9,8 @@ extern ktime_t ktime_get_update_offsets_
ktime_t *offs_boot,
ktime_t *offs_tai);
+bool ktime_expiry_to_cycles(enum clocksource_ids id, ktime_t expires_ns, u64 *cycles);
+
extern int timekeeping_valid_for_hres(void);
extern u64 timekeeping_max_deferment(void);
extern void timekeeping_warp_clock(void); | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:36:40 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | Some clockevent devices are coupled to the system clocksource by
implementing a less than or equal comparator which compares the programmed
absolute expiry time against the underlying time counter.
The timekeeping core provides a function to convert and absolute
CLOCK_MONOTONIC based expiry time to a absolute clock cycles time which can
be directly fed into the comparator. That spares two time reads in the next
event progamming path, one to convert the absolute nanoseconds time to a
delta value and the other to convert the delta value back to a absolute
time value suitable for the comparator.
Provide a new clocksource callback which takes the absolute cycle value and
wire it up in clockevents_program_event(). Similar to clocksources allow
architectures to inline the rearm operation.
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
include/linux/clockchips.h | 7 +++++--
kernel/time/Kconfig | 4 ++++
kernel/time/clockevents.c | 44 +++++++++++++++++++++++++++++++++++++++-----
3 files changed, 48 insertions(+), 7 deletions(-)
--- a/include/linux/clockchips.h
+++ b/include/linux/clockchips.h
@@ -43,8 +43,9 @@ enum clock_event_state {
/*
* Clock event features
*/
-# define CLOCK_EVT_FEAT_PERIODIC 0x000001
-# define CLOCK_EVT_FEAT_ONESHOT 0x000002
+# define CLOCK_EVT_FEAT_PERIODIC 0x000001
+# define CLOCK_EVT_FEAT_ONESHOT 0x000002
+# define CLOCK_EVT_FEAT_CLOCKSOURCE_COUPLED 0x000004
/*
* x86(64) specific (mis)features:
@@ -100,6 +101,7 @@ struct clock_event_device {
void (*event_handler)(struct clock_event_device *);
int (*set_next_event)(unsigned long evt, struct clock_event_device *);
int (*set_next_ktime)(ktime_t expires, struct clock_event_device *);
+ void (*set_next_coupled)(u64 cycles, struct clock_event_device *);
ktime_t next_event;
u64 max_delta_ns;
u64 min_delta_ns;
@@ -107,6 +109,7 @@ struct clock_event_device {
u32 shift;
enum clock_event_state state_use_accessors;
unsigned int features;
+ enum clocksource_ids cs_id;
unsigned long retries;
int (*set_state_periodic)(struct clock_event_device *);
--- a/kernel/time/Kconfig
+++ b/kernel/time/Kconfig
@@ -50,6 +50,10 @@ config GENERIC_CLOCKEVENTS_MIN_ADJUST
config GENERIC_CLOCKEVENTS_COUPLED
bool
+config GENERIC_CLOCKEVENTS_COUPLED_INLINE
+ select GENERIC_CLOCKEVENTS_COUPLED
+ bool
+
# Generic update of CMOS clock
config GENERIC_CMOS_UPDATE
bool
--- a/kernel/time/clockevents.c
+++ b/kernel/time/clockevents.c
@@ -292,6 +292,38 @@ static int clockevents_program_min_delta
#endif /* CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST */
+#ifdef CONFIG_GENERIC_CLOCKEVENTS_COUPLED
+#ifdef CONFIG_GENERIC_CLOCKEVENTS_COUPLED_INLINE
+#include <asm/clock_inlined.h>
+#else
+static __always_inline void
+arch_inlined_clockevent_set_next_coupled(u64 u64 cycles, struct clock_event_device *dev) { }
+#endif
+
+static inline bool clockevent_set_next_coupled(struct clock_event_device *dev, ktime_t expires)
+{
+ u64 cycles;
+
+ if (unlikely(!(dev->features & CLOCK_EVT_FEAT_CLOCKSOURCE_COUPLED)))
+ return false;
+
+ if (unlikely(!ktime_expiry_to_cycles(dev->cs_id, expires, &cycles)))
+ return false;
+
+ if (IS_ENABLED(CONFIG_GENERIC_CLOCKEVENTS_COUPLED_INLINE))
+ arch_inlined_clockevent_set_next_coupled(cycles, dev);
+ else
+ dev->set_next_coupled(cycles, dev);
+ return true;
+}
+
+#else
+static inline bool clockevent_set_next_coupled(struct clock_event_device *dev, ktime_t expires)
+{
+ return false;
+}
+#endif
+
/**
* clockevents_program_event - Reprogram the clock event device.
* @dev: device to program
@@ -300,11 +332,10 @@ static int clockevents_program_min_delta
*
* Returns 0 on success, -ETIME when the event is in the past.
*/
-int clockevents_program_event(struct clock_event_device *dev, ktime_t expires,
- bool force)
+int clockevents_program_event(struct clock_event_device *dev, ktime_t expires, bool force)
{
- unsigned long long clc;
int64_t delta;
+ u64 cycles;
int rc;
if (WARN_ON_ONCE(expires < 0))
@@ -323,6 +354,9 @@ int clockevents_program_event(struct clo
if (unlikely(dev->features & CLOCK_EVT_FEAT_HRTIMER))
return dev->set_next_ktime(expires, dev);
+ if (likely(clockevent_set_next_coupled(dev, expires)))
+ return 0;
+
delta = ktime_to_ns(ktime_sub(expires, ktime_get()));
if (delta <= 0)
return force ? clockevents_program_min_delta(dev) : -ETIME;
@@ -330,8 +364,8 @@ int clockevents_program_event(struct clo
delta = min(delta, (int64_t) dev->max_delta_ns);
delta = max(delta, (int64_t) dev->min_delta_ns);
- clc = ((unsigned long long) delta * dev->mult) >> dev->shift;
- rc = dev->set_next_event((unsigned long) clc, dev);
+ cycles = ((u64)delta * dev->mult) >> dev->shift;
+ rc = dev->set_next_event((unsigned long) cycles, dev);
return (rc && force) ? clockevents_program_min_delta(dev) : rc;
} | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:36:45 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | The TSC deadline timer is directly coupled to the TSC and setting the next
deadline is tedious as the clockevents core code converts the
CLOCK_MONOTONIC based absolute expiry time to a relative expiry by reading
the current time from the TSC. It converts that delta to cycles and hands
the result to lapic_next_deadline(), which then has read to the TSC and add
the delta to program the timer.
The core code now supports coupled clock event devices and can provide the
expiry time in TSC cycles directly without reading the TSC at all.
This obviouly works only when the TSC is the current clocksource, but
that's the default for all modern CPUs which implement the TSC deadline
timer. If the TSC is not the current clocksource (e.g. early boot) then the
core code falls back to the relative set_next_event() callback as before.
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
Cc: x86@kernel.org
---
arch/x86/Kconfig | 1 +
arch/x86/include/asm/clock_inlined.h | 8 ++++++++
arch/x86/kernel/apic/apic.c | 12 ++++++------
arch/x86/kernel/tsc.c | 3 ++-
4 files changed, 17 insertions(+), 7 deletions(-)
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -164,6 +164,7 @@ config X86
select EDAC_SUPPORT
select GENERIC_CLOCKEVENTS_BROADCAST if X86_64 || (X86_32 && X86_LOCAL_APIC)
select GENERIC_CLOCKEVENTS_BROADCAST_IDLE if GENERIC_CLOCKEVENTS_BROADCAST
+ select GENERIC_CLOCKEVENTS_COUPLED_INLINE if X86_64
select GENERIC_CLOCKEVENTS_MIN_ADJUST
select GENERIC_CMOS_UPDATE
select GENERIC_CPU_AUTOPROBE
--- a/arch/x86/include/asm/clock_inlined.h
+++ b/arch/x86/include/asm/clock_inlined.h
@@ -11,4 +11,12 @@ static __always_inline u64 arch_inlined_
return (u64)rdtsc_ordered();
}
+struct clock_event_device;
+
+static __always_inline void
+arch_inlined_clockevent_set_next_coupled(u64 cycles, struct clock_event_device *evt)
+{
+ native_wrmsrq(MSR_IA32_TSC_DEADLINE, cycles);
+}
+
#endif
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -591,14 +591,14 @@ static void setup_APIC_timer(void)
if (this_cpu_has(X86_FEATURE_TSC_DEADLINE_TIMER)) {
levt->name = "lapic-deadline";
- levt->features &= ~(CLOCK_EVT_FEAT_PERIODIC |
- CLOCK_EVT_FEAT_DUMMY);
+ levt->features &= ~(CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_DUMMY);
+ levt->features |= CLOCK_EVT_FEAT_CLOCKSOURCE_COUPLED;
+ levt->cs_id = CSID_X86_TSC;
levt->set_next_event = lapic_next_deadline;
- clockevents_config_and_register(levt,
- tsc_khz * (1000 / TSC_DIVISOR),
- 0xF, ~0UL);
- } else
+ clockevents_config_and_register(levt, tsc_khz * (1000 / TSC_DIVISOR), 0xF, ~0UL);
+ } else {
clockevents_register_device(levt);
+ }
apic_update_vector(smp_processor_id(), LOCAL_TIMER_VECTOR, true);
}
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -1203,7 +1203,8 @@ static struct clocksource clocksource_ts
CLOCK_SOURCE_VALID_FOR_HRES |
CLOCK_SOURCE_CAN_INLINE_READ |
CLOCK_SOURCE_MUST_VERIFY |
- CLOCK_SOURCE_VERIFY_PERCPU,
+ CLOCK_SOURCE_VERIFY_PERCPU |
+ CLOCK_SOURCE_HAS_COUPLED_CLOCK_EVENT,
.id = CSID_X86_TSC,
.vdso_clock_mode = VDSO_CLOCKMODE_TSC,
.enable = tsc_cs_enable, | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:36:49 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | The debug object coverage in hrtimer_start_range_ns() happens too late to
do anything useful. Implement the init assert assertion part and invoke
that early in hrtimer_start_range_ns().
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
kernel/time/hrtimer.c | 43 ++++++++++++++++++++++++++++++++++++++-----
1 file changed, 38 insertions(+), 5 deletions(-)
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -441,12 +441,37 @@ static bool hrtimer_fixup_free(void *add
}
}
+/* Stub timer callback for improperly used timers. */
+static enum hrtimer_restart stub_timer(struct hrtimer *unused)
+{
+ WARN_ON_ONCE(1);
+ return HRTIMER_NORESTART;
+}
+
+/*
+ * hrtimer_fixup_assert_init is called when:
+ * - an untracked/uninit-ed object is found
+ */
+static bool hrtimer_fixup_assert_init(void *addr, enum debug_obj_state state)
+{
+ struct hrtimer *timer = addr;
+
+ switch (state) {
+ case ODEBUG_STATE_NOTAVAILABLE:
+ hrtimer_setup(timer, stub_timer, CLOCK_MONOTONIC, 0);
+ return true;
+ default:
+ return false;
+ }
+}
+
static const struct debug_obj_descr hrtimer_debug_descr = {
- .name = "hrtimer",
- .debug_hint = hrtimer_debug_hint,
- .fixup_init = hrtimer_fixup_init,
- .fixup_activate = hrtimer_fixup_activate,
- .fixup_free = hrtimer_fixup_free,
+ .name = "hrtimer",
+ .debug_hint = hrtimer_debug_hint,
+ .fixup_init = hrtimer_fixup_init,
+ .fixup_activate = hrtimer_fixup_activate,
+ .fixup_free = hrtimer_fixup_free,
+ .fixup_assert_init = hrtimer_fixup_assert_init,
};
static inline void debug_hrtimer_init(struct hrtimer *timer)
@@ -470,6 +495,11 @@ static inline void debug_hrtimer_deactiv
debug_object_deactivate(timer, &hrtimer_debug_descr);
}
+static inline void debug_hrtimer_assert_init(struct hrtimer *timer)
+{
+ debug_object_assert_init(timer, &hrtimer_debug_descr);
+}
+
void destroy_hrtimer_on_stack(struct hrtimer *timer)
{
debug_object_free(timer, &hrtimer_debug_descr);
@@ -483,6 +513,7 @@ static inline void debug_hrtimer_init_on
static inline void debug_hrtimer_activate(struct hrtimer *timer,
enum hrtimer_mode mode) { }
static inline void debug_hrtimer_deactivate(struct hrtimer *timer) { }
+static inline void debug_hrtimer_assert_init(struct hrtimer *timer) { }
#endif
static inline void debug_setup(struct hrtimer *timer, clockid_t clockid, enum hrtimer_mode mode)
@@ -1359,6 +1390,8 @@ void hrtimer_start_range_ns(struct hrtim
struct hrtimer_clock_base *base;
unsigned long flags;
+ debug_hrtimer_assert_init(timer);
+
/*
* Check whether the HRTIMER_MODE_SOFT bit and hrtimer.is_soft
* match on CONFIG_PREEMPT_RT = n. With PREEMPT_RT check the hard | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:36:54 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | hrtimer_start() when invoked with an already armed timer traces like:
<comm>-.. [032] d.h2. 5.002263: hrtimer_cancel: hrtimer= ....
<comm>-.. [032] d.h1. 5.002263: hrtimer_start: hrtimer= ....
Which is incorrect as the timer doesn't get canceled. Just the expiry time
changes. The internal dequeue operation which is required for that is not
really interesting for trace analysis. But it makes it tedious to keep real
cancellations and the above case apart.
Remove the cancel tracing in hrtimer_start() and add a 'was_armed'
indicator to the hrtimer start tracepoint, which clearly indicates what the
state of the hrtimer is when hrtimer_start() is invoked:
<comm>-.. [032] d.h1. 6.200103: hrtimer_start: hrtimer= .... was_armed=0
<comm>-.. [032] d.h1. 6.200558: hrtimer_start: hrtimer= .... was_armed=1
Fixes: c6a2a1770245 ("hrtimer: Add tracepoint for hrtimers")
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
include/trace/events/timer.h | 11 +++++++----
kernel/time/hrtimer.c | 43 ++++++++++++++++++++-----------------------
2 files changed, 27 insertions(+), 27 deletions(-)
--- a/include/trace/events/timer.h
+++ b/include/trace/events/timer.h
@@ -218,12 +218,13 @@ TRACE_EVENT(hrtimer_setup,
* hrtimer_start - called when the hrtimer is started
* @hrtimer: pointer to struct hrtimer
* @mode: the hrtimers mode
+ * @was_armed: Was armed when hrtimer_start*() was invoked
*/
TRACE_EVENT(hrtimer_start,
- TP_PROTO(struct hrtimer *hrtimer, enum hrtimer_mode mode),
+ TP_PROTO(struct hrtimer *hrtimer, enum hrtimer_mode mode, bool was_armed),
- TP_ARGS(hrtimer, mode),
+ TP_ARGS(hrtimer, mode, was_armed),
TP_STRUCT__entry(
__field( void *, hrtimer )
@@ -231,6 +232,7 @@ TRACE_EVENT(hrtimer_start,
__field( s64, expires )
__field( s64, softexpires )
__field( enum hrtimer_mode, mode )
+ __field( bool, was_armed )
),
TP_fast_assign(
@@ -239,13 +241,14 @@ TRACE_EVENT(hrtimer_start,
__entry->expires = hrtimer_get_expires(hrtimer);
__entry->softexpires = hrtimer_get_softexpires(hrtimer);
__entry->mode = mode;
+ __entry->was_armed = was_armed;
),
TP_printk("hrtimer=%p function=%ps expires=%llu softexpires=%llu "
- "mode=%s", __entry->hrtimer, __entry->function,
+ "mode=%s was_armed=%d", __entry->hrtimer, __entry->function,
(unsigned long long) __entry->expires,
(unsigned long long) __entry->softexpires,
- decode_hrtimer_mode(__entry->mode))
+ decode_hrtimer_mode(__entry->mode), __entry->was_armed)
);
/**
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -529,17 +529,10 @@ static inline void debug_setup_on_stack(
trace_hrtimer_setup(timer, clockid, mode);
}
-static inline void debug_activate(struct hrtimer *timer,
- enum hrtimer_mode mode)
+static inline void debug_activate(struct hrtimer *timer, enum hrtimer_mode mode, bool was_armed)
{
debug_hrtimer_activate(timer, mode);
- trace_hrtimer_start(timer, mode);
-}
-
-static inline void debug_deactivate(struct hrtimer *timer)
-{
- debug_hrtimer_deactivate(timer);
- trace_hrtimer_cancel(timer);
+ trace_hrtimer_start(timer, mode, was_armed);
}
static struct hrtimer_clock_base *
@@ -1137,9 +1130,9 @@ EXPORT_SYMBOL_GPL(hrtimer_forward);
* Returns true when the new timer is the leftmost timer in the tree.
*/
static bool enqueue_hrtimer(struct hrtimer *timer, struct hrtimer_clock_base *base,
- enum hrtimer_mode mode)
+ enum hrtimer_mode mode, bool was_armed)
{
- debug_activate(timer, mode);
+ debug_activate(timer, mode, was_armed);
WARN_ON_ONCE(!base->cpu_base->online);
base->cpu_base->active_bases |= 1 << base->index;
@@ -1199,6 +1192,8 @@ remove_hrtimer(struct hrtimer *timer, st
if (state & HRTIMER_STATE_ENQUEUED) {
bool reprogram;
+ debug_hrtimer_deactivate(timer);
+
/*
* Remove the timer and force reprogramming when high
* resolution mode is active and the timer is on the current
@@ -1207,7 +1202,6 @@ remove_hrtimer(struct hrtimer *timer, st
* reprogramming happens in the interrupt handler. This is a
* rare case and less expensive than a smp call.
*/
- debug_deactivate(timer);
reprogram = base->cpu_base == this_cpu_ptr(&hrtimer_bases);
/*
@@ -1274,15 +1268,15 @@ static int __hrtimer_start_range_ns(stru
{
struct hrtimer_cpu_base *this_cpu_base = this_cpu_ptr(&hrtimer_bases);
struct hrtimer_clock_base *new_base;
- bool force_local, first;
+ bool force_local, first, was_armed;
/*
* If the timer is on the local cpu base and is the first expiring
* timer then this might end up reprogramming the hardware twice
- * (on removal and on enqueue). To avoid that by prevent the
- * reprogram on removal, keep the timer local to the current CPU
- * and enforce reprogramming after it is queued no matter whether
- * it is the new first expiring timer again or not.
+ * (on removal and on enqueue). To avoid that prevent the reprogram
+ * on removal, keep the timer local to the current CPU and enforce
+ * reprogramming after it is queued no matter whether it is the new
+ * first expiring timer again or not.
*/
force_local = base->cpu_base == this_cpu_base;
force_local &= base->cpu_base->next_timer == timer;
@@ -1304,7 +1298,7 @@ static int __hrtimer_start_range_ns(stru
* avoids programming the underlying clock event twice (once at
* removal and once after enqueue).
*/
- remove_hrtimer(timer, base, true, force_local);
+ was_armed = remove_hrtimer(timer, base, true, force_local);
if (mode & HRTIMER_MODE_REL)
tim = ktime_add_safe(tim, __hrtimer_cb_get_time(base->clockid));
@@ -1321,7 +1315,7 @@ static int __hrtimer_start_range_ns(stru
new_base = base;
}
- first = enqueue_hrtimer(timer, new_base, mode);
+ first = enqueue_hrtimer(timer, new_base, mode, was_armed);
/*
* If the hrtimer interrupt is running, then it will reevaluate the
@@ -1439,8 +1433,11 @@ int hrtimer_try_to_cancel(struct hrtimer
base = lock_hrtimer_base(timer, &flags);
- if (!hrtimer_callback_running(timer))
+ if (!hrtimer_callback_running(timer)) {
ret = remove_hrtimer(timer, base, false, false);
+ if (ret)
+ trace_hrtimer_cancel(timer);
+ }
unlock_hrtimer_base(timer, &flags);
@@ -1877,7 +1874,7 @@ static void __run_hrtimer(struct hrtimer
*/
if (restart != HRTIMER_NORESTART &&
!(timer->state & HRTIMER_STATE_ENQUEUED))
- enqueue_hrtimer(timer, base, HRTIMER_MODE_ABS);
+ enqueue_hrtimer(timer, base, HRTIMER_MODE_ABS, false);
/*
* Separate the ->running assignment from the ->state assignment.
@@ -2356,7 +2353,7 @@ static void migrate_hrtimer_list(struct
while ((node = timerqueue_getnext(&old_base->active))) {
timer = container_of(node, struct hrtimer, node);
BUG_ON(hrtimer_callback_running(timer));
- debug_deactivate(timer);
+ debug_hrtimer_deactivate(timer);
/*
* Mark it as ENQUEUED not INACTIVE otherwise the
@@ -2373,7 +2370,7 @@ static void migrate_hrtimer_list(struct
* sort out already expired timers and reprogram the
* event device.
*/
- enqueue_hrtimer(timer, new_base, HRTIMER_MODE_ABS);
+ enqueue_hrtimer(timer, new_base, HRTIMER_MODE_ABS, true);
}
} | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:36:59 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | Simplify and tidy up the code where possible.
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
kernel/time/hrtimer.c | 48 +++++++++++++++---------------------------------
1 file changed, 15 insertions(+), 33 deletions(-)
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -838,13 +838,12 @@ static void retrigger_next_event(void *a
* In periodic low resolution mode, the next softirq expiration
* must also be updated.
*/
- raw_spin_lock(&base->lock);
+ guard(raw_spinlock)(&base->lock);
hrtimer_update_base(base);
if (hrtimer_hres_active(base))
hrtimer_force_reprogram(base, 0);
else
hrtimer_update_next_event(base);
- raw_spin_unlock(&base->lock);
}
/*
@@ -994,7 +993,6 @@ static bool update_needs_ipi(struct hrti
void clock_was_set(unsigned int bases)
{
cpumask_var_t mask;
- int cpu;
if (!hrtimer_highres_enabled() && !tick_nohz_is_active())
goto out_timerfd;
@@ -1005,24 +1003,19 @@ void clock_was_set(unsigned int bases)
}
/* Avoid interrupting CPUs if possible */
- cpus_read_lock();
- for_each_online_cpu(cpu) {
- struct hrtimer_cpu_base *cpu_base;
- unsigned long flags;
+ scoped_guard(cpus_read_lock) {
+ int cpu;
- cpu_base = &per_cpu(hrtimer_bases, cpu);
- raw_spin_lock_irqsave(&cpu_base->lock, flags);
+ for_each_online_cpu(cpu) {
+ struct hrtimer_cpu_base *cpu_base = &per_cpu(hrtimer_bases, cpu);
- if (update_needs_ipi(cpu_base, bases))
- cpumask_set_cpu(cpu, mask);
-
- raw_spin_unlock_irqrestore(&cpu_base->lock, flags);
+ guard(raw_spinlock_irqsave)(&cpu_base->lock);
+ if (update_needs_ipi(cpu_base, bases))
+ cpumask_set_cpu(cpu, mask);
+ }
+ scoped_guard(preempt)
+ smp_call_function_many(mask, retrigger_next_event, NULL, 1);
}
-
- preempt_disable();
- smp_call_function_many(mask, retrigger_next_event, NULL, 1);
- preempt_enable();
- cpus_read_unlock();
free_cpumask_var(mask);
out_timerfd:
@@ -1600,15 +1593,11 @@ u64 hrtimer_get_next_event(void)
{
struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
u64 expires = KTIME_MAX;
- unsigned long flags;
-
- raw_spin_lock_irqsave(&cpu_base->lock, flags);
+ guard(raw_spinlock_irqsave)(&cpu_base->lock);
if (!hrtimer_hres_active(cpu_base))
expires = __hrtimer_get_next_event(cpu_base, HRTIMER_ACTIVE_ALL);
- raw_spin_unlock_irqrestore(&cpu_base->lock, flags);
-
return expires;
}
@@ -1623,25 +1612,18 @@ u64 hrtimer_next_event_without(const str
{
struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
u64 expires = KTIME_MAX;
- unsigned long flags;
-
- raw_spin_lock_irqsave(&cpu_base->lock, flags);
+ guard(raw_spinlock_irqsave)(&cpu_base->lock);
if (hrtimer_hres_active(cpu_base)) {
unsigned int active;
if (!cpu_base->softirq_activated) {
active = cpu_base->active_bases & HRTIMER_ACTIVE_SOFT;
- expires = __hrtimer_next_event_base(cpu_base, exclude,
- active, KTIME_MAX);
+ expires = __hrtimer_next_event_base(cpu_base, exclude, active, KTIME_MAX);
}
active = cpu_base->active_bases & HRTIMER_ACTIVE_HARD;
- expires = __hrtimer_next_event_base(cpu_base, exclude, active,
- expires);
+ expires = __hrtimer_next_event_base(cpu_base, exclude, active, expires);
}
-
- raw_spin_unlock_irqrestore(&cpu_base->lock, flags);
-
return expires;
}
#endif | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:37:04 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | No point in accessing the timer twice.
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
kernel/time/hrtimer.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -810,10 +810,11 @@ static void hrtimer_reprogram(struct hrt
{
struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
struct hrtimer_clock_base *base = timer->base;
- ktime_t expires = ktime_sub(hrtimer_get_expires(timer), base->offset);
+ ktime_t expires = hrtimer_get_expires(timer);
- WARN_ON_ONCE(hrtimer_get_expires(timer) < 0);
+ WARN_ON_ONCE(expires < 0);
+ expires = ktime_sub(expires, base->offset);
/*
* CLOCK_REALTIME timer might be requested with an absolute
* expiry time which is less than base->offset. Set it to 0. | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:37:14 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | As this code has some major surgery ahead, clean up coding style and bring
comments up to date.
No functional change intended.
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
kernel/time/hrtimer.c | 364 +++++++++++++++++++-------------------------------
1 file changed, 143 insertions(+), 221 deletions(-)
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -77,43 +77,22 @@ static ktime_t __hrtimer_cb_get_time(clo
* to reach a base using a clockid, hrtimer_clockid_to_base()
* is used to convert from clockid to the proper hrtimer_base_type.
*/
+
+#define BASE_INIT(idx, cid) \
+ [idx] = { .index = idx, .clockid = cid }
+
DEFINE_PER_CPU(struct hrtimer_cpu_base, hrtimer_bases) =
{
.lock = __RAW_SPIN_LOCK_UNLOCKED(hrtimer_bases.lock),
- .clock_base =
- {
- {
- .index = HRTIMER_BASE_MONOTONIC,
- .clockid = CLOCK_MONOTONIC,
- },
- {
- .index = HRTIMER_BASE_REALTIME,
- .clockid = CLOCK_REALTIME,
- },
- {
- .index = HRTIMER_BASE_BOOTTIME,
- .clockid = CLOCK_BOOTTIME,
- },
- {
- .index = HRTIMER_BASE_TAI,
- .clockid = CLOCK_TAI,
- },
- {
- .index = HRTIMER_BASE_MONOTONIC_SOFT,
- .clockid = CLOCK_MONOTONIC,
- },
- {
- .index = HRTIMER_BASE_REALTIME_SOFT,
- .clockid = CLOCK_REALTIME,
- },
- {
- .index = HRTIMER_BASE_BOOTTIME_SOFT,
- .clockid = CLOCK_BOOTTIME,
- },
- {
- .index = HRTIMER_BASE_TAI_SOFT,
- .clockid = CLOCK_TAI,
- },
+ .clock_base = {
+ BASE_INIT(HRTIMER_BASE_MONOTONIC, CLOCK_MONOTONIC),
+ BASE_INIT(HRTIMER_BASE_REALTIME, CLOCK_REALTIME),
+ BASE_INIT(HRTIMER_BASE_BOOTTIME, CLOCK_BOOTTIME),
+ BASE_INIT(HRTIMER_BASE_TAI, CLOCK_TAI),
+ BASE_INIT(HRTIMER_BASE_MONOTONIC_SOFT, CLOCK_MONOTONIC),
+ BASE_INIT(HRTIMER_BASE_REALTIME_SOFT, CLOCK_REALTIME),
+ BASE_INIT(HRTIMER_BASE_BOOTTIME_SOFT, CLOCK_BOOTTIME),
+ BASE_INIT(HRTIMER_BASE_TAI_SOFT, CLOCK_TAI),
},
.csd = CSD_INIT(retrigger_next_event, NULL)
};
@@ -150,18 +129,19 @@ static inline void hrtimer_schedule_hres
* single place
*/
#ifdef CONFIG_SMP
-
/*
* We require the migration_base for lock_hrtimer_base()/switch_hrtimer_base()
* such that hrtimer_callback_running() can unconditionally dereference
* timer->base->cpu_base
*/
static struct hrtimer_cpu_base migration_cpu_base = {
- .clock_base = { {
- .cpu_base = &migration_cpu_base,
- .seq = SEQCNT_RAW_SPINLOCK_ZERO(migration_cpu_base.seq,
- &migration_cpu_base.lock),
- }, },
+ .clock_base = {
+ [0] = {
+ .cpu_base = &migration_cpu_base,
+ .seq = SEQCNT_RAW_SPINLOCK_ZERO(migration_cpu_base.seq,
+ &migration_cpu_base.lock),
+ },
+ },
};
#define migration_base migration_cpu_base.clock_base[0]
@@ -178,15 +158,13 @@ static struct hrtimer_cpu_base migration
* possible to set timer->base = &migration_base and drop the lock: the timer
* remains locked.
*/
-static
-struct hrtimer_clock_base *lock_hrtimer_base(const struct hrtimer *timer,
- unsigned long *flags)
+static struct hrtimer_clock_base *lock_hrtimer_base(const struct hrtimer *timer,
+ unsigned long *flags)
__acquires(&timer->base->lock)
{
- struct hrtimer_clock_base *base;
-
for (;;) {
- base = READ_ONCE(timer->base);
+ struct hrtimer_clock_base *base = READ_ONCE(timer->base);
+
if (likely(base != &migration_base)) {
raw_spin_lock_irqsave(&base->cpu_base->lock, *flags);
if (likely(base == timer->base))
@@ -239,7 +217,7 @@ static bool hrtimer_suitable_target(stru
return expires >= new_base->cpu_base->expires_next;
}
-static inline struct hrtimer_cpu_base *get_target_base(struct hrtimer_cpu_base *base, int pinned)
+static inline struct hrtimer_cpu_base *get_target_base(struct hrtimer_cpu_base *base, bool pinned)
{
if (!hrtimer_base_is_online(base)) {
int cpu = cpumask_any_and(cpu_online_mask, housekeeping_cpumask(HK_TYPE_TIMER));
@@ -267,8 +245,7 @@ static inline struct hrtimer_cpu_base *g
* the timer callback is currently running.
*/
static inline struct hrtimer_clock_base *
-switch_hrtimer_base(struct hrtimer *timer, struct hrtimer_clock_base *base,
- int pinned)
+switch_hrtimer_base(struct hrtimer *timer, struct hrtimer_clock_base *base, bool pinned)
{
struct hrtimer_cpu_base *new_cpu_base, *this_cpu_base;
struct hrtimer_clock_base *new_base;
@@ -281,13 +258,12 @@ switch_hrtimer_base(struct hrtimer *time
if (base != new_base) {
/*
- * We are trying to move timer to new_base.
- * However we can't change timer's base while it is running,
- * so we keep it on the same CPU. No hassle vs. reprogramming
- * the event source in the high resolution case. The softirq
- * code will take care of this when the timer function has
- * completed. There is no conflict as we hold the lock until
- * the timer is enqueued.
+ * We are trying to move timer to new_base. However we can't
+ * change timer's base while it is running, so we keep it on
+ * the same CPU. No hassle vs. reprogramming the event source
+ * in the high resolution case. The remote CPU will take care
+ * of this when the timer function has completed. There is no
+ * conflict as we hold the lock until the timer is enqueued.
*/
if (unlikely(hrtimer_callback_running(timer)))
return base;
@@ -297,8 +273,7 @@ switch_hrtimer_base(struct hrtimer *time
raw_spin_unlock(&base->cpu_base->lock);
raw_spin_lock(&new_base->cpu_base->lock);
- if (!hrtimer_suitable_target(timer, new_base, new_cpu_base,
- this_cpu_base)) {
+ if (!hrtimer_suitable_target(timer, new_base, new_cpu_base, this_cpu_base)) {
raw_spin_unlock(&new_base->cpu_base->lock);
raw_spin_lock(&base->cpu_base->lock);
new_cpu_base = this_cpu_base;
@@ -317,14 +292,13 @@ switch_hrtimer_base(struct hrtimer *time
#else /* CONFIG_SMP */
-static inline struct hrtimer_clock_base *
-lock_hrtimer_base(const struct hrtimer *timer, unsigned long *flags)
+static inline struct hrtimer_clock_base *lock_hrtimer_base(const struct hrtimer *timer,
+ unsigned long *flags)
__acquires(&timer->base->cpu_base->lock)
{
struct hrtimer_clock_base *base = timer->base;
raw_spin_lock_irqsave(&base->cpu_base->lock, *flags);
-
return base;
}
@@ -484,8 +458,7 @@ static inline void debug_hrtimer_init_on
debug_object_init_on_stack(timer, &hrtimer_debug_descr);
}
-static inline void debug_hrtimer_activate(struct hrtimer *timer,
- enum hrtimer_mode mode)
+static inline void debug_hrtimer_activate(struct hrtimer *timer, enum hrtimer_mode mode)
{
debug_object_activate(timer, &hrtimer_debug_descr);
}
@@ -510,8 +483,7 @@ EXPORT_SYMBOL_GPL(destroy_hrtimer_on_sta
static inline void debug_hrtimer_init(struct hrtimer *timer) { }
static inline void debug_hrtimer_init_on_stack(struct hrtimer *timer) { }
-static inline void debug_hrtimer_activate(struct hrtimer *timer,
- enum hrtimer_mode mode) { }
+static inline void debug_hrtimer_activate(struct hrtimer *timer, enum hrtimer_mode mode) { }
static inline void debug_hrtimer_deactivate(struct hrtimer *timer) { }
static inline void debug_hrtimer_assert_init(struct hrtimer *timer) { }
#endif
@@ -549,13 +521,12 @@ static struct hrtimer_clock_base *
return &cpu_base->clock_base[idx];
}
-#define for_each_active_base(base, cpu_base, active) \
+#define for_each_active_base(base, cpu_base, active) \
while ((base = __next_base((cpu_base), &(active))))
static ktime_t __hrtimer_next_event_base(struct hrtimer_cpu_base *cpu_base,
const struct hrtimer *exclude,
- unsigned int active,
- ktime_t expires_next)
+ unsigned int active, ktime_t expires_next)
{
struct hrtimer_clock_base *base;
ktime_t expires;
@@ -618,29 +589,24 @@ static ktime_t __hrtimer_next_event_base
* - HRTIMER_ACTIVE_SOFT, or
* - HRTIMER_ACTIVE_HARD.
*/
-static ktime_t
-__hrtimer_get_next_event(struct hrtimer_cpu_base *cpu_base, unsigned int active_mask)
+static ktime_t __hrtimer_get_next_event(struct hrtimer_cpu_base *cpu_base, unsigned int active_mask)
{
- unsigned int active;
struct hrtimer *next_timer = NULL;
ktime_t expires_next = KTIME_MAX;
+ unsigned int active;
if (!cpu_base->softirq_activated && (active_mask & HRTIMER_ACTIVE_SOFT)) {
active = cpu_base->active_bases & HRTIMER_ACTIVE_SOFT;
cpu_base->softirq_next_timer = NULL;
- expires_next = __hrtimer_next_event_base(cpu_base, NULL,
- active, KTIME_MAX);
-
+ expires_next = __hrtimer_next_event_base(cpu_base, NULL, active, KTIME_MAX);
next_timer = cpu_base->softirq_next_timer;
}
if (active_mask & HRTIMER_ACTIVE_HARD) {
active = cpu_base->active_bases & HRTIMER_ACTIVE_HARD;
cpu_base->next_timer = next_timer;
- expires_next = __hrtimer_next_event_base(cpu_base, NULL, active,
- expires_next);
+ expires_next = __hrtimer_next_event_base(cpu_base, NULL, active, expires_next);
}
-
return expires_next;
}
@@ -681,8 +647,8 @@ static inline ktime_t hrtimer_update_bas
ktime_t *offs_boot = &base->clock_base[HRTIMER_BASE_BOOTTIME].offset;
ktime_t *offs_tai = &base->clock_base[HRTIMER_BASE_TAI].offset;
- ktime_t now = ktime_get_update_offsets_now(&base->clock_was_set_seq,
- offs_real, offs_boot, offs_tai);
+ ktime_t now = ktime_get_update_offsets_now(&base->clock_was_set_seq, offs_real,
+ offs_boot, offs_tai);
base->clock_base[HRTIMER_BASE_REALTIME_SOFT].offset = *offs_real;
base->clock_base[HRTIMER_BASE_BOOTTIME_SOFT].offset = *offs_boot;
@@ -702,8 +668,7 @@ static inline int hrtimer_hres_active(st
cpu_base->hres_active : 0;
}
-static void __hrtimer_reprogram(struct hrtimer_cpu_base *cpu_base,
- struct hrtimer *next_timer,
+static void __hrtimer_reprogram(struct hrtimer_cpu_base *cpu_base, struct hrtimer *next_timer,
ktime_t expires_next)
{
cpu_base->expires_next = expires_next;
@@ -736,12 +701,9 @@ static void __hrtimer_reprogram(struct h
* next event
* Called with interrupts disabled and base->lock held
*/
-static void
-hrtimer_force_reprogram(struct hrtimer_cpu_base *cpu_base, int skip_equal)
+static void hrtimer_force_reprogram(struct hrtimer_cpu_base *cpu_base, bool skip_equal)
{
- ktime_t expires_next;
-
- expires_next = hrtimer_update_next_event(cpu_base);
+ ktime_t expires_next = hrtimer_update_next_event(cpu_base);
if (skip_equal && expires_next == cpu_base->expires_next)
return;
@@ -752,41 +714,31 @@ hrtimer_force_reprogram(struct hrtimer_c
/* High resolution timer related functions */
#ifdef CONFIG_HIGH_RES_TIMERS
-/*
- * High resolution timer enabled ?
- */
+/* High resolution timer enabled ? */
static bool hrtimer_hres_enabled __read_mostly = true;
unsigned int hrtimer_resolution __read_mostly = LOW_RES_NSEC;
EXPORT_SYMBOL_GPL(hrtimer_resolution);
-/*
- * Enable / Disable high resolution mode
- */
+/* Enable / Disable high resolution mode */
static int __init setup_hrtimer_hres(char *str)
{
return (kstrtobool(str, &hrtimer_hres_enabled) == 0);
}
-
__setup("highres=", setup_hrtimer_hres);
-/*
- * hrtimer_high_res_enabled - query, if the highres mode is enabled
- */
-static inline int hrtimer_is_hres_enabled(void)
+/* hrtimer_high_res_enabled - query, if the highres mode is enabled */
+static inline bool hrtimer_is_hres_enabled(void)
{
return hrtimer_hres_enabled;
}
-/*
- * Switch to high resolution mode
- */
+/* Switch to high resolution mode */
static void hrtimer_switch_to_hres(void)
{
struct hrtimer_cpu_base *base = this_cpu_ptr(&hrtimer_bases);
if (tick_init_highres()) {
- pr_warn("Could not switch to high resolution mode on CPU %u\n",
- base->cpu);
+ pr_warn("Could not switch to high resolution mode on CPU %u\n", base->cpu);
return;
}
base->hres_active = 1;
@@ -800,10 +752,11 @@ static void hrtimer_switch_to_hres(void)
#else
-static inline int hrtimer_is_hres_enabled(void) { return 0; }
+static inline bool hrtimer_is_hres_enabled(void) { return 0; }
static inline void hrtimer_switch_to_hres(void) { }
#endif /* CONFIG_HIGH_RES_TIMERS */
+
/*
* Retrigger next event is called after clock was set with interrupts
* disabled through an SMP function call or directly from low level
@@ -841,7 +794,7 @@ static void retrigger_next_event(void *a
guard(raw_spinlock)(&base->lock);
hrtimer_update_base(base);
if (hrtimer_hres_active(base))
- hrtimer_force_reprogram(base, 0);
+ hrtimer_force_reprogram(base, /* skip_equal */ false);
else
hrtimer_update_next_event(base);
}
@@ -887,8 +840,7 @@ static void hrtimer_reprogram(struct hrt
timer_cpu_base->softirq_next_timer = timer;
timer_cpu_base->softirq_expires_next = expires;
- if (!ktime_before(expires, timer_cpu_base->expires_next) ||
- !reprogram)
+ if (!ktime_before(expires, timer_cpu_base->expires_next) || !reprogram)
return;
}
@@ -914,8 +866,7 @@ static void hrtimer_reprogram(struct hrt
__hrtimer_reprogram(cpu_base, timer, expires);
}
-static bool update_needs_ipi(struct hrtimer_cpu_base *cpu_base,
- unsigned int active)
+static bool update_needs_ipi(struct hrtimer_cpu_base *cpu_base, unsigned int active)
{
struct hrtimer_clock_base *base;
unsigned int seq;
@@ -1050,11 +1001,8 @@ void hrtimers_resume_local(void)
retrigger_next_event(NULL);
}
-/*
- * Counterpart to lock_hrtimer_base above:
- */
-static inline
-void unlock_hrtimer_base(const struct hrtimer *timer, unsigned long *flags)
+/* Counterpart to lock_hrtimer_base above */
+static inline void unlock_hrtimer_base(const struct hrtimer *timer, unsigned long *flags)
__releases(&timer->base->cpu_base->lock)
{
raw_spin_unlock_irqrestore(&timer->base->cpu_base->lock, *flags);
@@ -1071,7 +1019,7 @@ void unlock_hrtimer_base(const struct hr
* .. note::
* This only updates the timer expiry value and does not requeue the timer.
*
- * There is also a variant of the function hrtimer_forward_now().
+ * There is also a variant of this function: hrtimer_forward_now().
*
* Context: Can be safely called from the callback function of @timer. If called
* from other contexts @timer must neither be enqueued nor running the
@@ -1081,8 +1029,8 @@ void unlock_hrtimer_base(const struct hr
*/
u64 hrtimer_forward(struct hrtimer *timer, ktime_t now, ktime_t interval)
{
- u64 orun = 1;
ktime_t delta;
+ u64 orun = 1;
delta = ktime_sub(now, hrtimer_get_expires(timer));
@@ -1118,13 +1066,15 @@ EXPORT_SYMBOL_GPL(hrtimer_forward);
* enqueue_hrtimer - internal function to (re)start a timer
*
* The timer is inserted in expiry order. Insertion into the
- * red black tree is O(log(n)). Must hold the base lock.
+ * red black tree is O(log(n)).
*
* Returns true when the new timer is the leftmost timer in the tree.
*/
static bool enqueue_hrtimer(struct hrtimer *timer, struct hrtimer_clock_base *base,
enum hrtimer_mode mode, bool was_armed)
{
+ lockdep_assert_held(&base->cpu_base->lock);
+
debug_activate(timer, mode, was_armed);
WARN_ON_ONCE(!base->cpu_base->online);
@@ -1139,20 +1089,19 @@ static bool enqueue_hrtimer(struct hrtim
/*
* __remove_hrtimer - internal function to remove a timer
*
- * Caller must hold the base lock.
- *
* High resolution timer mode reprograms the clock event device when the
* timer is the one which expires next. The caller can disable this by setting
* reprogram to zero. This is useful, when the context does a reprogramming
* anyway (e.g. timer interrupt)
*/
-static void __remove_hrtimer(struct hrtimer *timer,
- struct hrtimer_clock_base *base,
- u8 newstate, int reprogram)
+static void __remove_hrtimer(struct hrtimer *timer, struct hrtimer_clock_base *base,
+ u8 newstate, bool reprogram)
{
struct hrtimer_cpu_base *cpu_base = base->cpu_base;
u8 state = timer->state;
+ lockdep_assert_held(&cpu_base->lock);
+
/* Pairs with the lockless read in hrtimer_is_queued() */
WRITE_ONCE(timer->state, newstate);
if (!(state & HRTIMER_STATE_ENQUEUED))
@@ -1162,26 +1111,25 @@ static void __remove_hrtimer(struct hrti
cpu_base->active_bases &= ~(1 << base->index);
/*
- * Note: If reprogram is false we do not update
- * cpu_base->next_timer. This happens when we remove the first
- * timer on a remote cpu. No harm as we never dereference
- * cpu_base->next_timer. So the worst thing what can happen is
- * an superfluous call to hrtimer_force_reprogram() on the
- * remote cpu later on if the same timer gets enqueued again.
+ * If reprogram is false don't update cpu_base->next_timer and do not
+ * touch the clock event device.
+ *
+ * This happens when removing the first timer on a remote CPU, which
+ * will be handled by the remote CPU's interrupt. It also happens when
+ * a local timer is removed to be immediately restarted. That's handled
+ * at the call site.
*/
if (reprogram && timer == cpu_base->next_timer && !timer->is_lazy)
- hrtimer_force_reprogram(cpu_base, 1);
+ hrtimer_force_reprogram(cpu_base, /* skip_equal */ true);
}
-/*
- * remove hrtimer, called with base lock held
- */
-static inline int
-remove_hrtimer(struct hrtimer *timer, struct hrtimer_clock_base *base,
- bool restart, bool keep_local)
+static inline bool remove_hrtimer(struct hrtimer *timer, struct hrtimer_clock_base *base,
+ bool restart, bool keep_local)
{
u8 state = timer->state;
+ lockdep_assert_held(&base->cpu_base->lock);
+
if (state & HRTIMER_STATE_ENQUEUED) {
bool reprogram;
@@ -1209,9 +1157,9 @@ remove_hrtimer(struct hrtimer *timer, st
reprogram &= !keep_local;
__remove_hrtimer(timer, base, state, reprogram);
- return 1;
+ return true;
}
- return 0;
+ return false;
}
static inline ktime_t hrtimer_update_lowres(struct hrtimer *timer, ktime_t tim,
@@ -1230,34 +1178,27 @@ static inline ktime_t hrtimer_update_low
return tim;
}
-static void
-hrtimer_update_softirq_timer(struct hrtimer_cpu_base *cpu_base, bool reprogram)
+static void hrtimer_update_softirq_timer(struct hrtimer_cpu_base *cpu_base, bool reprogram)
{
- ktime_t expires;
+ ktime_t expires = __hrtimer_get_next_event(cpu_base, HRTIMER_ACTIVE_SOFT);
/*
- * Find the next SOFT expiration.
- */
- expires = __hrtimer_get_next_event(cpu_base, HRTIMER_ACTIVE_SOFT);
-
- /*
- * reprogramming needs to be triggered, even if the next soft
- * hrtimer expires at the same time than the next hard
+ * Reprogramming needs to be triggered, even if the next soft
+ * hrtimer expires at the same time as the next hard
* hrtimer. cpu_base->softirq_expires_next needs to be updated!
*/
if (expires == KTIME_MAX)
return;
/*
- * cpu_base->*next_timer is recomputed by __hrtimer_get_next_event()
- * cpu_base->*expires_next is only set by hrtimer_reprogram()
+ * cpu_base->next_timer is recomputed by __hrtimer_get_next_event()
+ * cpu_base->expires_next is only set by hrtimer_reprogram()
*/
hrtimer_reprogram(cpu_base->softirq_next_timer, reprogram);
}
-static int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
- u64 delta_ns, const enum hrtimer_mode mode,
- struct hrtimer_clock_base *base)
+static bool __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim, u64 delta_ns,
+ const enum hrtimer_mode mode, struct hrtimer_clock_base *base)
{
struct hrtimer_cpu_base *this_cpu_base = this_cpu_ptr(&hrtimer_bases);
struct hrtimer_clock_base *new_base;
@@ -1301,12 +1242,10 @@ static int __hrtimer_start_range_ns(stru
hrtimer_set_expires_range_ns(timer, tim, delta_ns);
/* Switch the timer base, if necessary: */
- if (!force_local) {
- new_base = switch_hrtimer_base(timer, base,
- mode & HRTIMER_MODE_PINNED);
- } else {
+ if (!force_local)
+ new_base = switch_hrtimer_base(timer, base, mode & HRTIMER_MODE_PINNED);
+ else
new_base = base;
- }
first = enqueue_hrtimer(timer, new_base, mode, was_armed);
@@ -1319,9 +1258,12 @@ static int __hrtimer_start_range_ns(stru
if (!force_local) {
/*
- * If the current CPU base is online, then the timer is
- * never queued on a remote CPU if it would be the first
- * expiring timer there.
+ * If the current CPU base is online, then the timer is never
+ * queued on a remote CPU if it would be the first expiring
+ * timer there unless the timer callback is currently executed
+ * on the remote CPU. In the latter case the remote CPU will
+ * re-evaluate the first expiring timer after completing the
+ * callbacks.
*/
if (hrtimer_base_is_online(this_cpu_base))
return first;
@@ -1336,7 +1278,7 @@ static int __hrtimer_start_range_ns(stru
smp_call_function_single_async(new_cpu_base->cpu, &new_cpu_base->csd);
}
- return 0;
+ return false;
}
/*
@@ -1350,7 +1292,7 @@ static int __hrtimer_start_range_ns(stru
*/
if (timer->is_lazy) {
if (new_base->cpu_base->expires_next <= hrtimer_get_expires(timer))
- return 0;
+ return false;
}
/*
@@ -1358,8 +1300,8 @@ static int __hrtimer_start_range_ns(stru
* reprogramming on removal and enqueue. Force reprogram the
* hardware by evaluating the new first expiring timer.
*/
- hrtimer_force_reprogram(new_base->cpu_base, 1);
- return 0;
+ hrtimer_force_reprogram(new_base->cpu_base, /* skip_equal */ true);
+ return false;
}
/**
@@ -1371,8 +1313,8 @@ static int __hrtimer_start_range_ns(stru
* relative (HRTIMER_MODE_REL), and pinned (HRTIMER_MODE_PINNED);
* softirq based mode is considered for debug purpose only!
*/
-void hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
- u64 delta_ns, const enum hrtimer_mode mode)
+void hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim, u64 delta_ns,
+ const enum hrtimer_mode mode)
{
struct hrtimer_clock_base *base;
unsigned long flags;
@@ -1464,8 +1406,7 @@ static void hrtimer_cpu_base_unlock_expi
* the timer callback to finish. Drop expiry_lock and reacquire it. That
* allows the waiter to acquire the lock and make progress.
*/
-static void hrtimer_sync_wait_running(struct hrtimer_cpu_base *cpu_base,
- unsigned long flags)
+static void hrtimer_sync_wait_running(struct hrtimer_cpu_base *cpu_base, unsigned long flags)
{
if (atomic_read(&cpu_base->timer_waiters)) {
raw_spin_unlock_irqrestore(&cpu_base->lock, flags);
@@ -1530,14 +1471,10 @@ void hrtimer_cancel_wait_running(const s
spin_unlock_bh(&base->cpu_base->softirq_expiry_lock);
}
#else
-static inline void
-hrtimer_cpu_base_init_expiry_lock(struct hrtimer_cpu_base *base) { }
-static inline void
-hrtimer_cpu_base_lock_expiry(struct hrtimer_cpu_base *base) { }
-static inline void
-hrtimer_cpu_base_unlock_expiry(struct hrtimer_cpu_base *base) { }
-static inline void hrtimer_sync_wait_running(struct hrtimer_cpu_base *base,
- unsigned long flags) { }
+static inline void hrtimer_cpu_base_init_expiry_lock(struct hrtimer_cpu_base *base) { }
+static inline void hrtimer_cpu_base_lock_expiry(struct hrtimer_cpu_base *base) { }
+static inline void hrtimer_cpu_base_unlock_expiry(struct hrtimer_cpu_base *base) { }
+static inline void hrtimer_sync_wait_running(struct hrtimer_cpu_base *base, unsigned long fl) { }
#endif
/**
@@ -1668,8 +1605,7 @@ ktime_t hrtimer_cb_get_time(const struct
}
EXPORT_SYMBOL_GPL(hrtimer_cb_get_time);
-static void __hrtimer_setup(struct hrtimer *timer,
- enum hrtimer_restart (*function)(struct hrtimer *),
+static void __hrtimer_setup(struct hrtimer *timer, enum hrtimer_restart (*fn)(struct hrtimer *),
clockid_t clock_id, enum hrtimer_mode mode)
{
bool softtimer = !!(mode & HRTIMER_MODE_SOFT);
@@ -1705,10 +1641,10 @@ static void __hrtimer_setup(struct hrtim
timer->base = &cpu_base->clock_base[base];
timerqueue_init(&timer->node);
- if (WARN_ON_ONCE(!function))
+ if (WARN_ON_ONCE(!fn))
ACCESS_PRIVATE(timer, function) = hrtimer_dummy_timeout;
else
- ACCESS_PRIVATE(timer, function) = function;
+ ACCESS_PRIVATE(timer, function) = fn;
}
/**
@@ -1767,12 +1703,10 @@ bool hrtimer_active(const struct hrtimer
base = READ_ONCE(timer->base);
seq = raw_read_seqcount_begin(&base->seq);
- if (timer->state != HRTIMER_STATE_INACTIVE ||
- base->running == timer)
+ if (timer->state != HRTIMER_STATE_INACTIVE || base->running == timer)
return true;
- } while (read_seqcount_retry(&base->seq, seq) ||
- base != READ_ONCE(timer->base));
+ } while (read_seqcount_retry(&base->seq, seq) || base != READ_ONCE(timer->base));
return false;
}
@@ -1795,11 +1729,9 @@ EXPORT_SYMBOL_GPL(hrtimer_active);
* a false negative if the read side got smeared over multiple consecutive
* __run_hrtimer() invocations.
*/
-
-static void __run_hrtimer(struct hrtimer_cpu_base *cpu_base,
- struct hrtimer_clock_base *base,
- struct hrtimer *timer, ktime_t *now,
- unsigned long flags) __must_hold(&cpu_base->lock)
+static void __run_hrtimer(struct hrtimer_cpu_base *cpu_base, struct hrtimer_clock_base *base,
+ struct hrtimer *timer, ktime_t *now, unsigned long flags)
+ __must_hold(&cpu_base->lock)
{
enum hrtimer_restart (*fn)(struct hrtimer *);
bool expires_in_hardirq;
@@ -1819,7 +1751,7 @@ static void __run_hrtimer(struct hrtimer
*/
raw_write_seqcount_barrier(&base->seq);
- __remove_hrtimer(timer, base, HRTIMER_STATE_INACTIVE, 0);
+ __remove_hrtimer(timer, base, HRTIMER_STATE_INACTIVE, false);
fn = ACCESS_PRIVATE(timer, function);
/*
@@ -1854,8 +1786,7 @@ static void __run_hrtimer(struct hrtimer
* hrtimer_start_range_ns() can have popped in and enqueued the timer
* for us already.
*/
- if (restart != HRTIMER_NORESTART &&
- !(timer->state & HRTIMER_STATE_ENQUEUED))
+ if (restart != HRTIMER_NORESTART && !(timer->state & HRTIMER_STATE_ENQUEUED))
enqueue_hrtimer(timer, base, HRTIMER_MODE_ABS, false);
/*
@@ -1874,8 +1805,8 @@ static void __run_hrtimer(struct hrtimer
static void __hrtimer_run_queues(struct hrtimer_cpu_base *cpu_base, ktime_t now,
unsigned long flags, unsigned int active_mask)
{
- struct hrtimer_clock_base *base;
unsigned int active = cpu_base->active_bases & active_mask;
+ struct hrtimer_clock_base *base;
for_each_active_base(base, cpu_base, active) {
struct timerqueue_node *node;
@@ -1951,11 +1882,10 @@ void hrtimer_interrupt(struct clock_even
retry:
cpu_base->in_hrtirq = 1;
/*
- * We set expires_next to KTIME_MAX here with cpu_base->lock
- * held to prevent that a timer is enqueued in our queue via
- * the migration code. This does not affect enqueueing of
- * timers which run their callback and need to be requeued on
- * this CPU.
+ * Set expires_next to KTIME_MAX, which prevents that remote CPUs queue
+ * timers while __hrtimer_run_queues() is expiring the clock bases.
+ * Timers which are re/enqueued on the local CPU are not affected by
+ * this.
*/
cpu_base->expires_next = KTIME_MAX;
@@ -2069,8 +1999,7 @@ void hrtimer_run_queues(void)
*/
static enum hrtimer_restart hrtimer_wakeup(struct hrtimer *timer)
{
- struct hrtimer_sleeper *t =
- container_of(timer, struct hrtimer_sleeper, timer);
+ struct hrtimer_sleeper *t = container_of(timer, struct hrtimer_sleeper, timer);
struct task_struct *task = t->task;
t->task = NULL;
@@ -2088,8 +2017,7 @@ static enum hrtimer_restart hrtimer_wake
* Wrapper around hrtimer_start_expires() for hrtimer_sleeper based timers
* to allow PREEMPT_RT to tweak the delivery mode (soft/hardirq context)
*/
-void hrtimer_sleeper_start_expires(struct hrtimer_sleeper *sl,
- enum hrtimer_mode mode)
+void hrtimer_sleeper_start_expires(struct hrtimer_sleeper *sl, enum hrtimer_mode mode)
{
/*
* Make the enqueue delivery mode check work on RT. If the sleeper
@@ -2105,8 +2033,8 @@ void hrtimer_sleeper_start_expires(struc
}
EXPORT_SYMBOL_GPL(hrtimer_sleeper_start_expires);
-static void __hrtimer_setup_sleeper(struct hrtimer_sleeper *sl,
- clockid_t clock_id, enum hrtimer_mode mode)
+static void __hrtimer_setup_sleeper(struct hrtimer_sleeper *sl, clockid_t clock_id,
+ enum hrtimer_mode mode)
{
/*
* On PREEMPT_RT enabled kernels hrtimers which are not explicitly
@@ -2142,8 +2070,8 @@ static void __hrtimer_setup_sleeper(stru
* @clock_id: the clock to be used
* @mode: timer mode abs/rel
*/
-void hrtimer_setup_sleeper_on_stack(struct hrtimer_sleeper *sl,
- clockid_t clock_id, enum hrtimer_mode mode)
+void hrtimer_setup_sleeper_on_stack(struct hrtimer_sleeper *sl, clockid_t clock_id,
+ enum hrtimer_mode mode)
{
debug_setup_on_stack(&sl->timer, clock_id, mode);
__hrtimer_setup_sleeper(sl, clock_id, mode);
@@ -2216,8 +2144,7 @@ static long __sched hrtimer_nanosleep_re
return ret;
}
-long hrtimer_nanosleep(ktime_t rqtp, const enum hrtimer_mode mode,
- const clockid_t clockid)
+long hrtimer_nanosleep(ktime_t rqtp, const enum hrtimer_mode mode, const clockid_t clockid)
{
struct restart_block *restart;
struct hrtimer_sleeper t;
@@ -2260,8 +2187,7 @@ SYSCALL_DEFINE2(nanosleep, struct __kern
current->restart_block.fn = do_no_restart_syscall;
current->restart_block.nanosleep.type = rmtp ? TT_NATIVE : TT_NONE;
current->restart_block.nanosleep.rmtp = rmtp;
- return hrtimer_nanosleep(timespec64_to_ktime(tu), HRTIMER_MODE_REL,
- CLOCK_MONOTONIC);
+ return hrtimer_nanosleep(timespec64_to_ktime(tu), HRTIMER_MODE_REL, CLOCK_MONOTONIC);
}
#endif
@@ -2269,7 +2195,7 @@ SYSCALL_DEFINE2(nanosleep, struct __kern
#ifdef CONFIG_COMPAT_32BIT_TIME
SYSCALL_DEFINE2(nanosleep_time32, struct old_timespec32 __user *, rqtp,
- struct old_timespec32 __user *, rmtp)
+ struct old_timespec32 __user *, rmtp)
{
struct timespec64 tu;
@@ -2282,8 +2208,7 @@ SYSCALL_DEFINE2(nanosleep_time32, struct
current->restart_block.fn = do_no_restart_syscall;
current->restart_block.nanosleep.type = rmtp ? TT_COMPAT : TT_NONE;
current->restart_block.nanosleep.compat_rmtp = rmtp;
- return hrtimer_nanosleep(timespec64_to_ktime(tu), HRTIMER_MODE_REL,
- CLOCK_MONOTONIC);
+ return hrtimer_nanosleep(timespec64_to_ktime(tu), HRTIMER_MODE_REL, CLOCK_MONOTONIC);
}
#endif
@@ -2293,9 +2218,8 @@ SYSCALL_DEFINE2(nanosleep_time32, struct
int hrtimers_prepare_cpu(unsigned int cpu)
{
struct hrtimer_cpu_base *cpu_base = &per_cpu(hrtimer_bases, cpu);
- int i;
- for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
+ for (int i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
struct hrtimer_clock_base *clock_b = &cpu_base->clock_base[i];
clock_b->cpu_base = cpu_base;
@@ -2329,8 +2253,8 @@ int hrtimers_cpu_starting(unsigned int c
static void migrate_hrtimer_list(struct hrtimer_clock_base *old_base,
struct hrtimer_clock_base *new_base)
{
- struct hrtimer *timer;
struct timerqueue_node *node;
+ struct hrtimer *timer;
while ((node = timerqueue_getnext(&old_base->active))) {
timer = container_of(node, struct hrtimer, node);
@@ -2342,7 +2266,7 @@ static void migrate_hrtimer_list(struct
* timer could be seen as !active and just vanish away
* under us on another CPU
*/
- __remove_hrtimer(timer, old_base, HRTIMER_STATE_ENQUEUED, 0);
+ __remove_hrtimer(timer, old_base, HRTIMER_STATE_ENQUEUED, false);
timer->base = new_base;
/*
* Enqueue the timers on the new cpu. This does not
@@ -2358,7 +2282,7 @@ static void migrate_hrtimer_list(struct
int hrtimers_cpu_dying(unsigned int dying_cpu)
{
- int i, ncpu = cpumask_any_and(cpu_active_mask, housekeeping_cpumask(HK_TYPE_TIMER));
+ int ncpu = cpumask_any_and(cpu_active_mask, housekeeping_cpumask(HK_TYPE_TIMER));
struct hrtimer_cpu_base *old_base, *new_base;
old_base = this_cpu_ptr(&hrtimer_bases);
@@ -2371,10 +2295,8 @@ int hrtimers_cpu_dying(unsigned int dyin
raw_spin_lock(&old_base->lock);
raw_spin_lock_nested(&new_base->lock, SINGLE_DEPTH_NESTING);
- for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
- migrate_hrtimer_list(&old_base->clock_base[i],
- &new_base->clock_base[i]);
- }
+ for (int i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++)
+ migrate_hrtimer_list(&old_base->clock_base[i], &new_base->clock_base[i]);
/* Tell the other CPU to retrigger the next event */
smp_call_function_single(ncpu, retrigger_next_event, NULL, 0); | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:37:09 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | Use bool for the various flags as that creates better code in the hot path.
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
include/linux/hrtimer_defs.h | 10 +++++-----
kernel/time/hrtimer.c | 25 +++++++++++++------------
2 files changed, 18 insertions(+), 17 deletions(-)
--- a/include/linux/hrtimer_defs.h
+++ b/include/linux/hrtimer_defs.h
@@ -83,11 +83,11 @@ struct hrtimer_cpu_base {
unsigned int cpu;
unsigned int active_bases;
unsigned int clock_was_set_seq;
- unsigned int hres_active : 1,
- in_hrtirq : 1,
- hang_detected : 1,
- softirq_activated : 1,
- online : 1;
+ bool hres_active;
+ bool in_hrtirq;
+ bool hang_detected;
+ bool softirq_activated;
+ bool online;
#ifdef CONFIG_HIGH_RES_TIMERS
unsigned int nr_events;
unsigned short nr_retries;
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -741,7 +741,7 @@ static void hrtimer_switch_to_hres(void)
pr_warn("Could not switch to high resolution mode on CPU %u\n", base->cpu);
return;
}
- base->hres_active = 1;
+ base->hres_active = true;
hrtimer_resolution = HIGH_RES_NSEC;
tick_setup_sched_timer(true);
@@ -1854,7 +1854,7 @@ static __latent_entropy void hrtimer_run
now = hrtimer_update_base(cpu_base);
__hrtimer_run_queues(cpu_base, now, flags, HRTIMER_ACTIVE_SOFT);
- cpu_base->softirq_activated = 0;
+ cpu_base->softirq_activated = false;
hrtimer_update_softirq_timer(cpu_base, true);
raw_spin_unlock_irqrestore(&cpu_base->lock, flags);
@@ -1881,7 +1881,7 @@ void hrtimer_interrupt(struct clock_even
raw_spin_lock_irqsave(&cpu_base->lock, flags);
entry_time = now = hrtimer_update_base(cpu_base);
retry:
- cpu_base->in_hrtirq = 1;
+ cpu_base->in_hrtirq = true;
/*
* Set expires_next to KTIME_MAX, which prevents that remote CPUs queue
* timers while __hrtimer_run_queues() is expiring the clock bases.
@@ -1892,7 +1892,7 @@ void hrtimer_interrupt(struct clock_even
if (!ktime_before(now, cpu_base->softirq_expires_next)) {
cpu_base->softirq_expires_next = KTIME_MAX;
- cpu_base->softirq_activated = 1;
+ cpu_base->softirq_activated = true;
raise_timer_softirq(HRTIMER_SOFTIRQ);
}
@@ -1905,12 +1905,12 @@ void hrtimer_interrupt(struct clock_even
* against it.
*/
cpu_base->expires_next = expires_next;
- cpu_base->in_hrtirq = 0;
+ cpu_base->in_hrtirq = false;
raw_spin_unlock_irqrestore(&cpu_base->lock, flags);
/* Reprogramming necessary ? */
if (!tick_program_event(expires_next, 0)) {
- cpu_base->hang_detected = 0;
+ cpu_base->hang_detected = false;
return;
}
@@ -1939,7 +1939,7 @@ void hrtimer_interrupt(struct clock_even
* time away.
*/
cpu_base->nr_hangs++;
- cpu_base->hang_detected = 1;
+ cpu_base->hang_detected = true;
raw_spin_unlock_irqrestore(&cpu_base->lock, flags);
delta = ktime_sub(now, entry_time);
@@ -1987,7 +1987,7 @@ void hrtimer_run_queues(void)
if (!ktime_before(now, cpu_base->softirq_expires_next)) {
cpu_base->softirq_expires_next = KTIME_MAX;
- cpu_base->softirq_activated = 1;
+ cpu_base->softirq_activated = true;
raise_timer_softirq(HRTIMER_SOFTIRQ);
}
@@ -2239,13 +2239,14 @@ int hrtimers_cpu_starting(unsigned int c
/* Clear out any left over state from a CPU down operation */
cpu_base->active_bases = 0;
- cpu_base->hres_active = 0;
- cpu_base->hang_detected = 0;
+ cpu_base->hres_active = false;
+ cpu_base->hang_detected = false;
cpu_base->next_timer = NULL;
cpu_base->softirq_next_timer = NULL;
cpu_base->expires_next = KTIME_MAX;
cpu_base->softirq_expires_next = KTIME_MAX;
- cpu_base->online = 1;
+ cpu_base->softirq_activated = false;
+ cpu_base->online = true;
return 0;
}
@@ -2303,7 +2304,7 @@ int hrtimers_cpu_dying(unsigned int dyin
smp_call_function_single(ncpu, retrigger_next_event, NULL, 0);
raw_spin_unlock(&new_base->lock);
- old_base->online = 0;
+ old_base->online = false;
raw_spin_unlock(&old_base->lock);
return 0; | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:37:18 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | All 'u8' flags are true booleans, so make it entirely clear that these can
only contain true or false.
This is especially true for hrtimer::state, which has a historical leftover
of using the state with bitwise operations. That was used in the early
hrtimer implementation with several bits, but then converted to a boolean
state. But that conversion missed to replace the bit OR and bit check
operations all over the place, which creates suboptimal code. As of today
'state' is a misnomer because it's only purpose is to reflect whether the
timer is enqueued into the RB-tree or not. Rename it to 'is_queued' and
make all operations on it boolean.
This reduces text size from 8926 to 8732 bytes.
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
include/linux/hrtimer.h | 31 +---------------------
include/linux/hrtimer_types.h | 12 ++++----
kernel/time/hrtimer.c | 58 ++++++++++++++++++++++++++++--------------
kernel/time/timer_list.c | 2 -
4 files changed, 49 insertions(+), 54 deletions(-)
--- a/include/linux/hrtimer.h
+++ b/include/linux/hrtimer.h
@@ -63,33 +63,6 @@ enum hrtimer_mode {
HRTIMER_MODE_REL_PINNED_HARD = HRTIMER_MODE_REL_PINNED | HRTIMER_MODE_HARD,
};
-/*
- * Values to track state of the timer
- *
- * Possible states:
- *
- * 0x00 inactive
- * 0x01 enqueued into rbtree
- *
- * The callback state is not part of the timer->state because clearing it would
- * mean touching the timer after the callback, this makes it impossible to free
- * the timer from the callback function.
- *
- * Therefore we track the callback state in:
- *
- * timer->base->cpu_base->running == timer
- *
- * On SMP it is possible to have a "callback function running and enqueued"
- * status. It happens for example when a posix timer expired and the callback
- * queued a signal. Between dropping the lock which protects the posix timer
- * and reacquiring the base lock of the hrtimer, another CPU can deliver the
- * signal and rearm the timer.
- *
- * All state transitions are protected by cpu_base->lock.
- */
-#define HRTIMER_STATE_INACTIVE 0x00
-#define HRTIMER_STATE_ENQUEUED 0x01
-
/**
* struct hrtimer_sleeper - simple sleeper structure
* @timer: embedded timer structure
@@ -300,8 +273,8 @@ extern bool hrtimer_active(const struct
*/
static inline bool hrtimer_is_queued(struct hrtimer *timer)
{
- /* The READ_ONCE pairs with the update functions of timer->state */
- return !!(READ_ONCE(timer->state) & HRTIMER_STATE_ENQUEUED);
+ /* The READ_ONCE pairs with the update functions of timer->is_queued */
+ return READ_ONCE(timer->is_queued);
}
/*
--- a/include/linux/hrtimer_types.h
+++ b/include/linux/hrtimer_types.h
@@ -28,7 +28,7 @@ enum hrtimer_restart {
* was armed.
* @function: timer expiry callback function
* @base: pointer to the timer base (per cpu and per clock)
- * @state: state information (See bit values above)
+ * @is_queued: Indicates whether a timer is enqueued or not
* @is_rel: Set if the timer was armed relative
* @is_soft: Set if hrtimer will be expired in soft interrupt context.
* @is_hard: Set if hrtimer will be expired in hard interrupt context
@@ -43,11 +43,11 @@ struct hrtimer {
ktime_t _softexpires;
enum hrtimer_restart (*__private function)(struct hrtimer *);
struct hrtimer_clock_base *base;
- u8 state;
- u8 is_rel;
- u8 is_soft;
- u8 is_hard;
- u8 is_lazy;
+ bool is_queued;
+ bool is_rel;
+ bool is_soft;
+ bool is_hard;
+ bool is_lazy;
};
#endif /* _LINUX_HRTIMER_TYPES_H */
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -50,6 +50,28 @@
#include "tick-internal.h"
/*
+ * Constants to set the queued state of the timer (INACTIVE, ENQUEUED)
+ *
+ * The callback state is kept separate in the CPU base because having it in
+ * the timer would required touching the timer after the callback, which
+ * makes it impossible to free the timer from the callback function.
+ *
+ * Therefore we track the callback state in:
+ *
+ * timer->base->cpu_base->running == timer
+ *
+ * On SMP it is possible to have a "callback function running and enqueued"
+ * status. It happens for example when a posix timer expired and the callback
+ * queued a signal. Between dropping the lock which protects the posix timer
+ * and reacquiring the base lock of the hrtimer, another CPU can deliver the
+ * signal and rearm the timer.
+ *
+ * All state transitions are protected by cpu_base->lock.
+ */
+#define HRTIMER_STATE_INACTIVE false
+#define HRTIMER_STATE_ENQUEUED true
+
+/*
* The resolution of the clocks. The resolution value is returned in
* the clock_getres() system call to give application programmers an
* idea of the (in)accuracy of timers. Timer values are rounded up to
@@ -1038,7 +1060,7 @@ u64 hrtimer_forward(struct hrtimer *time
if (delta < 0)
return 0;
- if (WARN_ON(timer->state & HRTIMER_STATE_ENQUEUED))
+ if (WARN_ON(timer->is_queued))
return 0;
if (interval < hrtimer_resolution)
@@ -1082,7 +1104,7 @@ static bool enqueue_hrtimer(struct hrtim
base->cpu_base->active_bases |= 1 << base->index;
/* Pairs with the lockless read in hrtimer_is_queued() */
- WRITE_ONCE(timer->state, HRTIMER_STATE_ENQUEUED);
+ WRITE_ONCE(timer->is_queued, HRTIMER_STATE_ENQUEUED);
return timerqueue_add(&base->active, &timer->node);
}
@@ -1096,18 +1118,18 @@ static bool enqueue_hrtimer(struct hrtim
* anyway (e.g. timer interrupt)
*/
static void __remove_hrtimer(struct hrtimer *timer, struct hrtimer_clock_base *base,
- u8 newstate, bool reprogram)
+ bool newstate, bool reprogram)
{
struct hrtimer_cpu_base *cpu_base = base->cpu_base;
- u8 state = timer->state;
lockdep_assert_held(&cpu_base->lock);
- /* Pairs with the lockless read in hrtimer_is_queued() */
- WRITE_ONCE(timer->state, newstate);
- if (!(state & HRTIMER_STATE_ENQUEUED))
+ if (!timer->is_queued)
return;
+ /* Pairs with the lockless read in hrtimer_is_queued() */
+ WRITE_ONCE(timer->is_queued, newstate);
+
if (!timerqueue_del(&base->active, &timer->node))
cpu_base->active_bases &= ~(1 << base->index);
@@ -1127,11 +1149,11 @@ static void __remove_hrtimer(struct hrti
static inline bool remove_hrtimer(struct hrtimer *timer, struct hrtimer_clock_base *base,
bool restart, bool keep_local)
{
- u8 state = timer->state;
+ bool queued_state = timer->is_queued;
lockdep_assert_held(&base->cpu_base->lock);
- if (state & HRTIMER_STATE_ENQUEUED) {
+ if (queued_state) {
bool reprogram;
debug_hrtimer_deactivate(timer);
@@ -1153,11 +1175,11 @@ static inline bool remove_hrtimer(struct
* and a moment later when it's requeued).
*/
if (!restart)
- state = HRTIMER_STATE_INACTIVE;
+ queued_state = HRTIMER_STATE_INACTIVE;
else
reprogram &= !keep_local;
- __remove_hrtimer(timer, base, state, reprogram);
+ __remove_hrtimer(timer, base, queued_state, reprogram);
return true;
}
return false;
@@ -1704,7 +1726,7 @@ bool hrtimer_active(const struct hrtimer
base = READ_ONCE(timer->base);
seq = raw_read_seqcount_begin(&base->seq);
- if (timer->state != HRTIMER_STATE_INACTIVE || base->running == timer)
+ if (timer->is_queued || base->running == timer)
return true;
} while (read_seqcount_retry(&base->seq, seq) || base != READ_ONCE(timer->base));
@@ -1721,7 +1743,7 @@ EXPORT_SYMBOL_GPL(hrtimer_active);
* - callback: the timer is being ran
* - post: the timer is inactive or (re)queued
*
- * On the read side we ensure we observe timer->state and cpu_base->running
+ * On the read side we ensure we observe timer->is_queued and cpu_base->running
* from the same section, if anything changed while we looked at it, we retry.
* This includes timer->base changing because sequence numbers alone are
* insufficient for that.
@@ -1744,11 +1766,11 @@ static void __run_hrtimer(struct hrtimer
base->running = timer;
/*
- * Separate the ->running assignment from the ->state assignment.
+ * Separate the ->running assignment from the ->is_queued assignment.
*
* As with a regular write barrier, this ensures the read side in
* hrtimer_active() cannot observe base->running == NULL &&
- * timer->state == INACTIVE.
+ * timer->is_queued == INACTIVE.
*/
raw_write_seqcount_barrier(&base->seq);
@@ -1787,15 +1809,15 @@ static void __run_hrtimer(struct hrtimer
* hrtimer_start_range_ns() can have popped in and enqueued the timer
* for us already.
*/
- if (restart != HRTIMER_NORESTART && !(timer->state & HRTIMER_STATE_ENQUEUED))
+ if (restart == HRTIMER_RESTART && !timer->is_queued)
enqueue_hrtimer(timer, base, HRTIMER_MODE_ABS, false);
/*
- * Separate the ->running assignment from the ->state assignment.
+ * Separate the ->running assignment from the ->is_queued assignment.
*
* As with a regular write barrier, this ensures the read side in
* hrtimer_active() cannot observe base->running.timer == NULL &&
- * timer->state == INACTIVE.
+ * timer->is_queued == INACTIVE.
*/
raw_write_seqcount_barrier(&base->seq);
--- a/kernel/time/timer_list.c
+++ b/kernel/time/timer_list.c
@@ -47,7 +47,7 @@ print_timer(struct seq_file *m, struct h
int idx, u64 now)
{
SEQ_printf(m, " #%d: <%p>, %ps", idx, taddr, ACCESS_PRIVATE(timer, function));
- SEQ_printf(m, ", S:%02x", timer->state);
+ SEQ_printf(m, ", S:%02x", timer->is_queued);
SEQ_printf(m, "\n");
SEQ_printf(m, " # expires at %Lu-%Lu nsecs [in %Ld to %Ld nsecs]\n",
(unsigned long long)ktime_to_ns(hrtimer_get_softexpires(timer)), | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:37:23 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | The decision whether to keep timers on the local CPU or on the CPU they are
associated to is suboptimal and causes the expensive switch_hrtimer_base()
mechanism to be invoked more than necessary. This is especially true for
pinned timers.
Rewrite the decision logic so that the current base is kept if:
1) The callback is running on the base
2) The timer is associated to the local CPU and the first expiring timer as
that allows to optimize for reprogramming avoidance
3) The timer is associated to the local CPU and pinned
4) The timer is associated to the local CPU and timer migration is
disabled.
Only #2 was covered by the original code, but especially #3 makes a
difference for high frequency rearming timers like the scheduler hrtick
timer. If timer migration is disabled, then #4 avoids most of the base
switches.
Signed-off-by: Thomas Gleixner <tglx@kernel.org
---
kernel/time/hrtimer.c | 101 ++++++++++++++++++++++++++++++++------------------
1 file changed, 65 insertions(+), 36 deletions(-)
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -1147,7 +1147,7 @@ static void __remove_hrtimer(struct hrti
}
static inline bool remove_hrtimer(struct hrtimer *timer, struct hrtimer_clock_base *base,
- bool restart, bool keep_local)
+ bool restart, bool keep_base)
{
bool queued_state = timer->is_queued;
@@ -1177,7 +1177,7 @@ static inline bool remove_hrtimer(struct
if (!restart)
queued_state = HRTIMER_STATE_INACTIVE;
else
- reprogram &= !keep_local;
+ reprogram &= !keep_base;
__remove_hrtimer(timer, base, queued_state, reprogram);
return true;
@@ -1220,29 +1220,57 @@ static void hrtimer_update_softirq_timer
hrtimer_reprogram(cpu_base->softirq_next_timer, reprogram);
}
+#if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ_COMMON)
+static __always_inline bool hrtimer_prefer_local(bool is_local, bool is_first, bool is_pinned)
+{
+ if (static_branch_likely(&timers_migration_enabled)) {
+ /*
+ * If it is local and the first expiring timer keep it on the local
+ * CPU to optimize reprogramming of the clockevent device. Also
+ * avoid switch_hrtimer_base() overhead when local and pinned.
+ */
+ if (!is_local)
+ return false;
+ return is_first || is_pinned;
+ }
+ return is_local;
+}
+#else
+static __always_inline bool hrtimer_prefer_local(bool is_local, bool is_first, bool is_pinned)
+{
+ return is_local;
+}
+#endif
+
+static inline bool hrtimer_keep_base(struct hrtimer *timer, bool is_local, bool is_first,
+ bool is_pinned)
+{
+ /* If the timer is running the callback it has to stay on its CPU base. */
+ if (unlikely(timer->base->running == timer))
+ return true;
+
+ return hrtimer_prefer_local(is_local, is_first, is_pinned);
+}
+
static bool __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim, u64 delta_ns,
const enum hrtimer_mode mode, struct hrtimer_clock_base *base)
{
struct hrtimer_cpu_base *this_cpu_base = this_cpu_ptr(&hrtimer_bases);
- struct hrtimer_clock_base *new_base;
- bool force_local, first, was_armed;
+ bool is_pinned, first, was_first, was_armed, keep_base = false;
+ struct hrtimer_cpu_base *cpu_base = base->cpu_base;
- /*
- * If the timer is on the local cpu base and is the first expiring
- * timer then this might end up reprogramming the hardware twice
- * (on removal and on enqueue). To avoid that prevent the reprogram
- * on removal, keep the timer local to the current CPU and enforce
- * reprogramming after it is queued no matter whether it is the new
- * first expiring timer again or not.
- */
- force_local = base->cpu_base == this_cpu_base;
- force_local &= base->cpu_base->next_timer == timer;
+ was_first = cpu_base->next_timer == timer;
+ is_pinned = !!(mode & HRTIMER_MODE_PINNED);
/*
- * Don't force local queuing if this enqueue happens on a unplugged
- * CPU after hrtimer_cpu_dying() has been invoked.
+ * Don't keep it local if this enqueue happens on a unplugged CPU
+ * after hrtimer_cpu_dying() has been invoked.
*/
- force_local &= this_cpu_base->online;
+ if (likely(this_cpu_base->online)) {
+ bool is_local = cpu_base == this_cpu_base;
+
+ keep_base = hrtimer_keep_base(timer, is_local, was_first, is_pinned);
+ }
/*
* Remove an active timer from the queue. In case it is not queued
@@ -1254,8 +1282,11 @@ static bool __hrtimer_start_range_ns(str
* reprogramming later if it was the first expiring timer. This
* avoids programming the underlying clock event twice (once at
* removal and once after enqueue).
+ *
+ * @keep_base is also true if the timer callback is running on a
+ * remote CPU and for local pinned timers.
*/
- was_armed = remove_hrtimer(timer, base, true, force_local);
+ was_armed = remove_hrtimer(timer, base, true, keep_base);
if (mode & HRTIMER_MODE_REL)
tim = ktime_add_safe(tim, __hrtimer_cb_get_time(base->clockid));
@@ -1265,21 +1296,21 @@ static bool __hrtimer_start_range_ns(str
hrtimer_set_expires_range_ns(timer, tim, delta_ns);
/* Switch the timer base, if necessary: */
- if (!force_local)
- new_base = switch_hrtimer_base(timer, base, mode & HRTIMER_MODE_PINNED);
- else
- new_base = base;
+ if (!keep_base) {
+ base = switch_hrtimer_base(timer, base, is_pinned);
+ cpu_base = base->cpu_base;
+ }
- first = enqueue_hrtimer(timer, new_base, mode, was_armed);
+ first = enqueue_hrtimer(timer, base, mode, was_armed);
/*
* If the hrtimer interrupt is running, then it will reevaluate the
* clock bases and reprogram the clock event device.
*/
- if (new_base->cpu_base->in_hrtirq)
+ if (cpu_base->in_hrtirq)
return false;
- if (!force_local) {
+ if (!was_first || cpu_base != this_cpu_base) {
/*
* If the current CPU base is online, then the timer is never
* queued on a remote CPU if it would be the first expiring
@@ -1288,7 +1319,7 @@ static bool __hrtimer_start_range_ns(str
* re-evaluate the first expiring timer after completing the
* callbacks.
*/
- if (hrtimer_base_is_online(this_cpu_base))
+ if (likely(hrtimer_base_is_online(this_cpu_base)))
return first;
/*
@@ -1296,11 +1327,8 @@ static bool __hrtimer_start_range_ns(str
* already offline. If the timer is the first to expire,
* kick the remote CPU to reprogram the clock event.
*/
- if (first) {
- struct hrtimer_cpu_base *new_cpu_base = new_base->cpu_base;
-
- smp_call_function_single_async(new_cpu_base->cpu, &new_cpu_base->csd);
- }
+ if (first)
+ smp_call_function_single_async(cpu_base->cpu, &cpu_base->csd);
return false;
}
@@ -1314,16 +1342,17 @@ static bool __hrtimer_start_range_ns(str
* required.
*/
if (timer->is_lazy) {
- if (new_base->cpu_base->expires_next <= hrtimer_get_expires(timer))
+ if (cpu_base->expires_next <= hrtimer_get_expires(timer))
return false;
}
/*
- * Timer was forced to stay on the current CPU to avoid
- * reprogramming on removal and enqueue. Force reprogram the
- * hardware by evaluating the new first expiring timer.
+ * Timer was the first expiring timer and forced to stay on the
+ * current CPU to avoid reprogramming on removal and enqueue. Force
+ * reprogram the hardware by evaluating the new first expiring
+ * timer.
*/
- hrtimer_force_reprogram(new_base->cpu_base, /* skip_equal */ true);
+ hrtimer_force_reprogram(cpu_base, /* skip_equal */ true);
return false;
} | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:37:28 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | The decision to keep a timer which is associated to the local CPU on that
CPU does not take NOHZ information into account. As a result there are a
lot of hrtimer base switch invocations which end up not switching the base
and stay on the local CPU. That's just work for nothing and can be further
improved.
If the local CPU is part of the NOISE housekeeping mask, then check:
1) Whether the local CPU has the tick running, which means it is
either not idle or already expecting a timer soon.
2) Whether the tick is stopped and need_resched() is set, which
means the CPU is about to exit idle.
This reduces the amount of hrtimer base switch attempts, which end up on
the local CPU anyway, significantly and prepares for further optimizations.
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
kernel/time/hrtimer.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -1231,7 +1231,18 @@ static __always_inline bool hrtimer_pref
*/
if (!is_local)
return false;
- return is_first || is_pinned;
+ if (is_first || is_pinned)
+ return true;
+
+ /* Honour the NOHZ full restrictions */
+ if (!housekeeping_cpu(smp_processor_id(), HK_TYPE_KERNEL_NOISE))
+ return false;
+
+ /*
+ * If the tick is not stopped or need_resched() is set, then
+ * there is no point in moving the timer somewhere else.
+ */
+ return !tick_nohz_tick_stopped() || need_resched();
}
return is_local;
} | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:37:33 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | As the base switch can be avoided completely when the base stays the same
the remove/enqueue handling can be more streamlined.
Split it out into a separate function which handles both in one go which is
way more efficient and makes the code simpler to follow.
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
kernel/time/hrtimer.c | 72 +++++++++++++++++++++++++++++---------------------
1 file changed, 43 insertions(+), 29 deletions(-)
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -1147,13 +1147,11 @@ static void __remove_hrtimer(struct hrti
}
static inline bool remove_hrtimer(struct hrtimer *timer, struct hrtimer_clock_base *base,
- bool restart, bool keep_base)
+ bool newstate)
{
- bool queued_state = timer->is_queued;
-
lockdep_assert_held(&base->cpu_base->lock);
- if (queued_state) {
+ if (timer->is_queued) {
bool reprogram;
debug_hrtimer_deactivate(timer);
@@ -1168,23 +1166,35 @@ static inline bool remove_hrtimer(struct
*/
reprogram = base->cpu_base == this_cpu_ptr(&hrtimer_bases);
- /*
- * If the timer is not restarted then reprogramming is
- * required if the timer is local. If it is local and about
- * to be restarted, avoid programming it twice (on removal
- * and a moment later when it's requeued).
- */
- if (!restart)
- queued_state = HRTIMER_STATE_INACTIVE;
- else
- reprogram &= !keep_base;
-
- __remove_hrtimer(timer, base, queued_state, reprogram);
+ __remove_hrtimer(timer, base, newstate, reprogram);
return true;
}
return false;
}
+static inline bool
+remove_and_enqueue_same_base(struct hrtimer *timer, struct hrtimer_clock_base *base,
+ const enum hrtimer_mode mode, ktime_t expires, u64 delta_ns)
+{
+ /* Remove it from the timer queue if active */
+ if (timer->is_queued) {
+ debug_hrtimer_deactivate(timer);
+ timerqueue_del(&base->active, &timer->node);
+ }
+
+ /* Set the new expiry time */
+ hrtimer_set_expires_range_ns(timer, expires, delta_ns);
+
+ debug_activate(timer, mode, timer->is_queued);
+ base->cpu_base->active_bases |= 1 << base->index;
+
+ /* Pairs with the lockless read in hrtimer_is_queued() */
+ WRITE_ONCE(timer->is_queued, HRTIMER_STATE_ENQUEUED);
+
+ /* Returns true if this is the first expiring timer */
+ return timerqueue_add(&base->active, &timer->node);
+}
+
static inline ktime_t hrtimer_update_lowres(struct hrtimer *timer, ktime_t tim,
const enum hrtimer_mode mode)
{
@@ -1267,7 +1277,7 @@ static bool __hrtimer_start_range_ns(str
const enum hrtimer_mode mode, struct hrtimer_clock_base *base)
{
struct hrtimer_cpu_base *this_cpu_base = this_cpu_ptr(&hrtimer_bases);
- bool is_pinned, first, was_first, was_armed, keep_base = false;
+ bool is_pinned, first, was_first, keep_base = false;
struct hrtimer_cpu_base *cpu_base = base->cpu_base;
was_first = cpu_base->next_timer == timer;
@@ -1283,6 +1293,12 @@ static bool __hrtimer_start_range_ns(str
keep_base = hrtimer_keep_base(timer, is_local, was_first, is_pinned);
}
+ /* Calculate absolute expiry time for relative timers */
+ if (mode & HRTIMER_MODE_REL)
+ tim = ktime_add_safe(tim, __hrtimer_cb_get_time(base->clockid));
+ /* Compensate for low resolution granularity */
+ tim = hrtimer_update_lowres(timer, tim, mode);
+
/*
* Remove an active timer from the queue. In case it is not queued
* on the current CPU, make sure that remove_hrtimer() updates the
@@ -1297,22 +1313,20 @@ static bool __hrtimer_start_range_ns(str
* @keep_base is also true if the timer callback is running on a
* remote CPU and for local pinned timers.
*/
- was_armed = remove_hrtimer(timer, base, true, keep_base);
-
- if (mode & HRTIMER_MODE_REL)
- tim = ktime_add_safe(tim, __hrtimer_cb_get_time(base->clockid));
-
- tim = hrtimer_update_lowres(timer, tim, mode);
+ if (likely(keep_base)) {
+ first = remove_and_enqueue_same_base(timer, base, mode, tim, delta_ns);
+ } else {
+ /* Keep the ENQUEUED state in case it is queued */
+ bool was_armed = remove_hrtimer(timer, base, HRTIMER_STATE_ENQUEUED);
- hrtimer_set_expires_range_ns(timer, tim, delta_ns);
+ hrtimer_set_expires_range_ns(timer, tim, delta_ns);
- /* Switch the timer base, if necessary: */
- if (!keep_base) {
+ /* Switch the timer base, if necessary: */
base = switch_hrtimer_base(timer, base, is_pinned);
cpu_base = base->cpu_base;
- }
- first = enqueue_hrtimer(timer, base, mode, was_armed);
+ first = enqueue_hrtimer(timer, base, mode, was_armed);
+ }
/*
* If the hrtimer interrupt is running, then it will reevaluate the
@@ -1432,7 +1446,7 @@ int hrtimer_try_to_cancel(struct hrtimer
base = lock_hrtimer_base(timer, &flags);
if (!hrtimer_callback_running(timer)) {
- ret = remove_hrtimer(timer, base, false, false);
+ ret = remove_hrtimer(timer, base, HRTIMER_STATE_INACTIVE);
if (ret)
trace_hrtimer_cancel(timer);
} | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:37:38 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | Analyzing the reprogramming of the clock event device is essential to debug
the behaviour of the hrtimer subsystem especially with the upcoming
deferred rearming scheme.
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
include/trace/events/timer.h | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
--- a/include/trace/events/timer.h
+++ b/include/trace/events/timer.h
@@ -325,6 +325,30 @@ DEFINE_EVENT(hrtimer_class, hrtimer_canc
);
/**
+ * hrtimer_rearm - Invoked when the clockevent device is rearmed
+ * @next_event: The next expiry time (CLOCK_MONOTONIC)
+ */
+TRACE_EVENT(hrtimer_rearm,
+
+ TP_PROTO(ktime_t next_event, bool deferred),
+
+ TP_ARGS(next_event, deferred),
+
+ TP_STRUCT__entry(
+ __field( s64, next_event )
+ __field( bool, deferred )
+ ),
+
+ TP_fast_assign(
+ __entry->next_event = next_event;
+ __entry->deferred = deferred;
+ ),
+
+ TP_printk("next_event=%llu deferred=%d",
+ (unsigned long long) __entry->next_event, __entry->deferred)
+);
+
+/**
* itimer_state - called when itimer is started or canceled
* @which: name of the interval timer
* @value: the itimers value, itimer is canceled if value->it_value is | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:37:43 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | From: Peter Zijlstra <peterz@infradead.org>
Rework hrtimer_interrupt() such that reprogramming is split out into an
independent function at the end of the interrupt.
This prepares for reprogramming getting delayed beyond the end of
hrtimer_interrupt().
Notably, this changes the hang handling to always wait 100ms instead of
trying to keep it proportional to the actual delay. This simplifies the
state, also this really shouldn't be happening.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
tglx: Added the tracepoint and used a proper naming convention
---
kernel/time/hrtimer.c | 93 +++++++++++++++++++++++---------------------------
1 file changed, 44 insertions(+), 49 deletions(-)
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -690,6 +690,12 @@ static inline int hrtimer_hres_active(st
cpu_base->hres_active : 0;
}
+static inline void hrtimer_rearm_event(ktime_t expires_next, bool deferred)
+{
+ trace_hrtimer_rearm(expires_next, deferred);
+ tick_program_event(expires_next, 1);
+}
+
static void __hrtimer_reprogram(struct hrtimer_cpu_base *cpu_base, struct hrtimer *next_timer,
ktime_t expires_next)
{
@@ -715,7 +721,7 @@ static void __hrtimer_reprogram(struct h
if (!hrtimer_hres_active(cpu_base) || cpu_base->hang_detected)
return;
- tick_program_event(expires_next, 1);
+ hrtimer_rearm_event(expires_next, false);
}
/*
@@ -1939,6 +1945,28 @@ static __latent_entropy void hrtimer_run
#ifdef CONFIG_HIGH_RES_TIMERS
/*
+ * Very similar to hrtimer_force_reprogram(), except it deals with
+ * in_hrtirq and hang_detected.
+ */
+static void hrtimer_rearm(struct hrtimer_cpu_base *cpu_base, ktime_t now)
+{
+ ktime_t expires_next = hrtimer_update_next_event(cpu_base);
+
+ cpu_base->expires_next = expires_next;
+ cpu_base->in_hrtirq = false;
+
+ if (unlikely(cpu_base->hang_detected)) {
+ /*
+ * Give the system a chance to do something else than looping
+ * on hrtimer interrupts.
+ */
+ expires_next = ktime_add_ns(now, 100 * NSEC_PER_MSEC);
+ cpu_base->hang_detected = false;
+ }
+ hrtimer_rearm_event(expires_next, false);
+}
+
+/*
* High resolution timer interrupt
* Called with interrupts disabled
*/
@@ -1973,63 +2001,30 @@ void hrtimer_interrupt(struct clock_even
__hrtimer_run_queues(cpu_base, now, flags, HRTIMER_ACTIVE_HARD);
- /* Reevaluate the clock bases for the [soft] next expiry */
- expires_next = hrtimer_update_next_event(cpu_base);
- /*
- * Store the new expiry value so the migration code can verify
- * against it.
- */
- cpu_base->expires_next = expires_next;
- cpu_base->in_hrtirq = false;
- raw_spin_unlock_irqrestore(&cpu_base->lock, flags);
-
- /* Reprogramming necessary ? */
- if (!tick_program_event(expires_next, 0)) {
- cpu_base->hang_detected = false;
- return;
- }
-
/*
* The next timer was already expired due to:
* - tracing
* - long lasting callbacks
* - being scheduled away when running in a VM
*
- * We need to prevent that we loop forever in the hrtimer
- * interrupt routine. We give it 3 attempts to avoid
- * overreacting on some spurious event.
- *
- * Acquire base lock for updating the offsets and retrieving
- * the current time.
+ * We need to prevent that we loop forever in the hrtiner interrupt
+ * routine. We give it 3 attempts to avoid overreacting on some
+ * spurious event.
*/
- raw_spin_lock_irqsave(&cpu_base->lock, flags);
now = hrtimer_update_base(cpu_base);
- cpu_base->nr_retries++;
- if (++retries < 3)
- goto retry;
- /*
- * Give the system a chance to do something else than looping
- * here. We stored the entry time, so we know exactly how long
- * we spent here. We schedule the next event this amount of
- * time away.
- */
- cpu_base->nr_hangs++;
- cpu_base->hang_detected = true;
- raw_spin_unlock_irqrestore(&cpu_base->lock, flags);
+ expires_next = hrtimer_update_next_event(cpu_base);
+ if (expires_next < now) {
+ if (++retries < 3)
+ goto retry;
- delta = ktime_sub(now, entry_time);
- if ((unsigned int)delta > cpu_base->max_hang_time)
- cpu_base->max_hang_time = (unsigned int) delta;
- /*
- * Limit it to a sensible value as we enforce a longer
- * delay. Give the CPU at least 100ms to catch up.
- */
- if (delta > 100 * NSEC_PER_MSEC)
- expires_next = ktime_add_ns(now, 100 * NSEC_PER_MSEC);
- else
- expires_next = ktime_add(now, delta);
- tick_program_event(expires_next, 1);
- pr_warn_once("hrtimer: interrupt took %llu ns\n", ktime_to_ns(delta));
+ delta = ktime_sub(now, entry_time);
+ cpu_base->max_hang_time = max_t(unsigned int, cpu_base->max_hang_time, delta);
+ cpu_base->nr_hangs++;
+ cpu_base->hang_detected = true;
+ }
+
+ hrtimer_rearm(cpu_base, now);
+ raw_spin_unlock_irqrestore(&cpu_base->lock, flags);
}
#endif /* !CONFIG_HIGH_RES_TIMERS */ | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:37:48 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | The upcoming deferred rearming scheme has the same effect as the deferred
rearming when the hrtimer interrupt is executing. So it can reuse the
in_hrtirq flag, but when it gets deferred beyond the hrtimer interrupt
path, then the name does not make sense anymore.
Rename it to deferred_rearm upfront to keep the actual functional change
separate from the mechanical rename churn.
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
include/linux/hrtimer_defs.h | 4 ++--
kernel/time/hrtimer.c | 28 +++++++++-------------------
2 files changed, 11 insertions(+), 21 deletions(-)
--- a/include/linux/hrtimer_defs.h
+++ b/include/linux/hrtimer_defs.h
@@ -53,7 +53,7 @@ enum hrtimer_base_type {
* @active_bases: Bitfield to mark bases with active timers
* @clock_was_set_seq: Sequence counter of clock was set events
* @hres_active: State of high resolution mode
- * @in_hrtirq: hrtimer_interrupt() is currently executing
+ * @deferred_rearm: A deferred rearm is pending
* @hang_detected: The last hrtimer interrupt detected a hang
* @softirq_activated: displays, if the softirq is raised - update of softirq
* related settings is not required then.
@@ -84,7 +84,7 @@ struct hrtimer_cpu_base {
unsigned int active_bases;
unsigned int clock_was_set_seq;
bool hres_active;
- bool in_hrtirq;
+ bool deferred_rearm;
bool hang_detected;
bool softirq_activated;
bool online;
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -883,11 +883,8 @@ static void hrtimer_reprogram(struct hrt
if (expires >= cpu_base->expires_next)
return;
- /*
- * If the hrtimer interrupt is running, then it will reevaluate the
- * clock bases and reprogram the clock event device.
- */
- if (cpu_base->in_hrtirq)
+ /* If a deferred rearm is pending skip reprogramming the device */
+ if (cpu_base->deferred_rearm)
return;
cpu_base->next_timer = timer;
@@ -921,12 +918,8 @@ static bool update_needs_ipi(struct hrti
if (seq == cpu_base->clock_was_set_seq)
return false;
- /*
- * If the remote CPU is currently handling an hrtimer interrupt, it
- * will reevaluate the first expiring timer of all clock bases
- * before reprogramming. Nothing to do here.
- */
- if (cpu_base->in_hrtirq)
+ /* If a deferred rearm is pending the remote CPU will take care of it */
+ if (cpu_base->deferred_rearm)
return false;
/*
@@ -1334,11 +1327,8 @@ static bool __hrtimer_start_range_ns(str
first = enqueue_hrtimer(timer, base, mode, was_armed);
}
- /*
- * If the hrtimer interrupt is running, then it will reevaluate the
- * clock bases and reprogram the clock event device.
- */
- if (cpu_base->in_hrtirq)
+ /* If a deferred rearm is pending skip reprogramming the device */
+ if (cpu_base->deferred_rearm)
return false;
if (!was_first || cpu_base != this_cpu_base) {
@@ -1946,14 +1936,14 @@ static __latent_entropy void hrtimer_run
/*
* Very similar to hrtimer_force_reprogram(), except it deals with
- * in_hrtirq and hang_detected.
+ * deferred_rearm and hang_detected.
*/
static void hrtimer_rearm(struct hrtimer_cpu_base *cpu_base, ktime_t now)
{
ktime_t expires_next = hrtimer_update_next_event(cpu_base);
cpu_base->expires_next = expires_next;
- cpu_base->in_hrtirq = false;
+ cpu_base->deferred_rearm = false;
if (unlikely(cpu_base->hang_detected)) {
/*
@@ -1984,7 +1974,7 @@ void hrtimer_interrupt(struct clock_even
raw_spin_lock_irqsave(&cpu_base->lock, flags);
entry_time = now = hrtimer_update_base(cpu_base);
retry:
- cpu_base->in_hrtirq = true;
+ cpu_base->deferred_rearm = true;
/*
* Set expires_next to KTIME_MAX, which prevents that remote CPUs queue
* timers while __hrtimer_run_queues() is expiring the clock bases. | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:37:53 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | From: Peter Zijlstra <peterz@infradead.org>
The hrtimer interrupt expires timers and at the end of the interrupt it
rearms the clockevent device for the next expiring timer.
That's obviously correct, but in the case that a expired timer set
NEED_RESCHED the return from interrupt ends up in schedule(). If HRTICK is
enabled then schedule() will modify the hrtick timer, which causes another
reprogramming of the hardware.
That can be avoided by deferring the rearming to the return from interrupt
path and if the return results in a immediate schedule() invocation then it
can be deferred until the end of schedule().
To make this correct the affected code parts need to be made aware of this.
Provide empty stubs for the deferred rearming mechanism, so that the
relevant code changes for entry, softirq and scheduler can be split up into
separate changes independent of the actual enablement in the hrtimer code.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
tglx: Split out to make it simpler to review and to make cross subsystem
merge logistics trivial.
---
include/linux/hrtimer.h | 1 +
include/linux/hrtimer_rearm.h | 21 +++++++++++++++++++++
kernel/time/Kconfig | 4 ++++
3 files changed, 26 insertions(+)
--- a/include/linux/hrtimer.h
+++ b/include/linux/hrtimer.h
@@ -13,6 +13,7 @@
#define _LINUX_HRTIMER_H
#include <linux/hrtimer_defs.h>
+#include <linux/hrtimer_rearm.h>
#include <linux/hrtimer_types.h>
#include <linux/init.h>
#include <linux/list.h>
--- /dev/null
+++ b/include/linux/hrtimer_rearm.h
@@ -0,0 +1,21 @@
+// SPDX-License-Identifier: GPL-2.0
+#ifndef _LINUX_HRTIMER_REARM_H
+#define _LINUX_HRTIMER_REARM_H
+
+#ifdef CONFIG_HRTIMER_REARM_DEFERRED
+static __always_inline void __hrtimer_rearm_deferred(void) { }
+static __always_inline void hrtimer_rearm_deferred(void) { }
+static __always_inline void hrtimer_rearm_deferred_tif(unsigned long tif_work) { }
+static __always_inline bool
+hrtimer_rearm_deferred_user_irq(unsigned long *tif_work, const unsigned long tif_mask) { return false; }
+static __always_inline bool hrtimer_test_and_clear_rearm_deferred(void) { return false; }
+#else /* CONFIG_HRTIMER_REARM_DEFERRED */
+static __always_inline void __hrtimer_rearm_deferred(void) { }
+static __always_inline void hrtimer_rearm_deferred(void) { }
+static __always_inline void hrtimer_rearm_deferred_tif(unsigned long tif_work) { }
+static __always_inline bool
+hrtimer_rearm_deferred_user_irq(unsigned long *tif_work, const unsigned long tif_mask) { return false; }
+static __always_inline bool hrtimer_test_and_clear_rearm_deferred(void) { return false; }
+#endif /* !CONFIG_HRTIMER_REARM_DEFERRED */
+
+#endif
--- a/kernel/time/Kconfig
+++ b/kernel/time/Kconfig
@@ -58,6 +58,10 @@ config GENERIC_CLOCKEVENTS_COUPLED_INLIN
config GENERIC_CMOS_UPDATE
bool
+# Deferred rearming of the hrtimer interrupt
+config HRTIMER_REARM_DEFERRED
+ def_bool n
+
# Select to handle posix CPU timers from task_work
# and not from the timer interrupt context
config HAVE_POSIX_CPU_TIMERS_TASK_WORK | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:37:58 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | From: Peter Zijlstra <peterz@infradead.org>
The hrtimer interrupt expires timers and at the end of the interrupt it
rearms the clockevent device for the next expiring timer.
That's obviously correct, but in the case that a expired timer sets
NEED_RESCHED the return from interrupt ends up in schedule(). If HRTICK is
enabled then schedule() will modify the hrtick timer, which causes another
reprogramming of the hardware.
That can be avoided by deferring the rearming to the return from interrupt
path and if the return results in a immediate schedule() invocation then it
can be deferred until the end of schedule(), which avoids multiple rearms
and re-evaluation of the timer wheel.
As this is only relevant for interrupt to user return split the work masks
up and hand them in as arguments from the relevant exit to user functions,
which allows the compiler to optimize the deferred handling out for the
syscall exit to user case.
Add the rearm checks to the approritate places in the exit to user loop and
the interrupt return to kernel path, so that the rearming is always
guaranteed.
In the return to user space path this is handled in the same way as
TIF_RSEQ to avoid extra instructions in the fast path, which are truly
hurtful for device interrupt heavy work loads as the extra instructions and
conditionals while benign at first sight accumulate quickly into measurable
regressions. The return from syscall path is completely unaffected due to
the above mentioned split so syscall heavy workloads wont have any extra
burden.
For now this is just placing empty stubs at the right places which are all
optimized out by the compiler until the actual functionality is in place.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
tglx: Split out to make it simpler to review and to make cross subsystem
merge logistics trivial.
---
include/linux/irq-entry-common.h | 25 +++++++++++++++++++------
include/linux/rseq_entry.h | 16 +++++++++++++---
kernel/entry/common.c | 4 +++-
3 files changed, 35 insertions(+), 10 deletions(-)
--- a/include/linux/irq-entry-common.h
+++ b/include/linux/irq-entry-common.h
@@ -3,6 +3,7 @@
#define __LINUX_IRQENTRYCOMMON_H
#include <linux/context_tracking.h>
+#include <linux/hrtimer_rearm.h>
#include <linux/kmsan.h>
#include <linux/rseq_entry.h>
#include <linux/static_call_types.h>
@@ -33,6 +34,14 @@
_TIF_PATCH_PENDING | _TIF_NOTIFY_SIGNAL | _TIF_RSEQ | \
ARCH_EXIT_TO_USER_MODE_WORK)
+#ifdef CONFIG_HRTIMER_REARM_DEFERRED
+# define EXIT_TO_USER_MODE_WORK_SYSCALL (EXIT_TO_USER_MODE_WORK)
+# define EXIT_TO_USER_MODE_WORK_IRQ (EXIT_TO_USER_MODE_WORK | _TIF_HRTIMER_REARM)
+#else
+# define EXIT_TO_USER_MODE_WORK_SYSCALL (EXIT_TO_USER_MODE_WORK)
+# define EXIT_TO_USER_MODE_WORK_IRQ (EXIT_TO_USER_MODE_WORK)
+#endif
+
/**
* arch_enter_from_user_mode - Architecture specific sanity check for user mode regs
* @regs: Pointer to currents pt_regs
@@ -203,6 +212,7 @@ unsigned long exit_to_user_mode_loop(str
/**
* __exit_to_user_mode_prepare - call exit_to_user_mode_loop() if required
* @regs: Pointer to pt_regs on entry stack
+ * @work_mask: Which TIF bits need to be evaluated
*
* 1) check that interrupts are disabled
* 2) call tick_nohz_user_enter_prepare()
@@ -212,7 +222,8 @@ unsigned long exit_to_user_mode_loop(str
*
* Don't invoke directly, use the syscall/irqentry_ prefixed variants below
*/
-static __always_inline void __exit_to_user_mode_prepare(struct pt_regs *regs)
+static __always_inline void __exit_to_user_mode_prepare(struct pt_regs *regs,
+ const unsigned long work_mask)
{
unsigned long ti_work;
@@ -222,8 +233,10 @@ static __always_inline void __exit_to_us
tick_nohz_user_enter_prepare();
ti_work = read_thread_flags();
- if (unlikely(ti_work & EXIT_TO_USER_MODE_WORK))
- ti_work = exit_to_user_mode_loop(regs, ti_work);
+ if (unlikely(ti_work & work_mask)) {
+ if (!hrtimer_rearm_deferred_user_irq(&ti_work, work_mask))
+ ti_work = exit_to_user_mode_loop(regs, ti_work);
+ }
arch_exit_to_user_mode_prepare(regs, ti_work);
}
@@ -239,7 +252,7 @@ static __always_inline void __exit_to_us
/* Temporary workaround to keep ARM64 alive */
static __always_inline void exit_to_user_mode_prepare_legacy(struct pt_regs *regs)
{
- __exit_to_user_mode_prepare(regs);
+ __exit_to_user_mode_prepare(regs, EXIT_TO_USER_MODE_WORK);
rseq_exit_to_user_mode_legacy();
__exit_to_user_mode_validate();
}
@@ -253,7 +266,7 @@ static __always_inline void exit_to_user
*/
static __always_inline void syscall_exit_to_user_mode_prepare(struct pt_regs *regs)
{
- __exit_to_user_mode_prepare(regs);
+ __exit_to_user_mode_prepare(regs, EXIT_TO_USER_MODE_WORK_SYSCALL);
rseq_syscall_exit_to_user_mode();
__exit_to_user_mode_validate();
}
@@ -267,7 +280,7 @@ static __always_inline void syscall_exit
*/
static __always_inline void irqentry_exit_to_user_mode_prepare(struct pt_regs *regs)
{
- __exit_to_user_mode_prepare(regs);
+ __exit_to_user_mode_prepare(regs, EXIT_TO_USER_MODE_WORK_IRQ);
rseq_irqentry_exit_to_user_mode();
__exit_to_user_mode_validate();
}
--- a/include/linux/rseq_entry.h
+++ b/include/linux/rseq_entry.h
@@ -40,6 +40,7 @@ DECLARE_PER_CPU(struct rseq_stats, rseq_
#endif /* !CONFIG_RSEQ_STATS */
#ifdef CONFIG_RSEQ
+#include <linux/hrtimer_rearm.h>
#include <linux/jump_label.h>
#include <linux/rseq.h>
#include <linux/sched/signal.h>
@@ -110,7 +111,7 @@ static __always_inline void rseq_slice_c
t->rseq.slice.state.granted = false;
}
-static __always_inline bool rseq_grant_slice_extension(bool work_pending)
+static __always_inline bool __rseq_grant_slice_extension(bool work_pending)
{
struct task_struct *curr = current;
struct rseq_slice_ctrl usr_ctrl;
@@ -215,11 +216,20 @@ static __always_inline bool rseq_grant_s
return false;
}
+static __always_inline bool rseq_grant_slice_extension(unsigned long ti_work, unsigned long mask)
+{
+ if (unlikely(__rseq_grant_slice_extension(ti_work & mask))) {
+ hrtimer_rearm_deferred_tif(ti_work);
+ return true;
+ }
+ return false;
+}
+
#else /* CONFIG_RSEQ_SLICE_EXTENSION */
static inline bool rseq_slice_extension_enabled(void) { return false; }
static inline bool rseq_arm_slice_extension_timer(void) { return false; }
static inline void rseq_slice_clear_grant(struct task_struct *t) { }
-static inline bool rseq_grant_slice_extension(bool work_pending) { return false; }
+static inline bool rseq_grant_slice_extension(unsigned long ti_work, unsigned long mask) { return false; }
#endif /* !CONFIG_RSEQ_SLICE_EXTENSION */
bool rseq_debug_update_user_cs(struct task_struct *t, struct pt_regs *regs, unsigned long csaddr);
@@ -778,7 +788,7 @@ static inline void rseq_syscall_exit_to_
static inline void rseq_irqentry_exit_to_user_mode(void) { }
static inline void rseq_exit_to_user_mode_legacy(void) { }
static inline void rseq_debug_syscall_return(struct pt_regs *regs) { }
-static inline bool rseq_grant_slice_extension(bool work_pending) { return false; }
+static inline bool rseq_grant_slice_extension(unsigned long ti_work, unsigned long mask) { return false; }
#endif /* !CONFIG_RSEQ */
#endif /* _LINUX_RSEQ_ENTRY_H */
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -50,7 +50,7 @@ static __always_inline unsigned long __e
local_irq_enable_exit_to_user(ti_work);
if (ti_work & (_TIF_NEED_RESCHED | _TIF_NEED_RESCHED_LAZY)) {
- if (!rseq_grant_slice_extension(ti_work & TIF_SLICE_EXT_DENY))
+ if (!rseq_grant_slice_extension(ti_work, TIF_SLICE_EXT_DENY))
schedule();
}
@@ -225,6 +225,7 @@ noinstr void irqentry_exit(struct pt_reg
*/
if (state.exit_rcu) {
instrumentation_begin();
+ hrtimer_rearm_deferred();
/* Tell the tracer that IRET will enable interrupts */
trace_hardirqs_on_prepare();
lockdep_hardirqs_on_prepare();
@@ -238,6 +239,7 @@ noinstr void irqentry_exit(struct pt_reg
if (IS_ENABLED(CONFIG_PREEMPTION))
irqentry_exit_cond_resched();
+ hrtimer_rearm_deferred();
/* Covers both tracing and lockdep */
trace_hardirqs_on();
instrumentation_end(); | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:38:03 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | From: Peter Zijlstra <peterz@infradead.org>
The hrtimer interrupt expires timers and at the end of the interrupt it
rearms the clockevent device for the next expiring timer.
That's obviously correct, but in the case that a expired timer sets
NEED_RESCHED the return from interrupt ends up in schedule(). If HRTICK is
enabled then schedule() will modify the hrtick timer, which causes another
reprogramming of the hardware.
That can be avoided by deferring the rearming to the return from interrupt
path and if the return results in a immediate schedule() invocation then it
can be deferred until the end of schedule(), which avoids multiple rearms
and re-evaluation of the timer wheel.
In case that the return from interrupt ends up handling softirqs before
reaching the rearm conditions in the return to user entry code functions, a
deferred rearm has to be handled before softirq handling enables interrupts
as soft interrupt handling can be long and would therefore introduce hard
to diagnose latencies to the timer interrupt.
Place the for now empty stub call right before invoking the softirq
handling routine.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
tglx: Split out to make it simpler to review and to make cross subsystem
merge logistics trivial.
---
kernel/softirq.c | 15 ++++++++++++++-
kernel/softirq.c | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -663,6 +663,13 @@ void irq_enter_rcu(void)
{
__irq_enter_raw();
+ /*
+ * If this is a nested interrupt that hits the exit_to_user_mode_loop
+ * where it has enabled interrupts but before it has hit schedule() we
+ * could have hrtimers in an undefined state. Fix it up here.
+ */
+ hrtimer_rearm_deferred();
+
if (tick_nohz_full_cpu(smp_processor_id()) ||
(is_idle_task(current) && (irq_count() == HARDIRQ_OFFSET)))
tick_irq_enter();
@@ -719,8 +726,14 @@ static inline void __irq_exit_rcu(void)
#endif
account_hardirq_exit(current);
preempt_count_sub(HARDIRQ_OFFSET);
- if (!in_interrupt() && local_softirq_pending())
+ if (!in_interrupt() && local_softirq_pending()) {
+ /*
+ * If we left hrtimers unarmed, make sure to arm them now,
+ * before enabling interrupts to run SoftIRQ.
+ */
+ hrtimer_rearm_deferred();
invoke_softirq();
+ }
if (IS_ENABLED(CONFIG_IRQ_FORCED_THREADING) && force_irqthreads() &&
local_timers_pending_force_th() && !(in_nmi() | in_hardirq())) | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:38:07 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | From: Peter Zijlstra <peterz@infradead.org>
The hrtimer interrupt expires timers and at the end of the interrupt it
rearms the clockevent device for the next expiring timer.
That's obviously correct, but in the case that a expired timer sets
NEED_RESCHED the return from interrupt ends up in schedule(). If HRTICK is
enabled then schedule() will modify the hrtick timer, which causes another
reprogramming of the hardware.
That can be avoided by deferring the rearming to the return from interrupt
path and if the return results in a immediate schedule() invocation then it
can be deferred until the end of schedule(), which avoids multiple rearms
and re-evaluation of the timer wheel.
Add the rearm checks to the existing sched_hrtick_enter/exit() functions,
which already handle the batched rearm of the hrtick timer.
For now this is just placing empty stubs at the right places which are all
optimized out by the compiler until the guard condition becomes true.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
tglx: Split out to make it simpler to review and to make cross subsystem
merge logistics trivial.
---
kernel/sched/core.c | 6 ++++++
1 file changed, 6 insertions(+)
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -876,6 +876,7 @@ enum {
HRTICK_SCHED_NONE = 0,
HRTICK_SCHED_DEFER = BIT(1),
HRTICK_SCHED_START = BIT(2),
+ HRTICK_SCHED_REARM_HRTIMER = BIT(3)
};
static void hrtick_clear(struct rq *rq)
@@ -974,6 +975,8 @@ void hrtick_start(struct rq *rq, u64 del
static inline void hrtick_schedule_enter(struct rq *rq)
{
rq->hrtick_sched = HRTICK_SCHED_DEFER;
+ if (hrtimer_test_and_clear_rearm_deferred())
+ rq->hrtick_sched |= HRTICK_SCHED_REARM_HRTIMER;
}
static inline void hrtick_schedule_exit(struct rq *rq)
@@ -991,6 +994,9 @@ static inline void hrtick_schedule_exit(
hrtimer_cancel(&rq->hrtick_timer);
}
+ if (rq->hrtick_sched & HRTICK_SCHED_REARM_HRTIMER)
+ __hrtimer_rearm_deferred();
+
rq->hrtick_sched = HRTICK_SCHED_NONE;
} | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:38:12 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | From: Peter Zijlstra <peterz@infradead.org>
Currently hrtimer_interrupt() runs expired timers, which can re-arm
themselves, after which it computes the next expiration time and
re-programs the hardware.
However, things like HRTICK, a highres timer driving preemption, cannot
re-arm itself at the point of running, since the next task has not been
determined yet. The schedule() in the interrupt return path will switch to
the next task, which then causes a new hrtimer to be programmed.
This then results in reprogramming the hardware at least twice, once after
running the timers, and once upon selecting the new task.
Notably, *both* events happen in the interrupt.
By pushing the hrtimer reprogram all the way into the interrupt return
path, it runs after schedule() picks the new task and the double reprogram
can be avoided.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
include/asm-generic/thread_info_tif.h | 5 +-
include/linux/hrtimer_rearm.h | 72 +++++++++++++++++++++++++++++++---
kernel/time/Kconfig | 4 +
kernel/time/hrtimer.c | 38 +++++++++++++++--
4 files changed, 107 insertions(+), 12 deletions(-)
--- a/include/asm-generic/thread_info_tif.h
+++ b/include/asm-generic/thread_info_tif.h
@@ -41,11 +41,14 @@
#define _TIF_PATCH_PENDING BIT(TIF_PATCH_PENDING)
#ifdef HAVE_TIF_RESTORE_SIGMASK
-# define TIF_RESTORE_SIGMASK 10 // Restore signal mask in do_signal() */
+# define TIF_RESTORE_SIGMASK 10 // Restore signal mask in do_signal()
# define _TIF_RESTORE_SIGMASK BIT(TIF_RESTORE_SIGMASK)
#endif
#define TIF_RSEQ 11 // Run RSEQ fast path
#define _TIF_RSEQ BIT(TIF_RSEQ)
+#define TIF_HRTIMER_REARM 12 // re-arm the timer
+#define _TIF_HRTIMER_REARM BIT(TIF_HRTIMER_REARM)
+
#endif /* _ASM_GENERIC_THREAD_INFO_TIF_H_ */
--- a/include/linux/hrtimer_rearm.h
+++ b/include/linux/hrtimer_rearm.h
@@ -3,12 +3,74 @@
#define _LINUX_HRTIMER_REARM_H
#ifdef CONFIG_HRTIMER_REARM_DEFERRED
-static __always_inline void __hrtimer_rearm_deferred(void) { }
-static __always_inline void hrtimer_rearm_deferred(void) { }
-static __always_inline void hrtimer_rearm_deferred_tif(unsigned long tif_work) { }
+#include <linux/thread_info.h>
+
+void __hrtimer_rearm_deferred(void);
+
+/*
+ * This is purely CPU local, so check the TIF bit first to avoid the overhead of
+ * the atomic test_and_clear_bit() operation for the common case where the bit
+ * is not set.
+ */
+static __always_inline bool hrtimer_test_and_clear_rearm_deferred_tif(unsigned long tif_work)
+{
+ lockdep_assert_irqs_disabled();
+
+ if (unlikely(tif_work & _TIF_HRTIMER_REARM)) {
+ clear_thread_flag(TIF_HRTIMER_REARM);
+ return true;
+ }
+ return false;
+}
+
+#define TIF_REARM_MASK (_TIF_NEED_RESCHED | _TIF_NEED_RESCHED_LAZY | _TIF_HRTIMER_REARM)
+
+/* Invoked from the exit to user before invoking exit_to_user_mode_loop() */
static __always_inline bool
-hrtimer_rearm_deferred_user_irq(unsigned long *tif_work, const unsigned long tif_mask) { return false; }
-static __always_inline bool hrtimer_test_and_clear_rearm_deferred(void) { return false; }
+hrtimer_rearm_deferred_user_irq(unsigned long *tif_work, const unsigned long tif_mask)
+{
+ /* Help the compiler to optimize the function out for syscall returns */
+ if (!(tif_mask & _TIF_HRTIMER_REARM))
+ return false;
+ /*
+ * Rearm the timer if none of the resched flags is set before going into
+ * the loop which re-enables interrupts.
+ */
+ if (unlikely((*tif_work & TIF_REARM_MASK) == _TIF_HRTIMER_REARM)) {
+ clear_thread_flag(TIF_HRTIMER_REARM);
+ __hrtimer_rearm_deferred();
+ /* Don't go into the loop if HRTIMER_REARM was the only flag */
+ *tif_work &= ~TIF_HRTIMER_REARM;
+ return !*tif_work;
+ }
+ return false;
+}
+
+/* Invoked from the time slice extension decision function */
+static __always_inline void hrtimer_rearm_deferred_tif(unsigned long tif_work)
+{
+ if (hrtimer_test_and_clear_rearm_deferred_tif(tif_work))
+ __hrtimer_rearm_deferred();
+}
+
+/*
+ * This is to be called on all irqentry_exit() paths that will enable
+ * interrupts.
+ */
+static __always_inline void hrtimer_rearm_deferred(void)
+{
+ hrtimer_rearm_deferred_tif(read_thread_flags());
+}
+
+/*
+ * Invoked from the scheduler on entry to __schedule() so it can defer
+ * rearming after the load balancing callbacks which might change hrtick.
+ */
+static __always_inline bool hrtimer_test_and_clear_rearm_deferred(void)
+{
+ return hrtimer_test_and_clear_rearm_deferred_tif(read_thread_flags());
+}
+
#else /* CONFIG_HRTIMER_REARM_DEFERRED */
static __always_inline void __hrtimer_rearm_deferred(void) { }
static __always_inline void hrtimer_rearm_deferred(void) { }
--- a/kernel/time/Kconfig
+++ b/kernel/time/Kconfig
@@ -60,7 +60,9 @@ config GENERIC_CMOS_UPDATE
# Deferred rearming of the hrtimer interrupt
config HRTIMER_REARM_DEFERRED
- def_bool n
+ def_bool y
+ depends on GENERIC_ENTRY && HAVE_GENERIC_TIF_BITS
+ depends on HIGH_RES_TIMERS && SCHED_HRTICK
# Select to handle posix CPU timers from task_work
# and not from the timer interrupt context
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -1939,10 +1939,9 @@ static __latent_entropy void hrtimer_run
* Very similar to hrtimer_force_reprogram(), except it deals with
* deferred_rearm and hang_detected.
*/
-static void hrtimer_rearm(struct hrtimer_cpu_base *cpu_base, ktime_t now)
+static void hrtimer_rearm(struct hrtimer_cpu_base *cpu_base, ktime_t now,
+ ktime_t expires_next, bool deferred)
{
- ktime_t expires_next = hrtimer_update_next_event(cpu_base);
-
cpu_base->expires_next = expires_next;
cpu_base->deferred_rearm = false;
@@ -1954,9 +1953,37 @@ static void hrtimer_rearm(struct hrtimer
expires_next = ktime_add_ns(now, 100 * NSEC_PER_MSEC);
cpu_base->hang_detected = false;
}
- hrtimer_rearm_event(expires_next, false);
+ hrtimer_rearm_event(expires_next, deferred);
}
+#ifdef CONFIG_HRTIMER_REARM_DEFERRED
+void __hrtimer_rearm_deferred(void)
+{
+ struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
+ ktime_t now, expires_next;
+
+ if (!cpu_base->deferred_rearm)
+ return;
+
+ guard(raw_spinlock)(&cpu_base->lock);
+ now = hrtimer_update_base(cpu_base);
+ expires_next = hrtimer_update_next_event(cpu_base);
+ hrtimer_rearm(cpu_base, now, expires_next, true);
+}
+
+static __always_inline void
+hrtimer_interrupt_rearm(struct hrtimer_cpu_base *cpu_base, ktime_t now, ktime_t expires_next)
+{
+ set_thread_flag(TIF_HRTIMER_REARM);
+}
+#else /* CONFIG_HRTIMER_REARM_DEFERRED */
+static __always_inline void
+hrtimer_interrupt_rearm(struct hrtimer_cpu_base *cpu_base, ktime_t now, ktime_t expires_next)
+{
+ hrtimer_rearm(cpu_base, now, expires_next, false);
+}
+#endif /* !CONFIG_HRTIMER_REARM_DEFERRED */
+
/*
* High resolution timer interrupt
* Called with interrupts disabled
@@ -2014,9 +2041,10 @@ void hrtimer_interrupt(struct clock_even
cpu_base->hang_detected = true;
}
- hrtimer_rearm(cpu_base, now);
+ hrtimer_interrupt_rearm(cpu_base, now, expires_next);
raw_spin_unlock_irqrestore(&cpu_base->lock, flags);
}
+
#endif /* !CONFIG_HIGH_RES_TIMERS */
/* | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:38:18 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | Most times there is no change between hrtimer_interrupt() deferring the rearm
and the invocation of hrtimer_rearm_deferred(). In those cases it's a pointless
exercise to re-evaluate the next expiring timer.
Cache the required data and use it if nothing changed.
Signed-off-by: Thomas Gleixner <tglx@kernel.org
---
include/linux/hrtimer_defs.h | 53 +++++++++++++++++++++----------------------
kernel/time/hrtimer.c | 45 +++++++++++++++++++++++++-----------
2 files changed, 58 insertions(+), 40 deletions(-)
--- a/include/linux/hrtimer_defs.h
+++ b/include/linux/hrtimer_defs.h
@@ -47,32 +47,31 @@ enum hrtimer_base_type {
/**
* struct hrtimer_cpu_base - the per cpu clock bases
- * @lock: lock protecting the base and associated clock bases
- * and timers
- * @cpu: cpu number
- * @active_bases: Bitfield to mark bases with active timers
- * @clock_was_set_seq: Sequence counter of clock was set events
- * @hres_active: State of high resolution mode
- * @deferred_rearm: A deferred rearm is pending
- * @hang_detected: The last hrtimer interrupt detected a hang
- * @softirq_activated: displays, if the softirq is raised - update of softirq
- * related settings is not required then.
- * @nr_events: Total number of hrtimer interrupt events
- * @nr_retries: Total number of hrtimer interrupt retries
- * @nr_hangs: Total number of hrtimer interrupt hangs
- * @max_hang_time: Maximum time spent in hrtimer_interrupt
- * @softirq_expiry_lock: Lock which is taken while softirq based hrtimer are
- * expired
- * @online: CPU is online from an hrtimers point of view
- * @timer_waiters: A hrtimer_cancel() invocation waits for the timer
- * callback to finish.
- * @expires_next: absolute time of the next event, is required for remote
- * hrtimer enqueue; it is the total first expiry time (hard
- * and soft hrtimer are taken into account)
- * @next_timer: Pointer to the first expiring timer
- * @softirq_expires_next: Time to check, if soft queues needs also to be expired
- * @softirq_next_timer: Pointer to the first expiring softirq based timer
- * @clock_base: array of clock bases for this cpu
+ * @lock: lock protecting the base and associated clock bases and timers
+ * @cpu: cpu number
+ * @active_bases: Bitfield to mark bases with active timers
+ * @clock_was_set_seq: Sequence counter of clock was set events
+ * @hres_active: State of high resolution mode
+ * @deferred_rearm: A deferred rearm is pending
+ * @deferred_needs_update: The deferred rearm must re-evaluate the first timer
+ * @hang_detected: The last hrtimer interrupt detected a hang
+ * @softirq_activated: displays, if the softirq is raised - update of softirq
+ * related settings is not required then.
+ * @nr_events: Total number of hrtimer interrupt events
+ * @nr_retries: Total number of hrtimer interrupt retries
+ * @nr_hangs: Total number of hrtimer interrupt hangs
+ * @max_hang_time: Maximum time spent in hrtimer_interrupt
+ * @softirq_expiry_lock: Lock which is taken while softirq based hrtimer are expired
+ * @online: CPU is online from an hrtimers point of view
+ * @timer_waiters: A hrtimer_cancel() waiters for the timer callback to finish.
+ * @expires_next: Absolute time of the next event, is required for remote
+ * hrtimer enqueue; it is the total first expiry time (hard
+ * and soft hrtimer are taken into account)
+ * @next_timer: Pointer to the first expiring timer
+ * @softirq_expires_next: Time to check, if soft queues needs also to be expired
+ * @softirq_next_timer: Pointer to the first expiring softirq based timer
+ * @deferred_expires_next: Cached expires next value for deferred rearm
+ * @clock_base: Array of clock bases for this cpu
*
* Note: next_timer is just an optimization for __remove_hrtimer().
* Do not dereference the pointer because it is not reliable on
@@ -85,6 +84,7 @@ struct hrtimer_cpu_base {
unsigned int clock_was_set_seq;
bool hres_active;
bool deferred_rearm;
+ bool deferred_needs_update;
bool hang_detected;
bool softirq_activated;
bool online;
@@ -102,6 +102,7 @@ struct hrtimer_cpu_base {
struct hrtimer *next_timer;
ktime_t softirq_expires_next;
struct hrtimer *softirq_next_timer;
+ ktime_t deferred_expires_next;
struct hrtimer_clock_base clock_base[HRTIMER_MAX_CLOCK_BASES];
call_single_data_t csd;
} ____cacheline_aligned;
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -919,8 +919,10 @@ static bool update_needs_ipi(struct hrti
return false;
/* If a deferred rearm is pending the remote CPU will take care of it */
- if (cpu_base->deferred_rearm)
+ if (cpu_base->deferred_rearm) {
+ cpu_base->deferred_needs_update = true;
return false;
+ }
/*
* Walk the affected clock bases and check whether the first expiring
@@ -1141,7 +1143,12 @@ static void __remove_hrtimer(struct hrti
* a local timer is removed to be immediately restarted. That's handled
* at the call site.
*/
- if (reprogram && timer == cpu_base->next_timer && !timer->is_lazy)
+ if (!reprogram || timer != cpu_base->next_timer || timer->is_lazy)
+ return;
+
+ if (cpu_base->deferred_rearm)
+ cpu_base->deferred_needs_update = true;
+ else
hrtimer_force_reprogram(cpu_base, /* skip_equal */ true);
}
@@ -1328,8 +1335,10 @@ static bool __hrtimer_start_range_ns(str
}
/* If a deferred rearm is pending skip reprogramming the device */
- if (cpu_base->deferred_rearm)
+ if (cpu_base->deferred_rearm) {
+ cpu_base->deferred_needs_update = true;
return false;
+ }
if (!was_first || cpu_base != this_cpu_base) {
/*
@@ -1939,8 +1948,7 @@ static __latent_entropy void hrtimer_run
* Very similar to hrtimer_force_reprogram(), except it deals with
* deferred_rearm and hang_detected.
*/
-static void hrtimer_rearm(struct hrtimer_cpu_base *cpu_base, ktime_t now,
- ktime_t expires_next, bool deferred)
+static void hrtimer_rearm(struct hrtimer_cpu_base *cpu_base, ktime_t expires_next, bool deferred)
{
cpu_base->expires_next = expires_next;
cpu_base->deferred_rearm = false;
@@ -1950,7 +1958,7 @@ static void hrtimer_rearm(struct hrtimer
* Give the system a chance to do something else than looping
* on hrtimer interrupts.
*/
- expires_next = ktime_add_ns(now, 100 * NSEC_PER_MSEC);
+ expires_next = ktime_add_ns(ktime_get(), 100 * NSEC_PER_MSEC);
cpu_base->hang_detected = false;
}
hrtimer_rearm_event(expires_next, deferred);
@@ -1960,27 +1968,36 @@ static void hrtimer_rearm(struct hrtimer
void __hrtimer_rearm_deferred(void)
{
struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
- ktime_t now, expires_next;
+ ktime_t expires_next;
if (!cpu_base->deferred_rearm)
return;
guard(raw_spinlock)(&cpu_base->lock);
- now = hrtimer_update_base(cpu_base);
- expires_next = hrtimer_update_next_event(cpu_base);
- hrtimer_rearm(cpu_base, now, expires_next, true);
+ if (cpu_base->deferred_needs_update) {
+ hrtimer_update_base(cpu_base);
+ expires_next = hrtimer_update_next_event(cpu_base);
+ } else {
+ /* No timer added/removed. Use the cached value */
+ expires_next = cpu_base->deferred_expires_next;
+ }
+ hrtimer_rearm(cpu_base, expires_next, true);
}
static __always_inline void
-hrtimer_interrupt_rearm(struct hrtimer_cpu_base *cpu_base, ktime_t now, ktime_t expires_next)
+hrtimer_interrupt_rearm(struct hrtimer_cpu_base *cpu_base, ktime_t expires_next)
{
+ /* hrtimer_interrupt() just re-evaluated the first expiring timer */
+ cpu_base->deferred_needs_update = false;
+ /* Cache the expiry time */
+ cpu_base->deferred_expires_next = expires_next;
set_thread_flag(TIF_HRTIMER_REARM);
}
#else /* CONFIG_HRTIMER_REARM_DEFERRED */
static __always_inline void
-hrtimer_interrupt_rearm(struct hrtimer_cpu_base *cpu_base, ktime_t now, ktime_t expires_next)
+hrtimer_interrupt_rearm(struct hrtimer_cpu_base *cpu_base, ktime_t expires_next)
{
- hrtimer_rearm(cpu_base, now, expires_next, false);
+ hrtimer_rearm(cpu_base, expires_next, false);
}
#endif /* !CONFIG_HRTIMER_REARM_DEFERRED */
@@ -2041,7 +2058,7 @@ void hrtimer_interrupt(struct clock_even
cpu_base->hang_detected = true;
}
- hrtimer_interrupt_rearm(cpu_base, now, expires_next);
+ hrtimer_interrupt_rearm(cpu_base, expires_next);
raw_spin_unlock_irqrestore(&cpu_base->lock, flags);
} | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:38:23 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | Evaluating the next expiry time of all clock bases is cache line expensive
as the expiry time of the first expiring timer is not cached in the base
and requires to access the timer itself, which is definitely in a different
cache line.
It's way more efficient to keep track of the expiry time on enqueue and
dequeue operations as the relevant data is already in the cache at that
point.
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
include/linux/hrtimer_defs.h | 2 ++
kernel/time/hrtimer.c | 37 ++++++++++++++++++++++++++++++++++---
2 files changed, 36 insertions(+), 3 deletions(-)
--- a/include/linux/hrtimer_defs.h
+++ b/include/linux/hrtimer_defs.h
@@ -19,6 +19,7 @@
* timer to a base on another cpu.
* @clockid: clock id for per_cpu support
* @seq: seqcount around __run_hrtimer
+ * @expires_next: Absolute time of the next event in this clock base
* @running: pointer to the currently running hrtimer
* @active: red black tree root node for the active timers
* @offset: offset of this clock to the monotonic base
@@ -28,6 +29,7 @@ struct hrtimer_clock_base {
unsigned int index;
clockid_t clockid;
seqcount_raw_spinlock_t seq;
+ ktime_t expires_next;
struct hrtimer *running;
struct timerqueue_head active;
ktime_t offset;
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -1107,7 +1107,18 @@ static bool enqueue_hrtimer(struct hrtim
/* Pairs with the lockless read in hrtimer_is_queued() */
WRITE_ONCE(timer->is_queued, HRTIMER_STATE_ENQUEUED);
- return timerqueue_add(&base->active, &timer->node);
+ if (!timerqueue_add(&base->active, &timer->node))
+ return false;
+
+ base->expires_next = hrtimer_get_expires(timer);
+ return true;
+}
+
+static inline void base_update_next_timer(struct hrtimer_clock_base *base)
+{
+ struct timerqueue_node *next = timerqueue_getnext(&base->active);
+
+ base->expires_next = next ? next->expires : KTIME_MAX;
}
/*
@@ -1122,6 +1133,7 @@ static void __remove_hrtimer(struct hrti
bool newstate, bool reprogram)
{
struct hrtimer_cpu_base *cpu_base = base->cpu_base;
+ bool was_first;
lockdep_assert_held(&cpu_base->lock);
@@ -1131,9 +1143,17 @@ static void __remove_hrtimer(struct hrti
/* Pairs with the lockless read in hrtimer_is_queued() */
WRITE_ONCE(timer->is_queued, newstate);
+ was_first = &timer->node == timerqueue_getnext(&base->active);
+
if (!timerqueue_del(&base->active, &timer->node))
cpu_base->active_bases &= ~(1 << base->index);
+ /* Nothing to update if this was not the first timer in the base */
+ if (!was_first)
+ return;
+
+ base_update_next_timer(base);
+
/*
* If reprogram is false don't update cpu_base->next_timer and do not
* touch the clock event device.
@@ -1182,9 +1202,12 @@ static inline bool
remove_and_enqueue_same_base(struct hrtimer *timer, struct hrtimer_clock_base *base,
const enum hrtimer_mode mode, ktime_t expires, u64 delta_ns)
{
+ bool was_first = false;
+
/* Remove it from the timer queue if active */
if (timer->is_queued) {
debug_hrtimer_deactivate(timer);
+ was_first = &timer->node == timerqueue_getnext(&base->active);
timerqueue_del(&base->active, &timer->node);
}
@@ -1197,8 +1220,16 @@ remove_and_enqueue_same_base(struct hrti
/* Pairs with the lockless read in hrtimer_is_queued() */
WRITE_ONCE(timer->is_queued, HRTIMER_STATE_ENQUEUED);
- /* Returns true if this is the first expiring timer */
- return timerqueue_add(&base->active, &timer->node);
+ /* If it's the first expiring timer now or again, update base */
+ if (timerqueue_add(&base->active, &timer->node)) {
+ base->expires_next = expires;
+ return true;
+ }
+
+ if (was_first)
+ base_update_next_timer(base);
+
+ return false;
}
static inline ktime_t hrtimer_update_lowres(struct hrtimer *timer, ktime_t tim, | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:38:28 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | The per clock base cached expiry time allows to do a more efficient
evaluation of the next expiry on a CPU.
Separate the reprogramming evaluation from the NOHZ idle evaluation which
needs to exclude the NOHZ timer to keep the reprogramming path lean and
clean.
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
kernel/time/hrtimer.c | 120 ++++++++++++++++++++++++++++----------------------
1 file changed, 69 insertions(+), 51 deletions(-)
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -546,49 +546,67 @@ static struct hrtimer_clock_base *
#define for_each_active_base(base, cpu_base, active) \
while ((base = __next_base((cpu_base), &(active))))
-static ktime_t __hrtimer_next_event_base(struct hrtimer_cpu_base *cpu_base,
- const struct hrtimer *exclude,
- unsigned int active, ktime_t expires_next)
+#if defined(CONFIG_NO_HZ_COMMON)
+/*
+ * Same as hrtimer_bases_next_event() below, but skips the excluded timer and
+ * does not update cpu_base->next_timer/expires.
+ */
+static ktime_t hrtimer_bases_next_event_without(struct hrtimer_cpu_base *cpu_base,
+ const struct hrtimer *exclude,
+ unsigned int active, ktime_t expires_next)
{
struct hrtimer_clock_base *base;
ktime_t expires;
+ lockdep_assert_held(&cpu_base->lock);
+
for_each_active_base(base, cpu_base, active) {
- struct timerqueue_node *next;
- struct hrtimer *timer;
+ expires = ktime_sub(base->expires_next, base->offset);
+ if (expires >= expires_next)
+ continue;
- next = timerqueue_getnext(&base->active);
- timer = container_of(next, struct hrtimer, node);
- if (timer == exclude) {
- /* Get to the next timer in the queue. */
- next = timerqueue_iterate_next(next);
- if (!next)
- continue;
+ /*
+ * If the excluded timer is the first on this base evaluate the
+ * next timer.
+ */
+ struct timerqueue_node *node = timerqueue_getnext(&base->active);
- timer = container_of(next, struct hrtimer, node);
+ if (unlikely(&exclude->node == node)) {
+ node = timerqueue_iterate_next(node);
+ if (!node)
+ continue;
+ expires = ktime_sub(node->expires, base->offset);
+ if (expires >= expires_next)
+ continue;
}
- expires = ktime_sub(hrtimer_get_expires(timer), base->offset);
- if (expires < expires_next) {
- expires_next = expires;
+ expires_next = expires;
+ }
+ /* If base->offset changed, the result might be negative */
+ return max(expires_next, 0);
+}
+#endif
- /* Skip cpu_base update if a timer is being excluded. */
- if (exclude)
- continue;
+static __always_inline struct hrtimer *clock_base_next_timer(struct hrtimer_clock_base *base)
+{
+ struct timerqueue_node *next = timerqueue_getnext(&base->active);
+
+ return container_of(next, struct hrtimer, node);
+}
- if (timer->is_soft)
- cpu_base->softirq_next_timer = timer;
- else
- cpu_base->next_timer = timer;
+/* Find the base with the earliest expiry */
+static void hrtimer_bases_first(struct hrtimer_cpu_base *cpu_base,unsigned int active,
+ ktime_t *expires_next, struct hrtimer **next_timer)
+{
+ struct hrtimer_clock_base *base;
+ ktime_t expires;
+
+ for_each_active_base(base, cpu_base, active) {
+ expires = ktime_sub(base->expires_next, base->offset);
+ if (expires < *expires_next) {
+ *expires_next = expires;
+ *next_timer = clock_base_next_timer(base);
}
}
- /*
- * clock_was_set() might have changed base->offset of any of
- * the clock bases so the result might be negative. Fix it up
- * to prevent a false positive in clockevents_program_event().
- */
- if (expires_next < 0)
- expires_next = 0;
- return expires_next;
}
/*
@@ -617,19 +635,22 @@ static ktime_t __hrtimer_get_next_event(
ktime_t expires_next = KTIME_MAX;
unsigned int active;
+ lockdep_assert_held(&cpu_base->lock);
+
if (!cpu_base->softirq_activated && (active_mask & HRTIMER_ACTIVE_SOFT)) {
active = cpu_base->active_bases & HRTIMER_ACTIVE_SOFT;
- cpu_base->softirq_next_timer = NULL;
- expires_next = __hrtimer_next_event_base(cpu_base, NULL, active, KTIME_MAX);
- next_timer = cpu_base->softirq_next_timer;
+ if (active)
+ hrtimer_bases_first(cpu_base, active, &expires_next, &next_timer);
+ cpu_base->softirq_next_timer = next_timer;
}
if (active_mask & HRTIMER_ACTIVE_HARD) {
active = cpu_base->active_bases & HRTIMER_ACTIVE_HARD;
+ if (active)
+ hrtimer_bases_first(cpu_base, active, &expires_next, &next_timer);
cpu_base->next_timer = next_timer;
- expires_next = __hrtimer_next_event_base(cpu_base, NULL, active, expires_next);
}
- return expires_next;
+ return max(expires_next, 0);
}
static ktime_t hrtimer_update_next_event(struct hrtimer_cpu_base *cpu_base)
@@ -724,11 +745,7 @@ static void __hrtimer_reprogram(struct h
hrtimer_rearm_event(expires_next, false);
}
-/*
- * Reprogram the event source with checking both queues for the
- * next event
- * Called with interrupts disabled and base->lock held
- */
+/* Reprogram the event source with a evaluation of all clock bases */
static void hrtimer_force_reprogram(struct hrtimer_cpu_base *cpu_base, bool skip_equal)
{
ktime_t expires_next = hrtimer_update_next_event(cpu_base);
@@ -1662,19 +1679,20 @@ u64 hrtimer_next_event_without(const str
{
struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
u64 expires = KTIME_MAX;
+ unsigned int active;
guard(raw_spinlock_irqsave)(&cpu_base->lock);
- if (hrtimer_hres_active(cpu_base)) {
- unsigned int active;
+ if (!hrtimer_hres_active(cpu_base))
+ return expires;
- if (!cpu_base->softirq_activated) {
- active = cpu_base->active_bases & HRTIMER_ACTIVE_SOFT;
- expires = __hrtimer_next_event_base(cpu_base, exclude, active, KTIME_MAX);
- }
- active = cpu_base->active_bases & HRTIMER_ACTIVE_HARD;
- expires = __hrtimer_next_event_base(cpu_base, exclude, active, expires);
- }
- return expires;
+ active = cpu_base->active_bases & HRTIMER_ACTIVE_SOFT;
+ if (active && !cpu_base->softirq_activated)
+ expires = hrtimer_bases_next_event_without(cpu_base, exclude, active, KTIME_MAX);
+
+ active = cpu_base->active_bases & HRTIMER_ACTIVE_HARD;
+ if (!active)
+ return expires;
+ return hrtimer_bases_next_event_without(cpu_base, exclude, active, expires);
}
#endif | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:38:33 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | Replace the open coded container_of() orgy with a trivial
clock_base_next_timer() helper.
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
kernel/time/hrtimer.c | 19 ++++++++++---------
1 file changed, 10 insertions(+), 9 deletions(-)
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -1933,6 +1933,13 @@ static void __run_hrtimer(struct hrtimer
base->running = NULL;
}
+static __always_inline struct hrtimer *clock_base_next_timer_safe(struct hrtimer_clock_base *base)
+{
+ struct timerqueue_node *next = timerqueue_getnext(&base->active);
+
+ return next ? container_of(next, struct hrtimer, node) : NULL;
+}
+
static void __hrtimer_run_queues(struct hrtimer_cpu_base *cpu_base, ktime_t now,
unsigned long flags, unsigned int active_mask)
{
@@ -1940,16 +1947,10 @@ static void __hrtimer_run_queues(struct
struct hrtimer_clock_base *base;
for_each_active_base(base, cpu_base, active) {
- struct timerqueue_node *node;
- ktime_t basenow;
-
- basenow = ktime_add(now, base->offset);
-
- while ((node = timerqueue_getnext(&base->active))) {
- struct hrtimer *timer;
-
- timer = container_of(node, struct hrtimer, node);
+ ktime_t basenow = ktime_add(now, base->offset);
+ struct hrtimer *timer;
+ while ((timer = clock_base_next_timer(base))) {
/*
* The immediate goal for using the softexpires is
* minimizing wakeups, not running timers at the | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:38:37 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | Give the compiler some help to emit way better code.
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
kernel/time/hrtimer.c | 20 ++++----------------
1 file changed, 4 insertions(+), 16 deletions(-)
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -529,22 +529,10 @@ static inline void debug_activate(struct
trace_hrtimer_start(timer, mode, was_armed);
}
-static struct hrtimer_clock_base *
-__next_base(struct hrtimer_cpu_base *cpu_base, unsigned int *active)
-{
- unsigned int idx;
-
- if (!*active)
- return NULL;
-
- idx = __ffs(*active);
- *active &= ~(1U << idx);
-
- return &cpu_base->clock_base[idx];
-}
-
-#define for_each_active_base(base, cpu_base, active) \
- while ((base = __next_base((cpu_base), &(active))))
+#define for_each_active_base(base, cpu_base, active) \
+ for (unsigned int idx = ffs(active); idx--; idx = ffs((active))) \
+ for (bool done = false; !done; active &= ~(1U << idx)) \
+ for (base = &cpu_base->clock_base[idx]; !done; done = true)
#if defined(CONFIG_NO_HZ_COMMON)
/* | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:38:42 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | Some RB tree users require quick access to the next and the previous node,
e.g. to check whether a modification of the node results in a change of the
nodes position in the tree. If the node position does not change, then the
modification can happen in place without going through a full enqueue
requeue cycle. A upcoming use case for this are the timer queues of the
hrtimer subsystem as they can optimize for timers which are frequently
rearmed while enqueued.
This can be obviously achieved with rb_next() and rb_prev(), but those
turned out to be quite expensive for hotpath operations depending on the
tree depth.
Add a linked RB tree variant where add() and erase() maintain the links
between the nodes. Like the cached variant it provides a pointer to the
left most node in the root.
It intentionally does not use a [h]list head as there is no real need for
true list operations as the list is strictly coupled to the tree and
and cannot be manipulated independently.
It sets the nodes previous pointer to NULL for the left most node and the
next pointer to NULL for the right most node. This allows a quick check
especially for the left most node without consulting the list head address,
which creates better code.
Aside of the rb_leftmost cached pointer this could trivially provide a
rb_rightmost pointer as well, but there is no usage for that (yet).
Signed-off-by: Thomas Gleixner <tglx@kernel.org
Cc: Eric Dumazet <edumazet@google.com>
---
include/linux/rbtree.h | 81 ++++++++++++++++++++++++++++++++++++++-----
include/linux/rbtree_types.h | 16 ++++++++
lib/rbtree.c | 17 +++++++++
3 files changed, 105 insertions(+), 9 deletions(-)
--- a/include/linux/rbtree.h
+++ b/include/linux/rbtree.h
@@ -35,10 +35,15 @@
#define RB_CLEAR_NODE(node) \
((node)->__rb_parent_color = (unsigned long)(node))
+#define RB_EMPTY_LINKED_NODE(lnode) RB_EMPTY_NODE(&(lnode)->node)
+#define RB_CLEAR_LINKED_NODE(lnode) ({ \
+ RB_CLEAR_NODE(&(lnode)->node); \
+ (lnode)->prev = (lnode)->next = NULL; \
+})
extern void rb_insert_color(struct rb_node *, struct rb_root *);
extern void rb_erase(struct rb_node *, struct rb_root *);
-
+extern bool rb_erase_linked(struct rb_node_linked *, struct rb_root_linked *);
/* Find logical next and previous nodes in a tree */
extern struct rb_node *rb_next(const struct rb_node *);
@@ -213,15 +218,10 @@ rb_add_cached(struct rb_node *node, stru
return leftmost ? node : NULL;
}
-/**
- * rb_add() - insert @node into @tree
- * @node: node to insert
- * @tree: tree to insert @node into
- * @less: operator defining the (partial) node order
- */
static __always_inline void
-rb_add(struct rb_node *node, struct rb_root *tree,
- bool (*less)(struct rb_node *, const struct rb_node *))
+__rb_add(struct rb_node *node, struct rb_root *tree,
+ bool (*less)(struct rb_node *, const struct rb_node *),
+ void (*linkop)(struct rb_node *, struct rb_node *, struct rb_node **))
{
struct rb_node **link = &tree->rb_node;
struct rb_node *parent = NULL;
@@ -234,10 +234,73 @@ rb_add(struct rb_node *node, struct rb_r
link = &parent->rb_right;
}
+ linkop(node, parent, link);
rb_link_node(node, parent, link);
rb_insert_color(node, tree);
}
+#define __node_2_linked_node(_n) \
+ rb_entry((_n), struct rb_node_linked, node)
+
+static inline void
+rb_link_linked_node(struct rb_node *node, struct rb_node *parent, struct rb_node **link)
+{
+ if (!parent)
+ return;
+
+ struct rb_node_linked *nnew = __node_2_linked_node(node);
+ struct rb_node_linked *npar = __node_2_linked_node(parent);
+
+ if (link == &parent->rb_left) {
+ nnew->prev = npar->prev;
+ nnew->next = npar;
+ npar->prev = nnew;
+ if (nnew->prev)
+ nnew->prev->next = nnew;
+ } else {
+ nnew->next = npar->next;
+ nnew->prev = npar;
+ npar->next = nnew;
+ if (nnew->next)
+ nnew->next->prev = nnew;
+ }
+}
+
+/**
+ * rb_add_linked() - insert @node into the leftmost linked tree @tree
+ * @node: node to insert
+ * @tree: linked tree to insert @node into
+ * @less: operator defining the (partial) node order
+ *
+ * Returns @true when @node is the new leftmost, @false otherwise.
+ */
+static __always_inline bool
+rb_add_linked(struct rb_node_linked *node, struct rb_root_linked *tree,
+ bool (*less)(struct rb_node *, const struct rb_node *))
+{
+ __rb_add(&node->node, &tree->rb_root, less, rb_link_linked_node);
+ if (!node->prev)
+ tree->rb_leftmost = node;
+ return !node->prev;
+}
+
+/* Empty linkop function which is optimized away by the compiler */
+static __always_inline void
+rb_link_noop(struct rb_node *n, struct rb_node *p, struct rb_node **l) { }
+
+/**
+ * rb_add() - insert @node into @tree
+ * @node: node to insert
+ * @tree: tree to insert @node into
+ * @less: operator defining the (partial) node order
+ */
+static __always_inline void
+rb_add(struct rb_node *node, struct rb_root *tree,
+ bool (*less)(struct rb_node *, const struct rb_node *))
+{
+ __rb_add(node, tree, less, rb_link_noop);
+}
+
/**
* rb_find_add_cached() - find equivalent @node in @tree, or add @node
* @node: node to look-for / insert
--- a/include/linux/rbtree_types.h
+++ b/include/linux/rbtree_types.h
@@ -9,6 +9,12 @@ struct rb_node {
} __attribute__((aligned(sizeof(long))));
/* The alignment might seem pointless, but allegedly CRIS needs it */
+struct rb_node_linked {
+ struct rb_node node;
+ struct rb_node_linked *prev;
+ struct rb_node_linked *next;
+};
+
struct rb_root {
struct rb_node *rb_node;
};
@@ -28,7 +34,17 @@ struct rb_root_cached {
struct rb_node *rb_leftmost;
};
+/*
+ * Leftmost tree with links. This would allow a trivial rb_rightmost update,
+ * but that has been omitted due to the lack of users.
+ */
+struct rb_root_linked {
+ struct rb_root rb_root;
+ struct rb_node_linked *rb_leftmost;
+};
+
#define RB_ROOT (struct rb_root) { NULL, }
#define RB_ROOT_CACHED (struct rb_root_cached) { {NULL, }, NULL }
+#define RB_ROOT_LINKED (struct rb_root_linked) { {NULL, }, NULL }
#endif
--- a/lib/rbtree.c
+++ b/lib/rbtree.c
@@ -446,6 +446,23 @@ void rb_erase(struct rb_node *node, stru
}
EXPORT_SYMBOL(rb_erase);
+bool rb_erase_linked(struct rb_node_linked *node, struct rb_root_linked *root)
+{
+ if (node->prev)
+ node->prev->next = node->next;
+ else
+ root->rb_leftmost = node->next;
+
+ if (node->next)
+ node->next->prev = node->prev;
+
+ rb_erase(&node->node, &root->rb_root);
+ RB_CLEAR_LINKED_NODE(node);
+
+ return !!root->rb_leftmost;
+}
+EXPORT_SYMBOL_GPL(rb_erase_linked);
+
/*
* Augmented rbtree manipulation functions.
* | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:38:47 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | The hrtimer subsystem wants to peak ahead to the next and previous timer to
evaluated whether a to be rearmed timer can stay at the same position in
the RB tree with the new expiry time.
The linked RB tree provides the infrastructure for this as it maintains
links to the previous and next nodes for each entry in the tree.
Provide timerqueue wrappers around that.
Signed-off-by: Thomas Gleixner <tglx@kernel.org
---
include/linux/timerqueue.h | 56 +++++++++++++++++++++++++++++++++------
include/linux/timerqueue_types.h | 15 ++++++++--
lib/timerqueue.c | 14 +++++++++
3 files changed, 74 insertions(+), 11 deletions(-)
--- a/include/linux/timerqueue.h
+++ b/include/linux/timerqueue.h
@@ -5,12 +5,11 @@
#include <linux/rbtree.h>
#include <linux/timerqueue_types.h>
-extern bool timerqueue_add(struct timerqueue_head *head,
- struct timerqueue_node *node);
-extern bool timerqueue_del(struct timerqueue_head *head,
- struct timerqueue_node *node);
-extern struct timerqueue_node *timerqueue_iterate_next(
- struct timerqueue_node *node);
+bool timerqueue_add(struct timerqueue_head *head, struct timerqueue_node *node);
+bool timerqueue_del(struct timerqueue_head *head, struct timerqueue_node *node);
+struct timerqueue_node *timerqueue_iterate_next(struct timerqueue_node *node);
+
+bool timerqueue_linked_add(struct timerqueue_linked_head *head, struct timerqueue_linked_node *node);
/**
* timerqueue_getnext - Returns the timer with the earliest expiration time
@@ -19,8 +18,7 @@ extern struct timerqueue_node *timerqueu
*
* Returns a pointer to the timer node that has the earliest expiration time.
*/
-static inline
-struct timerqueue_node *timerqueue_getnext(struct timerqueue_head *head)
+static inline struct timerqueue_node *timerqueue_getnext(struct timerqueue_head *head)
{
struct rb_node *leftmost = rb_first_cached(&head->rb_root);
@@ -41,4 +39,46 @@ static inline void timerqueue_init_head(
{
head->rb_root = RB_ROOT_CACHED;
}
+
+/* Timer queues with linked nodes */
+
+static __always_inline
+struct timerqueue_linked_node *timerqueue_linked_first(struct timerqueue_linked_head *head)
+{
+ return rb_entry_safe(head->rb_root.rb_leftmost, struct timerqueue_linked_node, node);
+}
+
+static __always_inline
+struct timerqueue_linked_node *timerqueue_linked_next(struct timerqueue_linked_node *node)
+{
+ return rb_entry_safe(node->node.next, struct timerqueue_linked_node, node);
+}
+
+static __always_inline
+struct timerqueue_linked_node *timerqueue_linked_prev(struct timerqueue_linked_node *node)
+{
+ return rb_entry_safe(node->node.prev, struct timerqueue_linked_node, node);
+}
+
+static __always_inline
+bool timerqueue_linked_del(struct timerqueue_linked_head *head, struct timerqueue_linked_node *node)
+{
+ return rb_erase_linked(&node->node, &head->rb_root);
+}
+
+static __always_inline void timerqueue_linked_init(struct timerqueue_linked_node *node)
+{
+ RB_CLEAR_LINKED_NODE(&node->node);
+}
+
+static __always_inline bool timerqueue_linked_node_queued(struct timerqueue_linked_node *node)
+{
+ return !RB_EMPTY_LINKED_NODE(&node->node);
+}
+
+static __always_inline void timerqueue_linked_init_head(struct timerqueue_linked_head *head)
+{
+ head->rb_root = RB_ROOT_LINKED;
+}
+
#endif /* _LINUX_TIMERQUEUE_H */
--- a/include/linux/timerqueue_types.h
+++ b/include/linux/timerqueue_types.h
@@ -6,12 +6,21 @@
#include <linux/types.h>
struct timerqueue_node {
- struct rb_node node;
- ktime_t expires;
+ struct rb_node node;
+ ktime_t expires;
};
struct timerqueue_head {
- struct rb_root_cached rb_root;
+ struct rb_root_cached rb_root;
+};
+
+struct timerqueue_linked_node {
+ struct rb_node_linked node;
+ ktime_t expires;
+};
+
+struct timerqueue_linked_head {
+ struct rb_root_linked rb_root;
};
#endif /* _LINUX_TIMERQUEUE_TYPES_H */
--- a/lib/timerqueue.c
+++ b/lib/timerqueue.c
@@ -82,3 +82,17 @@ struct timerqueue_node *timerqueue_itera
return container_of(next, struct timerqueue_node, node);
}
EXPORT_SYMBOL_GPL(timerqueue_iterate_next);
+
+#define __node_2_tq_linked(_n) \
+ container_of(rb_entry((_n), struct rb_node_linked, node), struct timerqueue_linked_node, node)
+
+static __always_inline bool __tq_linked_less(struct rb_node *a, const struct rb_node *b)
+{
+ return __node_2_tq_linked(a)->expires < __node_2_tq_linked(b)->expires;
+}
+
+bool timerqueue_linked_add(struct timerqueue_linked_head *head, struct timerqueue_linked_node *node)
+{
+ return rb_add_linked(&node->node, &head->rb_root, __tq_linked_less);
+}
+EXPORT_SYMBOL_GPL(timerqueue_linked_add); | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:38:52 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | To prepare for optimizing the rearming of enqueued timers, switch to the
linked timerqueue. That allows to check whether the new expiry time changes
the position of the timer in the RB tree or not, by checking the new expiry
time against the previous and the next timers expiry.
Signed-off-by: Thomas Gleixner <tglx@kernel.org
---
include/linux/hrtimer_defs.h | 16 ++++++++--------
include/linux/hrtimer_types.h | 8 ++++----
kernel/time/hrtimer.c | 34 +++++++++++++++++-----------------
kernel/time/timer_list.c | 10 ++++------
4 files changed, 33 insertions(+), 35 deletions(-)
--- a/include/linux/hrtimer_defs.h
+++ b/include/linux/hrtimer_defs.h
@@ -25,14 +25,14 @@
* @offset: offset of this clock to the monotonic base
*/
struct hrtimer_clock_base {
- struct hrtimer_cpu_base *cpu_base;
- unsigned int index;
- clockid_t clockid;
- seqcount_raw_spinlock_t seq;
- ktime_t expires_next;
- struct hrtimer *running;
- struct timerqueue_head active;
- ktime_t offset;
+ struct hrtimer_cpu_base *cpu_base;
+ unsigned int index;
+ clockid_t clockid;
+ seqcount_raw_spinlock_t seq;
+ ktime_t expires_next;
+ struct hrtimer *running;
+ struct timerqueue_linked_head active;
+ ktime_t offset;
} __hrtimer_clock_base_align;
enum hrtimer_base_type {
--- a/include/linux/hrtimer_types.h
+++ b/include/linux/hrtimer_types.h
@@ -17,7 +17,7 @@ enum hrtimer_restart {
/**
* struct hrtimer - the basic hrtimer structure
- * @node: timerqueue node, which also manages node.expires,
+ * @node: Linked timerqueue node, which also manages node.expires,
* the absolute expiry time in the hrtimers internal
* representation. The time is related to the clock on
* which the timer is based. Is setup by adding
@@ -39,15 +39,15 @@ enum hrtimer_restart {
* The hrtimer structure must be initialized by hrtimer_setup()
*/
struct hrtimer {
- struct timerqueue_node node;
- ktime_t _softexpires;
- enum hrtimer_restart (*__private function)(struct hrtimer *);
+ struct timerqueue_linked_node node;
struct hrtimer_clock_base *base;
bool is_queued;
bool is_rel;
bool is_soft;
bool is_hard;
bool is_lazy;
+ ktime_t _softexpires;
+ enum hrtimer_restart (*__private function)(struct hrtimer *);
};
#endif /* _LINUX_HRTIMER_TYPES_H */
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -557,10 +557,10 @@ static ktime_t hrtimer_bases_next_event_
* If the excluded timer is the first on this base evaluate the
* next timer.
*/
- struct timerqueue_node *node = timerqueue_getnext(&base->active);
+ struct timerqueue_linked_node *node = timerqueue_linked_first(&base->active);
if (unlikely(&exclude->node == node)) {
- node = timerqueue_iterate_next(node);
+ node = timerqueue_linked_next(node);
if (!node)
continue;
expires = ktime_sub(node->expires, base->offset);
@@ -576,7 +576,7 @@ static ktime_t hrtimer_bases_next_event_
static __always_inline struct hrtimer *clock_base_next_timer(struct hrtimer_clock_base *base)
{
- struct timerqueue_node *next = timerqueue_getnext(&base->active);
+ struct timerqueue_linked_node *next = timerqueue_linked_first(&base->active);
return container_of(next, struct hrtimer, node);
}
@@ -938,9 +938,9 @@ static bool update_needs_ipi(struct hrti
active &= cpu_base->active_bases;
for_each_active_base(base, cpu_base, active) {
- struct timerqueue_node *next;
+ struct timerqueue_linked_node *next;
- next = timerqueue_getnext(&base->active);
+ next = timerqueue_linked_first(&base->active);
expires = ktime_sub(next->expires, base->offset);
if (expires < cpu_base->expires_next)
return true;
@@ -1112,7 +1112,7 @@ static bool enqueue_hrtimer(struct hrtim
/* Pairs with the lockless read in hrtimer_is_queued() */
WRITE_ONCE(timer->is_queued, HRTIMER_STATE_ENQUEUED);
- if (!timerqueue_add(&base->active, &timer->node))
+ if (!timerqueue_linked_add(&base->active, &timer->node))
return false;
base->expires_next = hrtimer_get_expires(timer);
@@ -1121,7 +1121,7 @@ static bool enqueue_hrtimer(struct hrtim
static inline void base_update_next_timer(struct hrtimer_clock_base *base)
{
- struct timerqueue_node *next = timerqueue_getnext(&base->active);
+ struct timerqueue_linked_node *next = timerqueue_linked_first(&base->active);
base->expires_next = next ? next->expires : KTIME_MAX;
}
@@ -1148,9 +1148,9 @@ static void __remove_hrtimer(struct hrti
/* Pairs with the lockless read in hrtimer_is_queued() */
WRITE_ONCE(timer->is_queued, newstate);
- was_first = &timer->node == timerqueue_getnext(&base->active);
+ was_first = !timerqueue_linked_prev(&timer->node);
- if (!timerqueue_del(&base->active, &timer->node))
+ if (!timerqueue_linked_del(&base->active, &timer->node))
cpu_base->active_bases &= ~(1 << base->index);
/* Nothing to update if this was not the first timer in the base */
@@ -1212,8 +1212,8 @@ remove_and_enqueue_same_base(struct hrti
/* Remove it from the timer queue if active */
if (timer->is_queued) {
debug_hrtimer_deactivate(timer);
- was_first = &timer->node == timerqueue_getnext(&base->active);
- timerqueue_del(&base->active, &timer->node);
+ was_first = !timerqueue_linked_prev(&timer->node);
+ timerqueue_linked_del(&base->active, &timer->node);
}
/* Set the new expiry time */
@@ -1226,7 +1226,7 @@ remove_and_enqueue_same_base(struct hrti
WRITE_ONCE(timer->is_queued, HRTIMER_STATE_ENQUEUED);
/* If it's the first expiring timer now or again, update base */
- if (timerqueue_add(&base->active, &timer->node)) {
+ if (timerqueue_linked_add(&base->active, &timer->node)) {
base->expires_next = expires;
return true;
}
@@ -1758,7 +1758,7 @@ static void __hrtimer_setup(struct hrtim
timer->is_hard = !!(mode & HRTIMER_MODE_HARD);
timer->is_lazy = !!(mode & HRTIMER_MODE_LAZY_REARM);
timer->base = &cpu_base->clock_base[base];
- timerqueue_init(&timer->node);
+ timerqueue_linked_init(&timer->node);
if (WARN_ON_ONCE(!fn))
ACCESS_PRIVATE(timer, function) = hrtimer_dummy_timeout;
@@ -1923,7 +1923,7 @@ static void __run_hrtimer(struct hrtimer
static __always_inline struct hrtimer *clock_base_next_timer_safe(struct hrtimer_clock_base *base)
{
- struct timerqueue_node *next = timerqueue_getnext(&base->active);
+ struct timerqueue_linked_node *next = timerqueue_linked_first(&base->active);
return next ? container_of(next, struct hrtimer, node) : NULL;
}
@@ -2369,7 +2369,7 @@ int hrtimers_prepare_cpu(unsigned int cp
clock_b->cpu_base = cpu_base;
seqcount_raw_spinlock_init(&clock_b->seq, &cpu_base->lock);
- timerqueue_init_head(&clock_b->active);
+ timerqueue_linked_init_head(&clock_b->active);
}
cpu_base->cpu = cpu;
@@ -2399,10 +2399,10 @@ int hrtimers_cpu_starting(unsigned int c
static void migrate_hrtimer_list(struct hrtimer_clock_base *old_base,
struct hrtimer_clock_base *new_base)
{
- struct timerqueue_node *node;
+ struct timerqueue_linked_node *node;
struct hrtimer *timer;
- while ((node = timerqueue_getnext(&old_base->active))) {
+ while ((node = timerqueue_linked_first(&old_base->active))) {
timer = container_of(node, struct hrtimer, node);
BUG_ON(hrtimer_callback_running(timer));
debug_hrtimer_deactivate(timer);
--- a/kernel/time/timer_list.c
+++ b/kernel/time/timer_list.c
@@ -56,13 +56,11 @@ print_timer(struct seq_file *m, struct h
(long long)(ktime_to_ns(hrtimer_get_expires(timer)) - now));
}
-static void
-print_active_timers(struct seq_file *m, struct hrtimer_clock_base *base,
- u64 now)
+static void print_active_timers(struct seq_file *m, struct hrtimer_clock_base *base, u64 now)
{
+ struct timerqueue_linked_node *curr;
struct hrtimer *timer, tmp;
unsigned long next = 0, i;
- struct timerqueue_node *curr;
unsigned long flags;
next_one:
@@ -72,13 +70,13 @@ print_active_timers(struct seq_file *m,
raw_spin_lock_irqsave(&base->cpu_base->lock, flags);
- curr = timerqueue_getnext(&base->active);
+ curr = timerqueue_linked_first(&base->active);
/*
* Crude but we have to do this O(N*N) thing, because
* we have to unlock the base when printing:
*/
while (curr && i < next) {
- curr = timerqueue_iterate_next(curr);
+ curr = timerqueue_linked_next(curr);
i++;
} | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:38:57 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | When modifying the expiry of a armed timer it is first dequeued, then the
expiry value is updated and then it is queued again.
This can be avoided when the new expiry value is within the range of the
previous and the next timer as that does not change the position in the RB
tree.
The linked timerqueue allows to peak ahead to the neighbours and check
whether the new expiry time is within the range of the previous and next
timer. If so just modify the timer in place and spare the enqueue and
requeue effort, which might end up rotating the RB tree twice for nothing.
This speeds up the handling of frequently rearmed hrtimers, like the hrtick
scheduler timer significantly.
Signed-off-by: Thomas Gleixner <tglx@kernel.org
---
kernel/time/hrtimer.c | 37 ++++++++++++++++++++++++++++++++++++-
1 file changed, 36 insertions(+), 1 deletion(-)
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -1203,6 +1203,31 @@ static inline bool remove_hrtimer(struct
return false;
}
+/*
+ * Update in place has to retrieve the expiry times of the neighbour nodes
+ * if they exist. That is cache line neutral because the dequeue/enqueue
+ * operation is going to need the same cache lines. But there is a big win
+ * when the dequeue/enqueue can be avoided because the RB tree does not
+ * have to be rebalanced twice.
+ */
+static inline bool
+hrtimer_can_update_in_place(struct hrtimer *timer, struct hrtimer_clock_base *base, ktime_t expires)
+{
+ struct timerqueue_linked_node *next = timerqueue_linked_next(&timer->node);
+ struct timerqueue_linked_node *prev = timerqueue_linked_prev(&timer->node);
+
+ /* If the new expiry goes behind the next timer, requeue is required */
+ if (next && expires > next->expires)
+ return false;
+
+ /* If this is the first timer, update in place */
+ if (!prev)
+ return true;
+
+ /* Update in place when it does not go ahead of the previous one */
+ return expires >= prev->expires;
+}
+
static inline bool
remove_and_enqueue_same_base(struct hrtimer *timer, struct hrtimer_clock_base *base,
const enum hrtimer_mode mode, ktime_t expires, u64 delta_ns)
@@ -1211,8 +1236,18 @@ remove_and_enqueue_same_base(struct hrti
/* Remove it from the timer queue if active */
if (timer->is_queued) {
- debug_hrtimer_deactivate(timer);
was_first = !timerqueue_linked_prev(&timer->node);
+
+ /* Try to update in place to avoid the de/enqueue dance */
+ if (hrtimer_can_update_in_place(timer, base, expires)) {
+ hrtimer_set_expires_range_ns(timer, expires, delta_ns);
+ trace_hrtimer_start(timer, mode, true);
+ if (was_first)
+ base->expires_next = expires;
+ return was_first;
+ }
+
+ debug_hrtimer_deactivate(timer);
timerqueue_linked_del(&base->active, &timer->node);
} | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:39:02 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | From: Peter Zijlstra <peterz@infradead.org>
The deferred rearm of the clock event device after an interrupt and and
other hrtimer optimizations allow now to enable HRTICK for generic entry
architectures.
This decouples preemption from CONFIG_HZ, leaving only the periodic
load-balancer and various accounting things relying on the tick.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
kernel/sched/features.h | 5 +++++
1 file changed, 5 insertions(+)
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -63,8 +63,13 @@ SCHED_FEAT(DELAY_ZERO, true)
*/
SCHED_FEAT(WAKEUP_PREEMPTION, true)
+#ifdef CONFIG_HRTIMER_REARM_DEFERRED
+SCHED_FEAT(HRTICK, true)
+SCHED_FEAT(HRTICK_DL, true)
+#else
SCHED_FEAT(HRTICK, false)
SCHED_FEAT(HRTICK_DL, false)
+#endif
/*
* Decrement CPU capacity based on time not spent running tasks | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Tue, 24 Feb 2026 17:39:08 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | On Tue, Feb 24, 2026 at 05:35:12PM +0100, Thomas Gleixner wrote:
If you'd have added the shortlog, you'd have made nearly 350 lines :-)
Peter Zijlstra (11):
sched/eevdf: Fix HRTICK duration
hrtimer: Avoid pointless reprogramming in __hrtimer_start_range_ns()
hrtimer: Provide LAZY_REARM mode
sched/hrtick: Mark hrtick timer LAZY_REARM
hrtimer: Re-arrange hrtimer_interrupt()
hrtimer: Prepare stubs for deferred rearming
entry: Prepare for deferred hrtimer rearming
softirq: Prepare for deferred hrtimer rearming
sched/core: Prepare for deferred hrtimer rearming
hrtimer: Push reprogramming timers into the interrupt return path
sched: Default enable HRTICK when deferred rearming is enabled
Peter Zijlstra (Intel) (2):
sched/fair: Simplify hrtick_update()
sched/fair: Make hrtick resched hard
Thomas Gleixner (35):
sched: Avoid ktime_get() indirection
hrtimer: Provide a static branch based hrtimer_hres_enabled()
sched: Use hrtimer_highres_enabled()
sched: Optimize hrtimer handling
sched/hrtick: Avoid tiny hrtick rearms
tick/sched: Avoid hrtimer_cancel/start() sequence
clockevents: Remove redundant CLOCK_EVT_FEAT_KTIME
timekeeping: Allow inlining clocksource::read()
x86: Inline TSC reads in timekeeping
x86/apic: Remove pointless fence in lapic_next_deadline()
x86/apic: Avoid the PVOPS indirection for the TSC deadline timer
timekeeping: Provide infrastructure for coupled clockevents
clockevents: Provide support for clocksource coupled comparators
x86/apic: Enable TSC coupled programming mode
hrtimer: Add debug object init assertion
hrtimer: Reduce trace noise in hrtimer_start()
hrtimer: Use guards where appropriate
hrtimer: Cleanup coding style and comments
hrtimer: Evaluate timer expiry only once
hrtimer: Replace the bitfield in hrtimer_cpu_base
hrtimer: Convert state and properties to boolean
hrtimer: Optimize for local timers
hrtimer: Use NOHZ information for locality
hrtimer: Separate remove/enqueue handling for local timers
hrtimer: Add hrtimer_rearm tracepoint
hrtimer: Rename hrtimer_cpu_base::in_hrtirq to deferred_rearm
hrtimer: Avoid re-evaluation when nothing changed
hrtimer: Keep track of first expiring timer per clock base
hrtimer: Rework next event evaluation
hrtimer: Simplify run_hrtimer_queues()
hrtimer: Optimize for_each_active_base()
rbtree: Provide rbtree with links
timerqueue: Provide linked timerqueue
hrtimer: Use linked timerqueue
hrtimer: Try to modify timers in place
Anyway, since I've been staring at these patches for over a week now:
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
You want me to go queue them in tip/sched/hrtick, tip/timer/hrick and
then merge both into tip/sched/core and have tip/timer/core only include
tip/timer/hrtick or something? | {
"author": "Peter Zijlstra <peterz@infradead.org>",
"date": "Wed, 25 Feb 2026 16:25:00 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | On Wed, Feb 25 2026 at 16:25, Peter Zijlstra wrote:
I"d like to split them up and only pull the minimal stuff into the
subsystem branches. I made a plan already, but I can't find the notes
right now. I'll dig them out later.
Thanks,
tglx | {
"author": "Thomas Gleixner <tglx@kernel.org>",
"date": "Wed, 25 Feb 2026 17:02:27 +0100",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | Peter recently posted a series tweaking the hrtimer subsystem to reduce the
overhead of the scheduler hrtick timer so it can be enabled by default:
https://lore.kernel.org/20260121162010.647043073@infradead.org
That turned out to be incomplete and led to a deeper investigation of the
related bits and pieces.
The problem is that the hrtick deadline changes on every context switch and
is also modified by wakeups and balancing. On a hackbench run this results
in about 2500 clockevent reprogramming cycles per second, which is
especially hurtful in a VM as accessing the clockevent device implies a
VM-Exit.
The following series addresses various aspects of the overall related
problem space:
1) Scheduler
Aside of some trivial fixes the handling of the hrtick timer in
the scheduler is suboptimal:
- schedule() modifies the hrtick when picking the next task
- schedule() can modify the hrtick when the balance callback runs
before releasing rq:lock
- the expiry time is unfiltered and can result in really tiny
changes of the expiry time, which are functionally completely
irrelevant
Solve this by deferring the hrtick update to the end of schedule()
and filtering out tiny changes.
2) Clocksource, clockevents, timekeeping
- Reading the current clocksource involves an indirect call, which
is expensive especially for clocksources where the actual read is
a single instruction like the TSC read on x86.
This could be solved with a static call, but the architecture
coverage for static calls is meager and that still has the
overhead of a function call and in the worst case a return
speculation mitigation.
As x86 and other architectures like S390 have one preferred
clocksource which is normally used on all contemporary systems,
this begs for a fully inlined solution.
This is achieved by a config option which tells the core code to
use the architecture provided inline guarded by a static branch.
If the branch is disabled, the indirect function call is used as
before. If enabled the inlined read is utilized.
The branch is disabled by default and only enabled after a
clocksource is installed which has the INLINE feature flag
set. When the clocksource is replaced the branch is disabled
before the clocksource change happens.
- Programming clock events is based on calculating a relative
expiry time, converting it to the clock cycles corresponding to
the clockevent device frequency and invoking the set_next_event()
callback of the clockevent device.
That works perfectly fine as most hardware timers are count down
implementations which require a relative time for programming.
But clockevent devices which are coupled to the clocksource and
provide a less than equal comparator suffer from this scheme. The
core calculates the relative expiry time based on a clock read
and the set_next_event() callback has to read the same clock
again to convert it back to a absolute time which can be
programmed into the comparator.
The other issue is that the conversion factor of the clockevent
device is calculated at boot time and does not take the NTP/PTP
adjustments of the clocksource frequency into account. Depending
on the direction of the adjustment this can cause timers to fire
early or late. Early is the more problematic case as the timer
interrupt has to reprogram the device with a very short delta as
it can't expire timers early.
This can be optimized by introducing a 'coupled' mode for the
clocksource and the clockevent device.
A) If the clocksource indicates support for 'coupled' mode, the
timekeeping core calculates a (NTP adjusted) reverse
conversion factor from the clocksource to nanoseconds
conversion. This takes NTP adjustments into account and
keeps the conversion in sync.
B) The timekeeping core provides a function to convert an
absolute CLOCK_MONOTONIC expiry time into a absolute time in
clocksource cycles which can be programmed directly into the
comparator without reading the clocksource at all.
This is possible because timekeeping keeps a time pair of
the base cycle count and the corresponding CLOCK_MONOTONIC base
time at the last update of the timekeeper.
So the absolute cycle time can be calculated by calculating
the relative time to the CLOCK_MONOTONIC base time,
converting the delta into cycles with the help of #A and
adding the base cycle count. Pure math, no hardware access.
C) The clockevent reprogramming code invokes this conversion
function when the clockevent device indicates 'coupled'
mode. The function returns false when the corresponding
clocksource is not the current system clocksource (based on
a clocksource ID check) and true if the clocksource matches
and the conversion is successful.
If false, the regular relative set_next_event() mechanism is
used, otherwise a new set_next_coupled() callback which
takes the calculated absolute expiry time as argument.
Similar to the clocksource, this new callback can optionally
be inlined.
3) hrtimers
It turned out that the hrtimer code needed a long overdue spring
cleaning independent of the problem at hand. That was conducted
before tackling the actual performance issues:
- Timer locality
The handling of timer locality is suboptimal and results often in
pointless invocations of switch_hrtimer_base() which end up
keeping the CPU base unchanged.
Aside of the pointless overhead, this prevents further
optimizations for the common local case.
Address this by improving the decision logic for keeping the clock
base local and splitting out the (re)arm handling into a unified
operation.
- Evalutation of the clock base expiries
The clock bases (MONOTONIC, REALTIME, BOOT, TAI) cache the first
expiring timer, but not the corresponding expiry time, which means
a re-evaluation of the clock bases for the next expiring timer on
the CPU requires to touch up to for extra cache lines.
Trivial to solve by caching the earliest expiry time in the clock
base itself.
- Reprogramming of the clock event device
The hrtimer interrupt already deferres reprogramming until the
interrupt handler completes, but in case of the hrtick timer
that's not sufficient because the hrtick timer callback only sets
the NEED_RESCHED flag but has no information about the next hrtick
timer expiry time, which can only be determined in the scheduler.
Expand the deferred reprogramming so it can ideally be handled in
the subsequent schedule() after the new hrtick value has been
established. If there is no schedule, soft interrupts have to be
processed on return from interrupt or a nested interrupt hits
before reaching schedule, the deferred programming is handled in
those contexts.
- Modification of queued timers
If a timer is already queued modifying the expiry time requires
dequeueing from the RB tree and requeuing after the new expiry
value has been updated. It turned out that the hrtick timer
modification end up very often at the same spot in the RB tree as
they have been before, which means the dequeue/enqueue cycle along
with the related rebalancing could have been avoided. The timer
wheel timers have a similar mechanism by checking upfront whether
the resulting expiry time keeps them in the same hash bucket.
It was tried to check this by using rb_prev() and rb_next() to
evaluate whether the modification keeps the timer in the same
spot, but that turned out to be really inefficent.
Solve this by providing a RB tree variant which extends the node
with links to the previous and next nodes, which is established
when the node is linked into the tree or adjusted when it is
removed. These links allow a quick peek into the previous and next
expiry time and if the new expiry stays in the boundary the whole
RB tree operation can be avoided.
This also simplifies the caching and update of the leftmost node
as on remove the rb_next() walk can be completely avoided. It
would obviously provide a cached rightmost pointer too, but there
is not use case for that (yet).
On a hackbench run this results in about 35% of the updates being
handled that way, which cuts the execution time of
hrtimer_start_range_ns() down to 50ns on a 2GHz machine.
- Cancellation of queued timers
Cancelling a timer or moving its expiry time past the programmed
time can result in reprogramming the clock event device.
Especially with frequent modifications of a queued timer this
results in substantial overhead especially in VMs.
Provide an option for hrtimers to tell the core to handle
reprogramming lazy in those cases, which means it trades frequent
reprogramming against an occasional pointless hrtimer interrupt.
But it turned out for the hrtick timer this is a reasonable
tradeoff. It's especially valuable when transitioning to idle,
where the timer has to be cancelled but then the NOHZ idle code
will reprogram it in case of a long idle sleep anyway. But also in
high frequency scheduling scenarios this turned out to be
beneficial.
With all the above modifications in place enabling hrtick does not longer
result in regressions compared to the hrtick disabled mode.
The reprogramming frequency of the clockevent device got down from
~2500/sec to ~100/sec for a hackbench run with a spurious hrtimer interrupt
ratio of about 25%.
What's interesting is the astonishing improvement of a hackbench run with
the following command line parameters: '-l$LOOPS -p -s8'. That uses pipes
with a message size of 8 bytes. On a 112 CPU SKL machine this results in:
NO HRTICK[_DL] HRTICK[_DL]
runtime: 0.840s 0.481s ~-42%
With other message sizes up to 256, HRTICK still results in improvements,
but not in that magnitude. Haven't investigated the cause of that yet.
While quite some parts of the series are independent enhancements, I've
decided to keep them together in one big pile for now as all of the
components are required to actually achieve the overall goal.
The patches have been already structured in a way that they can be
distributed to different subsystem branches without causing major cross
subsystem contamination or merge conflict headaches.
The series applies on v7.0-rc1 and is also available from git:
git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git sched/hrtick
Thanks,
tglx
---
arch/x86/Kconfig | 2
arch/x86/include/asm/clock_inlined.h | 22
arch/x86/kernel/apic/apic.c | 41 -
arch/x86/kernel/tsc.c | 4
include/asm-generic/thread_info_tif.h | 5
include/linux/clockchips.h | 8
include/linux/clocksource.h | 3
include/linux/hrtimer.h | 59 -
include/linux/hrtimer_defs.h | 79 +-
include/linux/hrtimer_rearm.h | 83 ++
include/linux/hrtimer_types.h | 19
include/linux/irq-entry-common.h | 25
include/linux/rbtree.h | 81 ++
include/linux/rbtree_types.h | 16
include/linux/rseq_entry.h | 14
include/linux/timekeeper_internal.h | 8
include/linux/timerqueue.h | 56 +
include/linux/timerqueue_types.h | 15
include/trace/events/timer.h | 35 -
kernel/entry/common.c | 4
kernel/sched/core.c | 89 ++
kernel/sched/deadline.c | 2
kernel/sched/fair.c | 55 -
kernel/sched/features.h | 5
kernel/sched/sched.h | 41 -
kernel/softirq.c | 15
kernel/time/Kconfig | 16
kernel/time/clockevents.c | 48 +
kernel/time/hrtimer.c | 1116 +++++++++++++++++++---------------
kernel/time/tick-broadcast-hrtimer.c | 1
kernel/time/tick-sched.c | 27
kernel/time/timekeeping.c | 184 +++++
kernel/time/timekeeping.h | 2
kernel/time/timer_list.c | 12
lib/rbtree.c | 17
lib/timerqueue.c | 14
36 files changed, 1497 insertions(+), 728 deletions(-)
| null | null | null | [patch 00/48] hrtimer,sched: General optimizations and hrtick
enablement | On 2/24/26 16:38, Thomas Gleixner wrote:
Should this also be EXIT_TO_USER_MODE_WORK_IRQ?
I guess it doesn't really matter for now (since arm64 doesn't have the generic entry
path and generic TIF bits yet and therefore HRTIMER_REARM_DEFERRED=n), but I've been
playing around with the this series, the generic entry series
https://lore.kernel.org/lkml/20260203133728.848283-1-ruanjinjie@huawei.com
(and using generic TIF bits) and noticed this. | {
"author": "Christian Loehle <christian.loehle@arm.com>",
"date": "Fri, 27 Feb 2026 15:57:55 +0000",
"is_openbsd": false,
"thread_id": "51f8535f-b98c-48e3-ba7b-f28759a92c16@arm.com.mbox.gz"
} |
lkml_critique | lkml | From: Markus Elfring <elfring@users.sourceforge.net>
Date: Fri, 27 Feb 2026 10:16:50 +0100
Use an additional label so that a bit of common code can be better reused
at the end of this function implementation.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
---
drivers/i3c/master/dw-i3c-master.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/drivers/i3c/master/dw-i3c-master.c b/drivers/i3c/master/dw-i3c-master.c
index d87bde3f7700..beb40d947e6e 100644
--- a/drivers/i3c/master/dw-i3c-master.c
+++ b/drivers/i3c/master/dw-i3c-master.c
@@ -1113,8 +1113,7 @@ static int dw_i3c_master_i2c_xfers(struct i2c_dev_desc *dev,
dev_err(master->dev,
"<%s> cannot resume i3c bus master, err: %d\n",
__func__, ret);
- dw_i3c_master_free_xfer(xfer);
- return ret;
+ goto free_xfer;
}
for (i = 0; i < i2c_nxfers; i++) {
@@ -1144,10 +1143,10 @@ static int dw_i3c_master_i2c_xfers(struct i2c_dev_desc *dev,
if (!wait_for_completion_timeout(&xfer->comp, m->i2c.timeout))
dw_i3c_master_dequeue_xfer(master, xfer);
+ pm_runtime_put_autosuspend(master->dev);
ret = xfer->ret;
+free_xfer:
dw_i3c_master_free_xfer(xfer);
-
- pm_runtime_put_autosuspend(master->dev);
return ret;
}
--
2.53.0
| null | null | null | [PATCH] i3c: dw: Use more common code in dw_i3c_master_i2c_xfers() | On Fri, Feb 27, 2026 at 11:20:59AM +0100, Markus Elfring wrote:
Reviewed-by: Frank Li <Frank.Li@nxp.com> | {
"author": "Frank Li <Frank.li@nxp.com>",
"date": "Fri, 27 Feb 2026 10:58:10 -0500",
"is_openbsd": false,
"thread_id": "aaG_El2oEFOwjn9l@lizhi-Precision-Tower-5810.mbox.gz"
} |
lkml_critique | lkml | Hi,
This series add support for SFPs ports available on the LAN966x PCI
device. In order to have the SFPs supported, additional devices are
needed such as clock controller and I2C.
As a reminder, the LAN966x PCI device driver use a device-tree overlay
to describe devices available on the PCI board. Adding support for SFPs
ports consists in adding more devices in the already existing
device-tree overlay.
With those devices added, the device-tree overlay is more complex and
some consumer/supplier relationship are needed in order to remove
devices in correct order when the LAN966x PCI driver is removed.
Those links are typically provided by fw_devlink and we faced some
issues with fw_devlink and overlays.
This series gives the big picture related to the SFPs support from
fixing issues to adding new devices. Of course, it can be split if
needed.
The first part of the series (patch 1, 2 and 3) fixes fw_devlink when it
is used with overlay. Patches 1 and 3 were previously sent by Saravana
[0]. I rebased them on top of v7.0-rc1 and added patch 2 in order to
take into account feedback received on the series sent by Saravana.
Also I added a call to driver_deferred_probe_trigger() in Saravana's
patch (patch 3) to ensure that probes are retried after the modification
performed on the dangling consumers. This allows to fix issues reported
by Matti and Geert [2] with the previous iteration patches.
Those modification were not sufficient in our case and so, on top of
that, patches 4 to 6 fix some more issues related to fw_devlink.
Patches 7 to 12 introduce and use fw_devlink_set_device() in already
existing code.
Patches 13 and 14 are related also to fw_devlink but specific to PCI and
the device-tree nodes created during enumeration.
Patches 15, 16 and 17 are related fw_devlink too but specific to I2C
muxes. Patches purpose is to correctly set a link between an adapter
supplier and its consumer. Indeed, an i2c mux adapter's parent is not
the i2c mux supplier but the adapter the i2c mux is connected to. Adding
a new link between the adapter supplier involved when i2c muxes are used
avoid a freeze observed during device removal.
Patch 18 adds support for fw_delink on x86. fw_devlink is needed to have
the consumer/supplier relationship between devices in order to ensure a
correct device removal order. Adding fw_devlink support for x86 has been
tried in the past but was reverted [1] because it broke some systems.
Instead of enabling fw_devlink on *all* x86 system, enable it on *all*
x86 except on those where it leads to issue.
Patches 19 and 20 allow to build clock and i2c controller used by the
LAN966x PCI device when the LAN966x PCI device is enabled.
Patches 21 to 25 are specific to the LAN966x. They touch the current
dtso, split it in dtsi/dtso files, rename the dtso and improve the
driver to allow easier support for other boards.
The next patch (patch 26) update the LAN966x device-tree overlay itself
to have the SPF ports and devices they depends on described.
The last two patches (patches 27 and 29) sort the existing drivers in
the needed driver list available in the Kconfig help and add new drivers
in this list keep the list up to date with the devices described in the
device-tree overlay.
We believe some items from the above list can be merged separately, with
no build dependencies. We expect:
- Patches 1 to 6 to be taken by driver core maintainers
- Patches 7 to 12 to be taken by driver core maintainers
- Patches 13 and 14 to be taken by driver core or PCI maintainers
(depend on patch 7)
- Patches 15 to 17 to be taken by I2C maintainers
- Patch 18 to be taken by driver core or OF maintainers
- Patch 19 to be taken by clock maintainers
- Patch 20 to be taken by I2C maintainers
- Patches 21 to 28 to be taken by misc maintainers
Once again, this series gives the big picture and can be split if
needed. Let me know.
Compare to previous iteration, this v5 series mainly:
- Handle Matti and Geert use cases [2]
- Remove simple-platform-bus driver introduced in v4 and switch the
simple-bus modification back to what was proposed in v3. In the v4
iteration, conclusion was to use v3 changes [3].
[0] https://lore.kernel.org/lkml/20240411235623.1260061-1-saravanak@google.com/
[1] https://lore.kernel.org/lkml/3c1f2473-92ad-bfc4-258e-a5a08ad73dd0@web.de/
[2] https://lore.kernel.org/all/072dde7c-a53c-4525-83ac-57ea38edc0b5@gmail.com/
[3] https://lore.kernel.org/lkml/20251114083056.31553866@bootlin.com/
Best regards,
Hervé
Changes:
v4 -> v5
v4: https://lore.kernel.org/lkml/20251015071420.1173068-1-herve.codina@bootlin.com/
- Patch 2:
Add 'Acked-by: Ulf Hansson'
- Patch 3:
Add a call to driver_deferred_probe_trigger()
- Patch 5: (new patch)
Depopulate devices at remove
- Patch 6:
Populate devices at probe.
Switched back to modification proposed in v3
- Patch 7 in v3 removed
- Patch 7 (8 in v4):
Add 'Reviewed-by: Andy Shevchenko'
Add 'Reviewed-by: Ulf Hansson'
- Patch 8 (9 in v4):
Add 'Reviewed-by: Ulf Hansson'
- Patches 9 to 15 (10 to 16 in v3)
No changes
- Patch 16 (17 in v4):
Add 'Reviewed-by: Andi Shyti'
- Patch 17 (18 in v4):
Change an error code from -EINVAL to -ENODEV
Add a blank line and fix a typo in commit log
- Patch 18 (19 in v4):
Simplify of_is_fwnode_add_links_supported().
Move IS_ENABLED(CONFIG_X86) check in of_is_fwnode_add_links_supported().
- Patches 19 to 21 (20 to 22 in v4)
No changes
- Patch 22 (23 in v4)
Update due to simple-platform-bus removal
- Patches 23 to 28 (24 to 29 in v4)
No changes
v3 -> v4
v3: https://lore.kernel.org/lkml/20250613134817.681832-1-herve.codina@bootlin.com/
- Patch 1:
No change
- Patch 2:
Update and fix conflicts. Indeed, since v3 iteration
get_dev_from_fwnode() has been moved to device.h and used by
pmdomain/core.c.
- Patch 3:
remove '#define get_device_from_fwnode()'
- Patch 4:
Fix conflict (rebase v6.17-rc6)
Add 'Reviewed-by: Rafael J. Wysocki'
Add 'Reviewed-by: Saravana Kannan'
- Patch 5 (new in v4):
Introduce simple-platform-bus (binding)
- Patch 6 (5 in v3):
Rework patch and introduce simple-platform-bus
- Patch 7: (new)
Use simple-platform-bus in LAN966x
- Patch 8 (6 in v3):
- No change
- Patch 9 and 10 (7 and 8 in v3):
Add 'Reviewed-by: Andy Shevchenko'
- Patch 11 and 12 (9 and 10 in v3):
Add 'Reviewed-by: Dave Jiang'
- Patch 13 (11 in v3):
Add 'Reviewed-by: Andy Shevchenko'
- Patch 12 in v3:
Patch removed.
Adding __private tag in fwnode.dev is going to be handled in a
dedicated series. Indeed a test robot reported an issue and more
patches are needed (I have missed fwnode.dev users in several part
in the kernel).
- Patch 14 and 15 (13 and 14 in v3):
No change
- Patch 16 (14 in v3):
Add 'Reviewed-by: Andi Shyti'
- Patch 17 and 18 (16 and 17 in v3):
No change
- Patch 19 (18 in v3):
Filter out support for fw_devlink on x86 based on some device-tree
properties.
Rewrite commit changelog
Remove 'Reviewed-by: Andy Shevchenko' (significant modification)
- Patch 20 (19 in v3):
Add 'Acked-by: Stephen Boyd'
- Patch 21 (20 in v3):
Fix conflict (rebase v6.18-rc1)
- Patches 22 to 24 (21 to 23 in v3):
No change
- Patch 25 (24 in v3):
Fix conflict (rebase v6.18-rc1)
Add 'Acked-by: Bjorn Helgaas'
- Patches 26 to 29 (25 to 28 in v3):
No change
v2 -> v3
v2: https://lore.kernel.org/all/20250507071315.394857-1-herve.codina@bootlin.com/
- Patch 1:
Add 'Acked-by: Mark Brown'
- Patch 2 and 3:
No changes
- Patch 4:
Rewrite the WARN_ON() condition to avoid an additional 'if'
- Patch 5:
Fix typos in commit log
Update a comment
Remove the unneeded check before calling of_platform_depopulate()
- Patches 6 to 11:
No changes
- Patch 12 (new in v3)
Tag the fwnode dev member as private
- Patch 13 (12 in v2)
Fix a typo in the commit log
- Patches 14 to 16 (13 to 15 in v2)
No changes
- Patch 17 (16 in v2)
Check parent_physdev for NULL
- Patch 18 (17 in v2)
Capitalize "Link:"
Add 'Reviewed-by: Andy Shevchenko'
- Patch 19 (18 in v2)
No changes
- Patch 20 (19 in v2)
Add 'Acked-by: Andi Shyti'
- Patch 21 (20 in v2)
No changes
- Patch 22 (21 in v2)
Add 'Reviewed-by: Andrew Lunn'
- Patch 23 (22 in v2)
Add 'Reviewed-by: Andrew Lunn'
- Patch 24 (new in v3)
Introduce PCI_DEVICE_ID_EFAR_LAN9662, the LAN966x PCI device ID
- Patch 25 (23 in v2)
Add 'Reviewed-by: Andrew Lunn'
Use PCI_DEVICE_DATA() with PCI_DEVICE_ID_EFAR_LAN9662 instead of
PCI_VDEVICE()
- Patch 26 to 28 (24 to 26 in v2)
No changes
v1 -> v2
v1: https://lore.kernel.org/lkml/20250407145546.270683-1-herve.codina@bootlin.com/
- Patch 1 and 3
Remove 'From' tag from the commit log
- Patch 2
Add 'Reviewed-by: Andy Shevchenko'
Add 'Reviewed-by: Saravana Kannan'
Add 'Reviewed-by: Luca Ceresoli'
- Patch 4 and 5
No changes
- Patch 6 (new in v2)
Introduce fw_devlink_set_device()
- Patch 7 (new in v2)
Use existing device_set_node() helper.
- Patch 8 to 11 (new in v2)
Use fw_devlink_set_device() in existing code.
- Patch 12 (6 in v1)
Use fw_devlink_add_device()
- Patch 13 (7 in v1)
No changes
- Patch 14 (8 in v1)
Update commit log
Use 'physdev' instead of 'supplier'
Minor fixes in i2c_get_adapter_physdev() kdoc
- Patch 15 and 16 (9 and 10 in v1)
Use 'physdev' instead of 'supplier' (commit log, title and code)
- Patch 17 (11 in v2)
Enable fw_devlink on x86 only if PCI_DYNAMIC_OF_NODES is enabled.
Rework commit log.
- Patch 18, 19 and 20 (12, 13 and 14 in v1)
No changes
- Patch 21 (new in v2)
Split dtso in dtsi/dtso
- Patch 22 (new in v2)
Rename lan966x_pci.dtso using the specific board name
- Patch 23 (new in v2)
Improve the driver introducing board specific data to ease support
for other boards (avoid the direct dtbo reference in the function
loading the dtbo).
- Patch 24 (15 in v1)
Refactor due to dtso split in dtsi/dtso
- Patch 25 (new in v2)
Sort exist driver list in Kconfig help
- Patch 26 (16 in v1)
Keep alphanumeric order for new drivers added in Kconfig help
Herve Codina (26):
driver core: Rename get_dev_from_fwnode() wrapper to
get_device_from_fwnode()
driver core: Avoid warning when removing a device while its supplier
is unbinding
bus: simple-pm-bus: Remove child devices when the bus is unbound
bus: simple-pm-bus: Populate child nodes at probe
driver core: fw_devlink: Introduce fw_devlink_set_device()
drivers: core: Use fw_devlink_set_device()
pinctrl: cs42l43: Use fw_devlink_set_device()
cxl/test: Use device_set_node()
cxl/test: Use fw_devlink_set_device()
PCI: of: Use fw_devlink_set_device()
PCI: of: Set fwnode device of newly created PCI device nodes
PCI: of: Remove fwnode_dev_initialized() call for a PCI root bridge
node
i2c: core: Introduce i2c_get_adapter_physdev()
i2c: mux: Set adapter physical device
i2c: mux: Create missing devlink between mux and adapter physical
device
of: property: Allow fw_devlink device-tree on x86
clk: lan966x: Add MCHP_LAN966X_PCI dependency
i2c: busses: at91: Add MCHP_LAN966X_PCI dependency
misc: lan966x_pci: Fix dtso nodes ordering
misc: lan966x_pci: Split dtso in dtsi/dtso
misc: lan966x_pci: Rename lan966x_pci.dtso to
lan966x_evb_lan9662_nic.dtso
PCI: Add Microchip LAN9662 PCI Device ID
misc: lan966x_pci: Introduce board specific data
misc: lan966x_pci: Add dtsi/dtso nodes in order to support SFPs
misc: lan966x_pci: Sort the drivers list in Kconfig help
misc: lan966x_pci: Add drivers needed to support SFPs in Kconfig help
Saravana Kannan (2):
Revert "treewide: Fix probing of devices in DT overlays"
of: dynamic: Fix overlayed devices not probing because of fw_devlink
MAINTAINERS | 3 +-
drivers/base/core.c | 108 ++++++++++---
drivers/bus/imx-weim.c | 6 -
drivers/bus/simple-pm-bus.c | 24 +--
drivers/clk/Kconfig | 2 +-
drivers/i2c/busses/Kconfig | 2 +-
drivers/i2c/i2c-core-base.c | 16 ++
drivers/i2c/i2c-core-of.c | 5 -
drivers/i2c/i2c-mux.c | 26 ++++
drivers/misc/Kconfig | 11 +-
drivers/misc/Makefile | 2 +-
drivers/misc/lan966x_evb_lan9662_nic.dtso | 167 ++++++++++++++++++++
drivers/misc/lan966x_pci.c | 30 +++-
drivers/misc/lan966x_pci.dtsi | 172 +++++++++++++++++++++
drivers/misc/lan966x_pci.dtso | 177 ----------------------
drivers/of/dynamic.c | 1 -
drivers/of/overlay.c | 15 ++
drivers/of/platform.c | 5 -
drivers/of/property.c | 26 +++-
drivers/pci/of.c | 10 +-
drivers/pci/quirks.c | 2 +-
drivers/pinctrl/cirrus/pinctrl-cs42l43.c | 2 +-
drivers/pmdomain/core.c | 4 +-
drivers/spi/spi.c | 5 -
include/linux/device.h | 2 +-
include/linux/fwnode.h | 7 +
include/linux/i2c.h | 3 +
include/linux/pci_ids.h | 1 +
tools/testing/cxl/test/cxl.c | 4 +-
29 files changed, 584 insertions(+), 254 deletions(-)
create mode 100644 drivers/misc/lan966x_evb_lan9662_nic.dtso
create mode 100644 drivers/misc/lan966x_pci.dtsi
delete mode 100644 drivers/misc/lan966x_pci.dtso
--
2.53.0
| null | null | null | [PATCH v5 00/28] lan966x pci device: Add support for SFPs | On Fri, Feb 27, 2026 at 02:54:04PM +0100, Herve Codina wrote:
Reviewed-by: Charles Keepax <ckeepax@opensource.cirrus.com>
Thanks,
Charles | {
"author": "Charles Keepax <ckeepax@opensource.cirrus.com>",
"date": "Fri, 27 Feb 2026 15:57:51 +0000",
"is_openbsd": false,
"thread_id": "aaG%2FF3JKIpsefS5f@opensource.cirrus.com.mbox.gz"
} |
lkml_critique | lkml | Hi,
This series add support for SFPs ports available on the LAN966x PCI
device. In order to have the SFPs supported, additional devices are
needed such as clock controller and I2C.
As a reminder, the LAN966x PCI device driver use a device-tree overlay
to describe devices available on the PCI board. Adding support for SFPs
ports consists in adding more devices in the already existing
device-tree overlay.
With those devices added, the device-tree overlay is more complex and
some consumer/supplier relationship are needed in order to remove
devices in correct order when the LAN966x PCI driver is removed.
Those links are typically provided by fw_devlink and we faced some
issues with fw_devlink and overlays.
This series gives the big picture related to the SFPs support from
fixing issues to adding new devices. Of course, it can be split if
needed.
The first part of the series (patch 1, 2 and 3) fixes fw_devlink when it
is used with overlay. Patches 1 and 3 were previously sent by Saravana
[0]. I rebased them on top of v7.0-rc1 and added patch 2 in order to
take into account feedback received on the series sent by Saravana.
Also I added a call to driver_deferred_probe_trigger() in Saravana's
patch (patch 3) to ensure that probes are retried after the modification
performed on the dangling consumers. This allows to fix issues reported
by Matti and Geert [2] with the previous iteration patches.
Those modification were not sufficient in our case and so, on top of
that, patches 4 to 6 fix some more issues related to fw_devlink.
Patches 7 to 12 introduce and use fw_devlink_set_device() in already
existing code.
Patches 13 and 14 are related also to fw_devlink but specific to PCI and
the device-tree nodes created during enumeration.
Patches 15, 16 and 17 are related fw_devlink too but specific to I2C
muxes. Patches purpose is to correctly set a link between an adapter
supplier and its consumer. Indeed, an i2c mux adapter's parent is not
the i2c mux supplier but the adapter the i2c mux is connected to. Adding
a new link between the adapter supplier involved when i2c muxes are used
avoid a freeze observed during device removal.
Patch 18 adds support for fw_delink on x86. fw_devlink is needed to have
the consumer/supplier relationship between devices in order to ensure a
correct device removal order. Adding fw_devlink support for x86 has been
tried in the past but was reverted [1] because it broke some systems.
Instead of enabling fw_devlink on *all* x86 system, enable it on *all*
x86 except on those where it leads to issue.
Patches 19 and 20 allow to build clock and i2c controller used by the
LAN966x PCI device when the LAN966x PCI device is enabled.
Patches 21 to 25 are specific to the LAN966x. They touch the current
dtso, split it in dtsi/dtso files, rename the dtso and improve the
driver to allow easier support for other boards.
The next patch (patch 26) update the LAN966x device-tree overlay itself
to have the SPF ports and devices they depends on described.
The last two patches (patches 27 and 29) sort the existing drivers in
the needed driver list available in the Kconfig help and add new drivers
in this list keep the list up to date with the devices described in the
device-tree overlay.
We believe some items from the above list can be merged separately, with
no build dependencies. We expect:
- Patches 1 to 6 to be taken by driver core maintainers
- Patches 7 to 12 to be taken by driver core maintainers
- Patches 13 and 14 to be taken by driver core or PCI maintainers
(depend on patch 7)
- Patches 15 to 17 to be taken by I2C maintainers
- Patch 18 to be taken by driver core or OF maintainers
- Patch 19 to be taken by clock maintainers
- Patch 20 to be taken by I2C maintainers
- Patches 21 to 28 to be taken by misc maintainers
Once again, this series gives the big picture and can be split if
needed. Let me know.
Compare to previous iteration, this v5 series mainly:
- Handle Matti and Geert use cases [2]
- Remove simple-platform-bus driver introduced in v4 and switch the
simple-bus modification back to what was proposed in v3. In the v4
iteration, conclusion was to use v3 changes [3].
[0] https://lore.kernel.org/lkml/20240411235623.1260061-1-saravanak@google.com/
[1] https://lore.kernel.org/lkml/3c1f2473-92ad-bfc4-258e-a5a08ad73dd0@web.de/
[2] https://lore.kernel.org/all/072dde7c-a53c-4525-83ac-57ea38edc0b5@gmail.com/
[3] https://lore.kernel.org/lkml/20251114083056.31553866@bootlin.com/
Best regards,
Hervé
Changes:
v4 -> v5
v4: https://lore.kernel.org/lkml/20251015071420.1173068-1-herve.codina@bootlin.com/
- Patch 2:
Add 'Acked-by: Ulf Hansson'
- Patch 3:
Add a call to driver_deferred_probe_trigger()
- Patch 5: (new patch)
Depopulate devices at remove
- Patch 6:
Populate devices at probe.
Switched back to modification proposed in v3
- Patch 7 in v3 removed
- Patch 7 (8 in v4):
Add 'Reviewed-by: Andy Shevchenko'
Add 'Reviewed-by: Ulf Hansson'
- Patch 8 (9 in v4):
Add 'Reviewed-by: Ulf Hansson'
- Patches 9 to 15 (10 to 16 in v3)
No changes
- Patch 16 (17 in v4):
Add 'Reviewed-by: Andi Shyti'
- Patch 17 (18 in v4):
Change an error code from -EINVAL to -ENODEV
Add a blank line and fix a typo in commit log
- Patch 18 (19 in v4):
Simplify of_is_fwnode_add_links_supported().
Move IS_ENABLED(CONFIG_X86) check in of_is_fwnode_add_links_supported().
- Patches 19 to 21 (20 to 22 in v4)
No changes
- Patch 22 (23 in v4)
Update due to simple-platform-bus removal
- Patches 23 to 28 (24 to 29 in v4)
No changes
v3 -> v4
v3: https://lore.kernel.org/lkml/20250613134817.681832-1-herve.codina@bootlin.com/
- Patch 1:
No change
- Patch 2:
Update and fix conflicts. Indeed, since v3 iteration
get_dev_from_fwnode() has been moved to device.h and used by
pmdomain/core.c.
- Patch 3:
remove '#define get_device_from_fwnode()'
- Patch 4:
Fix conflict (rebase v6.17-rc6)
Add 'Reviewed-by: Rafael J. Wysocki'
Add 'Reviewed-by: Saravana Kannan'
- Patch 5 (new in v4):
Introduce simple-platform-bus (binding)
- Patch 6 (5 in v3):
Rework patch and introduce simple-platform-bus
- Patch 7: (new)
Use simple-platform-bus in LAN966x
- Patch 8 (6 in v3):
- No change
- Patch 9 and 10 (7 and 8 in v3):
Add 'Reviewed-by: Andy Shevchenko'
- Patch 11 and 12 (9 and 10 in v3):
Add 'Reviewed-by: Dave Jiang'
- Patch 13 (11 in v3):
Add 'Reviewed-by: Andy Shevchenko'
- Patch 12 in v3:
Patch removed.
Adding __private tag in fwnode.dev is going to be handled in a
dedicated series. Indeed a test robot reported an issue and more
patches are needed (I have missed fwnode.dev users in several part
in the kernel).
- Patch 14 and 15 (13 and 14 in v3):
No change
- Patch 16 (14 in v3):
Add 'Reviewed-by: Andi Shyti'
- Patch 17 and 18 (16 and 17 in v3):
No change
- Patch 19 (18 in v3):
Filter out support for fw_devlink on x86 based on some device-tree
properties.
Rewrite commit changelog
Remove 'Reviewed-by: Andy Shevchenko' (significant modification)
- Patch 20 (19 in v3):
Add 'Acked-by: Stephen Boyd'
- Patch 21 (20 in v3):
Fix conflict (rebase v6.18-rc1)
- Patches 22 to 24 (21 to 23 in v3):
No change
- Patch 25 (24 in v3):
Fix conflict (rebase v6.18-rc1)
Add 'Acked-by: Bjorn Helgaas'
- Patches 26 to 29 (25 to 28 in v3):
No change
v2 -> v3
v2: https://lore.kernel.org/all/20250507071315.394857-1-herve.codina@bootlin.com/
- Patch 1:
Add 'Acked-by: Mark Brown'
- Patch 2 and 3:
No changes
- Patch 4:
Rewrite the WARN_ON() condition to avoid an additional 'if'
- Patch 5:
Fix typos in commit log
Update a comment
Remove the unneeded check before calling of_platform_depopulate()
- Patches 6 to 11:
No changes
- Patch 12 (new in v3)
Tag the fwnode dev member as private
- Patch 13 (12 in v2)
Fix a typo in the commit log
- Patches 14 to 16 (13 to 15 in v2)
No changes
- Patch 17 (16 in v2)
Check parent_physdev for NULL
- Patch 18 (17 in v2)
Capitalize "Link:"
Add 'Reviewed-by: Andy Shevchenko'
- Patch 19 (18 in v2)
No changes
- Patch 20 (19 in v2)
Add 'Acked-by: Andi Shyti'
- Patch 21 (20 in v2)
No changes
- Patch 22 (21 in v2)
Add 'Reviewed-by: Andrew Lunn'
- Patch 23 (22 in v2)
Add 'Reviewed-by: Andrew Lunn'
- Patch 24 (new in v3)
Introduce PCI_DEVICE_ID_EFAR_LAN9662, the LAN966x PCI device ID
- Patch 25 (23 in v2)
Add 'Reviewed-by: Andrew Lunn'
Use PCI_DEVICE_DATA() with PCI_DEVICE_ID_EFAR_LAN9662 instead of
PCI_VDEVICE()
- Patch 26 to 28 (24 to 26 in v2)
No changes
v1 -> v2
v1: https://lore.kernel.org/lkml/20250407145546.270683-1-herve.codina@bootlin.com/
- Patch 1 and 3
Remove 'From' tag from the commit log
- Patch 2
Add 'Reviewed-by: Andy Shevchenko'
Add 'Reviewed-by: Saravana Kannan'
Add 'Reviewed-by: Luca Ceresoli'
- Patch 4 and 5
No changes
- Patch 6 (new in v2)
Introduce fw_devlink_set_device()
- Patch 7 (new in v2)
Use existing device_set_node() helper.
- Patch 8 to 11 (new in v2)
Use fw_devlink_set_device() in existing code.
- Patch 12 (6 in v1)
Use fw_devlink_add_device()
- Patch 13 (7 in v1)
No changes
- Patch 14 (8 in v1)
Update commit log
Use 'physdev' instead of 'supplier'
Minor fixes in i2c_get_adapter_physdev() kdoc
- Patch 15 and 16 (9 and 10 in v1)
Use 'physdev' instead of 'supplier' (commit log, title and code)
- Patch 17 (11 in v2)
Enable fw_devlink on x86 only if PCI_DYNAMIC_OF_NODES is enabled.
Rework commit log.
- Patch 18, 19 and 20 (12, 13 and 14 in v1)
No changes
- Patch 21 (new in v2)
Split dtso in dtsi/dtso
- Patch 22 (new in v2)
Rename lan966x_pci.dtso using the specific board name
- Patch 23 (new in v2)
Improve the driver introducing board specific data to ease support
for other boards (avoid the direct dtbo reference in the function
loading the dtbo).
- Patch 24 (15 in v1)
Refactor due to dtso split in dtsi/dtso
- Patch 25 (new in v2)
Sort exist driver list in Kconfig help
- Patch 26 (16 in v1)
Keep alphanumeric order for new drivers added in Kconfig help
Herve Codina (26):
driver core: Rename get_dev_from_fwnode() wrapper to
get_device_from_fwnode()
driver core: Avoid warning when removing a device while its supplier
is unbinding
bus: simple-pm-bus: Remove child devices when the bus is unbound
bus: simple-pm-bus: Populate child nodes at probe
driver core: fw_devlink: Introduce fw_devlink_set_device()
drivers: core: Use fw_devlink_set_device()
pinctrl: cs42l43: Use fw_devlink_set_device()
cxl/test: Use device_set_node()
cxl/test: Use fw_devlink_set_device()
PCI: of: Use fw_devlink_set_device()
PCI: of: Set fwnode device of newly created PCI device nodes
PCI: of: Remove fwnode_dev_initialized() call for a PCI root bridge
node
i2c: core: Introduce i2c_get_adapter_physdev()
i2c: mux: Set adapter physical device
i2c: mux: Create missing devlink between mux and adapter physical
device
of: property: Allow fw_devlink device-tree on x86
clk: lan966x: Add MCHP_LAN966X_PCI dependency
i2c: busses: at91: Add MCHP_LAN966X_PCI dependency
misc: lan966x_pci: Fix dtso nodes ordering
misc: lan966x_pci: Split dtso in dtsi/dtso
misc: lan966x_pci: Rename lan966x_pci.dtso to
lan966x_evb_lan9662_nic.dtso
PCI: Add Microchip LAN9662 PCI Device ID
misc: lan966x_pci: Introduce board specific data
misc: lan966x_pci: Add dtsi/dtso nodes in order to support SFPs
misc: lan966x_pci: Sort the drivers list in Kconfig help
misc: lan966x_pci: Add drivers needed to support SFPs in Kconfig help
Saravana Kannan (2):
Revert "treewide: Fix probing of devices in DT overlays"
of: dynamic: Fix overlayed devices not probing because of fw_devlink
MAINTAINERS | 3 +-
drivers/base/core.c | 108 ++++++++++---
drivers/bus/imx-weim.c | 6 -
drivers/bus/simple-pm-bus.c | 24 +--
drivers/clk/Kconfig | 2 +-
drivers/i2c/busses/Kconfig | 2 +-
drivers/i2c/i2c-core-base.c | 16 ++
drivers/i2c/i2c-core-of.c | 5 -
drivers/i2c/i2c-mux.c | 26 ++++
drivers/misc/Kconfig | 11 +-
drivers/misc/Makefile | 2 +-
drivers/misc/lan966x_evb_lan9662_nic.dtso | 167 ++++++++++++++++++++
drivers/misc/lan966x_pci.c | 30 +++-
drivers/misc/lan966x_pci.dtsi | 172 +++++++++++++++++++++
drivers/misc/lan966x_pci.dtso | 177 ----------------------
drivers/of/dynamic.c | 1 -
drivers/of/overlay.c | 15 ++
drivers/of/platform.c | 5 -
drivers/of/property.c | 26 +++-
drivers/pci/of.c | 10 +-
drivers/pci/quirks.c | 2 +-
drivers/pinctrl/cirrus/pinctrl-cs42l43.c | 2 +-
drivers/pmdomain/core.c | 4 +-
drivers/spi/spi.c | 5 -
include/linux/device.h | 2 +-
include/linux/fwnode.h | 7 +
include/linux/i2c.h | 3 +
include/linux/pci_ids.h | 1 +
tools/testing/cxl/test/cxl.c | 4 +-
29 files changed, 584 insertions(+), 254 deletions(-)
create mode 100644 drivers/misc/lan966x_evb_lan9662_nic.dtso
create mode 100644 drivers/misc/lan966x_pci.dtsi
delete mode 100644 drivers/misc/lan966x_pci.dtso
--
2.53.0
| null | null | null | [PATCH v5 00/28] lan966x pci device: Add support for SFPs | On Fri, Feb 27, 2026 at 02:54:06PM +0100, Herve Codina wrote:
Reviewed-by: Charles Keepax <ckeepax@opensource.cirrus.com>
Thanks,
Charles | {
"author": "Charles Keepax <ckeepax@opensource.cirrus.com>",
"date": "Fri, 27 Feb 2026 15:58:15 +0000",
"is_openbsd": false,
"thread_id": "aaG%2FF3JKIpsefS5f@opensource.cirrus.com.mbox.gz"
} |
lkml_critique | lkml | There are some clocks where the rounding is managed by the hardware, and
the determine_rate() clk ops is just a noop that simply returns 0. Based
on discussions with Stephen at Linux Plumbers Conference, he suggested
adding a flag for this particular case. So let's add a new flag, and
update the clk core so that the determine_rate() clk op is not required
when this flag is set.
This series adds the flag, some kunit tests, and updates all of the
relevant drivers under drivers/clk to use the new flag.
Once this is merged, and in Linus's tree, I can update the few remaining
clk drivers that are outside of drivers/clk via those subsystems at a
later time.
Merge Strategy
--------------
All of this needs to be directly merged by Stephen as one series into
his tree. Subsystem maintainers: please leave a Reviewed-by or Acked-by.
To reduce the noise, I am only CCing people on their respective drivers.
Note this series depends on 3 previously-posted patches in this git pull
to Stephen for v7.1.
https://lore.kernel.org/linux-clk/aZuK4-QJCXUeSxtL@redhat.com/
Hopefully I set the depeendencies up correctly in b4.
Signed-off-by: Brian Masney <bmasney@redhat.com>
---
Brian Masney (13):
clk: add new flag CLK_ROUNDING_FW_MANAGED
clk: test: add test suite for CLK_ROUNDING_FW_MANAGED flag
clk: rp1: drop determine_rate op and use CLK_ROUNDING_FW_MANAGED flag
clk: scpi: drop determine_rate op and use CLK_ROUNDING_FW_MANAGED flag
clk: hisilicon: hi3660-stub: drop determine_rate op and use CLK_ROUNDING_FW_MANAGED flag
clk: imx: scu: drop redundant init.ops variable assignment
clk: imx: scu: drop determine_rate op and use CLK_ROUNDING_FW_MANAGED flag
clk: qcom: rpm: drop determine_rate op and use CLK_ROUNDING_FW_MANAGED flag
clk: qcom: rpmh: drop determine_rate op and use CLK_ROUNDING_FW_MANAGED flag
clk: qcom: smd-rpm: drop determine_rate op and use CLK_ROUNDING_FW_MANAGED flag
clk: renesas: rzg2l-cpg: drop determine_rate op and use CLK_ROUNDING_FW_MANAGED flag
clk: samsung: acpm: drop determine_rate op and use CLK_ROUNDING_FW_MANAGED flag
clk: sprd: drop determine_rate op and use CLK_ROUNDING_FW_MANAGED flag
drivers/clk/clk-rp1.c | 11 +----
drivers/clk/clk-scpi.c | 14 +-----
drivers/clk/clk.c | 24 ++++++++--
drivers/clk/clk_test.c | 85 +++++++++++++++++++++++++++++++++
drivers/clk/hisilicon/clk-hi3660-stub.c | 14 +-----
drivers/clk/imx/clk-scu.c | 23 +--------
drivers/clk/qcom/clk-rpm.c | 16 ++-----
drivers/clk/qcom/clk-rpmh.c | 8 +---
drivers/clk/qcom/clk-smd-rpm.c | 15 +-----
drivers/clk/renesas/rzg2l-cpg.c | 9 +---
drivers/clk/samsung/clk-acpm.c | 14 +-----
drivers/clk/sprd/pll.c | 7 ---
drivers/clk/sprd/pll.h | 2 +-
include/linux/clk-provider.h | 2 +
14 files changed, 123 insertions(+), 121 deletions(-)
---
base-commit: 7d6661873f6b54c75195780a40d66bad3d482d8f
change-id: 20260226-clk-det-rate-fw-managed-4b8d061f31be
prerequisite-patch-id: 59198edc95aca82a29327137ad2af82ec13295b6
prerequisite-patch-id: 8932e170649711d7a80c57784033a37faadd519b
prerequisite-patch-id: 91c7b1851c5d77e504c49ce6bf14b3f8b84e826a
Best regards,
--
Brian Masney <bmasney@redhat.com>
| null | null | null | [PATCH 00/13] clk: add new flag CLK_ROUNDING_FW_MANAGED | Hi Brian,
We plan to fill out the determine rate later, as it can return error.
I guess, maybe we could use CLK_ROUNDING_NOOP, till we have proper .determine_rate() for this driver???
Cheers,
Biju | {
"author": "Biju Das <biju.das.jz@bp.renesas.com>",
"date": "Fri, 27 Feb 2026 15:57:28 +0000",
"is_openbsd": false,
"thread_id": "TY3PR01MB1134626136D6AE06C9A699F798673A@TY3PR01MB11346.jpnprd01.prod.outlook.com.mbox.gz"
} |
lkml_critique | lkml | There are some clocks where the rounding is managed by the hardware, and
the determine_rate() clk ops is just a noop that simply returns 0. Based
on discussions with Stephen at Linux Plumbers Conference, he suggested
adding a flag for this particular case. So let's add a new flag, and
update the clk core so that the determine_rate() clk op is not required
when this flag is set.
This series adds the flag, some kunit tests, and updates all of the
relevant drivers under drivers/clk to use the new flag.
Once this is merged, and in Linus's tree, I can update the few remaining
clk drivers that are outside of drivers/clk via those subsystems at a
later time.
Merge Strategy
--------------
All of this needs to be directly merged by Stephen as one series into
his tree. Subsystem maintainers: please leave a Reviewed-by or Acked-by.
To reduce the noise, I am only CCing people on their respective drivers.
Note this series depends on 3 previously-posted patches in this git pull
to Stephen for v7.1.
https://lore.kernel.org/linux-clk/aZuK4-QJCXUeSxtL@redhat.com/
Hopefully I set the depeendencies up correctly in b4.
Signed-off-by: Brian Masney <bmasney@redhat.com>
---
Brian Masney (13):
clk: add new flag CLK_ROUNDING_FW_MANAGED
clk: test: add test suite for CLK_ROUNDING_FW_MANAGED flag
clk: rp1: drop determine_rate op and use CLK_ROUNDING_FW_MANAGED flag
clk: scpi: drop determine_rate op and use CLK_ROUNDING_FW_MANAGED flag
clk: hisilicon: hi3660-stub: drop determine_rate op and use CLK_ROUNDING_FW_MANAGED flag
clk: imx: scu: drop redundant init.ops variable assignment
clk: imx: scu: drop determine_rate op and use CLK_ROUNDING_FW_MANAGED flag
clk: qcom: rpm: drop determine_rate op and use CLK_ROUNDING_FW_MANAGED flag
clk: qcom: rpmh: drop determine_rate op and use CLK_ROUNDING_FW_MANAGED flag
clk: qcom: smd-rpm: drop determine_rate op and use CLK_ROUNDING_FW_MANAGED flag
clk: renesas: rzg2l-cpg: drop determine_rate op and use CLK_ROUNDING_FW_MANAGED flag
clk: samsung: acpm: drop determine_rate op and use CLK_ROUNDING_FW_MANAGED flag
clk: sprd: drop determine_rate op and use CLK_ROUNDING_FW_MANAGED flag
drivers/clk/clk-rp1.c | 11 +----
drivers/clk/clk-scpi.c | 14 +-----
drivers/clk/clk.c | 24 ++++++++--
drivers/clk/clk_test.c | 85 +++++++++++++++++++++++++++++++++
drivers/clk/hisilicon/clk-hi3660-stub.c | 14 +-----
drivers/clk/imx/clk-scu.c | 23 +--------
drivers/clk/qcom/clk-rpm.c | 16 ++-----
drivers/clk/qcom/clk-rpmh.c | 8 +---
drivers/clk/qcom/clk-smd-rpm.c | 15 +-----
drivers/clk/renesas/rzg2l-cpg.c | 9 +---
drivers/clk/samsung/clk-acpm.c | 14 +-----
drivers/clk/sprd/pll.c | 7 ---
drivers/clk/sprd/pll.h | 2 +-
include/linux/clk-provider.h | 2 +
14 files changed, 123 insertions(+), 121 deletions(-)
---
base-commit: 7d6661873f6b54c75195780a40d66bad3d482d8f
change-id: 20260226-clk-det-rate-fw-managed-4b8d061f31be
prerequisite-patch-id: 59198edc95aca82a29327137ad2af82ec13295b6
prerequisite-patch-id: 8932e170649711d7a80c57784033a37faadd519b
prerequisite-patch-id: 91c7b1851c5d77e504c49ce6bf14b3f8b84e826a
Best regards,
--
Brian Masney <bmasney@redhat.com>
| null | null | null | [PATCH 00/13] clk: add new flag CLK_ROUNDING_FW_MANAGED | On Fri, Feb 27, 2026 at 03:57:28PM +0000, Biju Das wrote:
OK, if you are planning to fill out the determine rate, then I'll just
skip over this driver to avoid the code churn.
Brian | {
"author": "Brian Masney <bmasney@redhat.com>",
"date": "Fri, 27 Feb 2026 11:01:12 -0500",
"is_openbsd": false,
"thread_id": "TY3PR01MB1134626136D6AE06C9A699F798673A@TY3PR01MB11346.jpnprd01.prod.outlook.com.mbox.gz"
} |
lkml_critique | netdev | From: Jiayuan Chen <jiayuan.chen@shopee.com>
bond_rr_gen_slave_id() dereferences bond->rr_tx_counter without a NULL
check. rr_tx_counter is a per-CPU counter only allocated in bond_open()
when the bond mode is round-robin. If the bond device was never brought
up, rr_tx_counter remains NULL, causing a null-ptr-deref.
The XDP redirect path can reach this code even when the bond is not up:
bpf_master_redirect_enabled_key is a global static key, so when any bond
device has native XDP attached, the XDP_TX -> xdp_master_redirect()
interception is enabled for all bond slaves system-wide. This allows the
path xdp_master_redirect() -> bond_xdp_get_xmit_slave() ->
bond_xdp_xmit_roundrobin_slave_get() -> bond_rr_gen_slave_id() to be
reached on a bond that was never opened.
The normal TX path (bond_xmit_roundrobin) is not affected because TX
requires the bond to be UP, which guarantees rr_tx_counter is allocated.
However, bond_xmit_get_slave() (ndo_get_xmit_slave) has the same code
pattern via bond_xmit_roundrobin_slave_get() and could theoretically
hit the same issue.
Fix this by introducing bond_create_init() to allocate rr_tx_counter
unconditionally at device creation time. It is called from both
bond_create() and bond_newlink() before register_netdevice(), and
returns -ENOMEM on failure so callers can propagate the error cleanly.
bond_setup() is not suitable for this allocation as it is a void
callback with no error return path. The conditional allocation in
bond_open() is removed. Since bond_destructor() already unconditionally
calls free_percpu(bond->rr_tx_counter), the lifecycle is clean:
allocate at creation, free at destruction.
Fixes: 879af96ffd72 ("net, core: Add support for XDP redirection to slave device")
Reported-by: syzbot+80e046b8da2820b6ba73@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/698f84c6.a70a0220.2c38d7.00cc.GAE@google.com/T/
Signed-off-by: Jiayuan Chen <jiayuan.chen@shopee.com>
---
drivers/net/bonding/bond_main.c | 18 ++++++++++++------
drivers/net/bonding/bond_netlink.c | 4 ++++
include/net/bonding.h | 1 +
3 files changed, 17 insertions(+), 6 deletions(-)
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 78cff904cdc3..806034dc301f 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -4273,18 +4273,18 @@ void bond_work_cancel_all(struct bonding *bond)
cancel_delayed_work_sync(&bond->peer_notify_work);
}
+int bond_create_init(struct bonding *bond)
+{
+ bond->rr_tx_counter = alloc_percpu(u32);
+ return bond->rr_tx_counter ? 0 : -ENOMEM;
+}
+
static int bond_open(struct net_device *bond_dev)
{
struct bonding *bond = netdev_priv(bond_dev);
struct list_head *iter;
struct slave *slave;
- if (BOND_MODE(bond) == BOND_MODE_ROUNDROBIN && !bond->rr_tx_counter) {
- bond->rr_tx_counter = alloc_percpu(u32);
- if (!bond->rr_tx_counter)
- return -ENOMEM;
- }
-
/* reset slave->backup and slave->inactive */
if (bond_has_slaves(bond)) {
bond_for_each_slave(bond, slave, iter) {
@@ -6458,6 +6458,12 @@ int bond_create(struct net *net, const char *name)
dev_net_set(bond_dev, net);
bond_dev->rtnl_link_ops = &bond_link_ops;
+ res = bond_create_init(bond);
+ if (res) {
+ free_netdev(bond_dev);
+ goto out;
+ }
+
res = register_netdevice(bond_dev);
if (res < 0) {
free_netdev(bond_dev);
diff --git a/drivers/net/bonding/bond_netlink.c b/drivers/net/bonding/bond_netlink.c
index 286f11c517f7..91595df85f06 100644
--- a/drivers/net/bonding/bond_netlink.c
+++ b/drivers/net/bonding/bond_netlink.c
@@ -598,6 +598,10 @@ static int bond_newlink(struct net_device *bond_dev,
struct nlattr **tb = params->tb;
int err;
+ err = bond_create_init(bond);
+ if (err)
+ return err;
+
err = register_netdevice(bond_dev);
if (err)
return err;
diff --git a/include/net/bonding.h b/include/net/bonding.h
index 4ad5521e7731..dac4725f3ac0 100644
--- a/include/net/bonding.h
+++ b/include/net/bonding.h
@@ -714,6 +714,7 @@ void bond_slave_arr_work_rearm(struct bonding *bond, unsigned long delay);
void bond_peer_notify_work_rearm(struct bonding *bond, unsigned long delay);
void bond_work_init_all(struct bonding *bond);
void bond_work_cancel_all(struct bonding *bond);
+int bond_create_init(struct bonding *bond);
#ifdef CONFIG_PROC_FS
void bond_create_proc_entry(struct bonding *bond);
--
2.43.0
| null | null | null | [PATCH net v2 1/2] bonding: fix null-ptr-deref in bond_rr_gen_slave_id() | syzkaller reported a kernel panic [1] with the following crash stack:
Call Trace:
BUG: unable to handle page fault for address: ffff8ebd08580000
PF: supervisor write access in kernel mode
PF: error_code(0x0002) - not-present page
PGD 11f201067 P4D 11f201067 PUD 0
Oops: Oops: 0002 [#1] SMP PTI
CPU: 2 UID: 0 PID: 451 Comm: test_progs Not tainted 6.19.0+ #161 PREEMPT_RT
RIP: 0010:bond_rr_gen_slave_id+0x90/0xd0
RSP: 0018:ffffd3f4815f3448 EFLAGS: 00010246
RAX: 0000000000000001 RBX: 0000000000000001 RCX: ffff8ebc8728b17e
RDX: 0000000000000000 RSI: ffffd3f4815f3538 RDI: ffff8ebc8abcce40
RBP: ffffd3f4815f3460 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000000 R12: ffffd3f4815f3538
R13: ffff8ebc8abcce40 R14: ffff8ebc8728b17f R15: ffff8ebc8728b170
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffff8ebd08580000 CR3: 000000010a808006 CR4: 0000000000770ef0
PKRU: 55555554
Call Trace:
<TASK>
bond_xdp_get_xmit_slave+0xc0/0x240
xdp_master_redirect+0x74/0xc0
bpf_prog_run_generic_xdp+0x2f2/0x3f0
do_xdp_generic+0x1fd/0x3d0
__netif_receive_skb_core.constprop.0+0x30d/0x1220
__netif_receive_skb_list_core+0xfc/0x250
netif_receive_skb_list_internal+0x20c/0x3d0
? eth_type_trans+0x137/0x160
netif_receive_skb_list+0x25/0x140
xdp_test_run_batch.constprop.0+0x65b/0x6e0
bpf_test_run_xdp_live+0x1ec/0x3b0
bpf_prog_test_run_xdp+0x49d/0x6e0
__sys_bpf+0x446/0x27b0
__x64_sys_bpf+0x1a/0x30
x64_sys_call+0x146c/0x26e0
do_syscall_64+0xd3/0x1510
entry_SYSCALL_64_after_hwframe+0x76/0x7e
Problem Description
bond_rr_gen_slave_id() dereferences bond->rr_tx_counter without a NULL
check. rr_tx_counter is a per-CPU counter only allocated in bond_open()
when the bond mode is round-robin. If the bond device was never brought
up, rr_tx_counter remains NULL.
The XDP redirect path can reach this code even when the bond is not up:
bpf_master_redirect_enabled_key is a global static key, so when any bond
device has native XDP attached, the XDP_TX -> xdp_master_redirect()
interception is enabled for all bond slaves system-wide.
Solution
Patch 1: Add rr_tx_counter initialization in bond_create_init().
Patch 2: Add a selftest that reproduces the above scenario.
Changes since v1:
https://lore.kernel.org/netdev/20260224112545.37888-1-jiayuan.chen@linux.dev/T/#t
- Moved the guard for NULL rr_tx_counter from xdp_master_redirect()
into the bonding subsystem itself
(Suggested by Sebastian Andrzej Siewior <bigeasy@linutronix.de>)
[1] https://syzkaller.appspot.com/bug?extid=80e046b8da2820b6ba73
Jiayuan Chen (2):
bonding: fix null-ptr-deref in bond_rr_gen_slave_id()
selftests/bpf: add test for xdp_master_redirect with bond not up
drivers/net/bonding/bond_main.c | 18 ++--
drivers/net/bonding/bond_netlink.c | 4 +
include/net/bonding.h | 1 +
.../selftests/bpf/prog_tests/xdp_bonding.c | 101 +++++++++++++++++-
4 files changed, 116 insertions(+), 8 deletions(-)
--
2.43.0 | {
"author": "Jiayuan Chen <jiayuan.chen@linux.dev>",
"date": "Fri, 27 Feb 2026 17:22:48 +0800",
"is_openbsd": false,
"thread_id": "faed6bde8e7a96021bc9d55176b764c592f6ce08@linux.dev.mbox.gz"
} |
lkml_critique | netdev | From: Jiayuan Chen <jiayuan.chen@shopee.com>
bond_rr_gen_slave_id() dereferences bond->rr_tx_counter without a NULL
check. rr_tx_counter is a per-CPU counter only allocated in bond_open()
when the bond mode is round-robin. If the bond device was never brought
up, rr_tx_counter remains NULL, causing a null-ptr-deref.
The XDP redirect path can reach this code even when the bond is not up:
bpf_master_redirect_enabled_key is a global static key, so when any bond
device has native XDP attached, the XDP_TX -> xdp_master_redirect()
interception is enabled for all bond slaves system-wide. This allows the
path xdp_master_redirect() -> bond_xdp_get_xmit_slave() ->
bond_xdp_xmit_roundrobin_slave_get() -> bond_rr_gen_slave_id() to be
reached on a bond that was never opened.
The normal TX path (bond_xmit_roundrobin) is not affected because TX
requires the bond to be UP, which guarantees rr_tx_counter is allocated.
However, bond_xmit_get_slave() (ndo_get_xmit_slave) has the same code
pattern via bond_xmit_roundrobin_slave_get() and could theoretically
hit the same issue.
Fix this by introducing bond_create_init() to allocate rr_tx_counter
unconditionally at device creation time. It is called from both
bond_create() and bond_newlink() before register_netdevice(), and
returns -ENOMEM on failure so callers can propagate the error cleanly.
bond_setup() is not suitable for this allocation as it is a void
callback with no error return path. The conditional allocation in
bond_open() is removed. Since bond_destructor() already unconditionally
calls free_percpu(bond->rr_tx_counter), the lifecycle is clean:
allocate at creation, free at destruction.
Fixes: 879af96ffd72 ("net, core: Add support for XDP redirection to slave device")
Reported-by: syzbot+80e046b8da2820b6ba73@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/698f84c6.a70a0220.2c38d7.00cc.GAE@google.com/T/
Signed-off-by: Jiayuan Chen <jiayuan.chen@shopee.com>
---
drivers/net/bonding/bond_main.c | 18 ++++++++++++------
drivers/net/bonding/bond_netlink.c | 4 ++++
include/net/bonding.h | 1 +
3 files changed, 17 insertions(+), 6 deletions(-)
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 78cff904cdc3..806034dc301f 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -4273,18 +4273,18 @@ void bond_work_cancel_all(struct bonding *bond)
cancel_delayed_work_sync(&bond->peer_notify_work);
}
+int bond_create_init(struct bonding *bond)
+{
+ bond->rr_tx_counter = alloc_percpu(u32);
+ return bond->rr_tx_counter ? 0 : -ENOMEM;
+}
+
static int bond_open(struct net_device *bond_dev)
{
struct bonding *bond = netdev_priv(bond_dev);
struct list_head *iter;
struct slave *slave;
- if (BOND_MODE(bond) == BOND_MODE_ROUNDROBIN && !bond->rr_tx_counter) {
- bond->rr_tx_counter = alloc_percpu(u32);
- if (!bond->rr_tx_counter)
- return -ENOMEM;
- }
-
/* reset slave->backup and slave->inactive */
if (bond_has_slaves(bond)) {
bond_for_each_slave(bond, slave, iter) {
@@ -6458,6 +6458,12 @@ int bond_create(struct net *net, const char *name)
dev_net_set(bond_dev, net);
bond_dev->rtnl_link_ops = &bond_link_ops;
+ res = bond_create_init(bond);
+ if (res) {
+ free_netdev(bond_dev);
+ goto out;
+ }
+
res = register_netdevice(bond_dev);
if (res < 0) {
free_netdev(bond_dev);
diff --git a/drivers/net/bonding/bond_netlink.c b/drivers/net/bonding/bond_netlink.c
index 286f11c517f7..91595df85f06 100644
--- a/drivers/net/bonding/bond_netlink.c
+++ b/drivers/net/bonding/bond_netlink.c
@@ -598,6 +598,10 @@ static int bond_newlink(struct net_device *bond_dev,
struct nlattr **tb = params->tb;
int err;
+ err = bond_create_init(bond);
+ if (err)
+ return err;
+
err = register_netdevice(bond_dev);
if (err)
return err;
diff --git a/include/net/bonding.h b/include/net/bonding.h
index 4ad5521e7731..dac4725f3ac0 100644
--- a/include/net/bonding.h
+++ b/include/net/bonding.h
@@ -714,6 +714,7 @@ void bond_slave_arr_work_rearm(struct bonding *bond, unsigned long delay);
void bond_peer_notify_work_rearm(struct bonding *bond, unsigned long delay);
void bond_work_init_all(struct bonding *bond);
void bond_work_cancel_all(struct bonding *bond);
+int bond_create_init(struct bonding *bond);
#ifdef CONFIG_PROC_FS
void bond_create_proc_entry(struct bonding *bond);
--
2.43.0
| null | null | null | [PATCH net v2 1/2] bonding: fix null-ptr-deref in bond_rr_gen_slave_id() | From: Jiayuan Chen <jiayuan.chen@shopee.com>
Add a selftest that reproduces the null-ptr-deref in
bond_rr_gen_slave_id() when XDP redirect targets a bond device in
round-robin mode that was never brought up. The test verifies the fix
by ensuring no crash occurs.
Test setup:
- bond0: active-backup mode, UP, with native XDP (enables
bpf_master_redirect_enabled_key globally)
- bond1: round-robin mode, never UP
- veth1: slave of bond1, with generic XDP (XDP_TX)
- BPF_PROG_TEST_RUN with live frames triggers the redirect path
Signed-off-by: Jiayuan Chen <jiayuan.chen@shopee.com>
---
.../selftests/bpf/prog_tests/xdp_bonding.c | 101 +++++++++++++++++-
1 file changed, 99 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_bonding.c b/tools/testing/selftests/bpf/prog_tests/xdp_bonding.c
index fb952703653e..a5b15e464018 100644
--- a/tools/testing/selftests/bpf/prog_tests/xdp_bonding.c
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_bonding.c
@@ -191,13 +191,18 @@ static int bonding_setup(struct skeletons *skeletons, int mode, int xmit_policy,
return -1;
}
-static void bonding_cleanup(struct skeletons *skeletons)
+static void link_cleanup(struct skeletons *skeletons)
{
- restore_root_netns();
while (skeletons->nlinks) {
skeletons->nlinks--;
bpf_link__destroy(skeletons->links[skeletons->nlinks]);
}
+}
+
+static void bonding_cleanup(struct skeletons *skeletons)
+{
+ restore_root_netns();
+ link_cleanup(skeletons);
ASSERT_OK(system("ip link delete bond1"), "delete bond1");
ASSERT_OK(system("ip link delete veth1_1"), "delete veth1_1");
ASSERT_OK(system("ip link delete veth1_2"), "delete veth1_2");
@@ -493,6 +498,95 @@ static void test_xdp_bonding_nested(struct skeletons *skeletons)
system("ip link del bond_nest2");
}
+/*
+ * Test that XDP redirect via xdp_master_redirect() does not crash when
+ * the bond master device is not up. When bond is in round-robin mode but
+ * never opened, rr_tx_counter is NULL.
+ */
+static void test_xdp_bonding_redirect_no_up(struct skeletons *skeletons)
+{
+ struct nstoken *nstoken = NULL;
+ int xdp_pass_fd, xdp_tx_fd;
+ int veth1_ifindex;
+ int err;
+ char pkt[ETH_HLEN + 1];
+ struct xdp_md ctx_in = {};
+
+ DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts,
+ .data_in = &pkt,
+ .data_size_in = sizeof(pkt),
+ .ctx_in = &ctx_in,
+ .ctx_size_in = sizeof(ctx_in),
+ .flags = BPF_F_TEST_XDP_LIVE_FRAMES,
+ .repeat = 1,
+ .batch_size = 1,
+ );
+
+ /* We can't use bonding_setup() because bond will be active */
+ SYS(out, "ip netns add ns_rr_no_up");
+ nstoken = open_netns("ns_rr_no_up");
+ if (!ASSERT_OK_PTR(nstoken, "open ns_rr_no_up"))
+ goto out;
+
+ /* bond0: active-backup, UP with slave veth0.
+ * Attaching native XDP to bond0 enables bpf_master_redirect_enabled_key
+ * globally.
+ */
+ SYS(out, "ip link add bond0 type bond mode active-backup");
+ SYS(out, "ip link add veth0 type veth peer name veth0p");
+ SYS(out, "ip link set veth0 master bond0");
+ SYS(out, "ip link set bond0 up");
+ SYS(out, "ip link set veth0p up");
+
+ /* bond1: round-robin, never UP -> rr_tx_counter stays NULL */
+ SYS(out, "ip link add bond1 type bond mode balance-rr");
+ SYS(out, "ip link add veth1 type veth peer name veth1p");
+ SYS(out, "ip link set veth1 master bond1");
+
+ veth1_ifindex = if_nametoindex("veth1");
+ if (!ASSERT_GT(veth1_ifindex, 0, "veth1_ifindex"))
+ goto out;
+
+ /* Attach native XDP to bond0 -> enables global redirect key */
+ if (xdp_attach(skeletons, skeletons->xdp_tx->progs.xdp_tx, "bond0"))
+ goto out;
+
+ /* Attach generic XDP (XDP_TX) to veth1.
+ * When packets arrive at veth1 via netif_receive_skb, do_xdp_generic()
+ * runs this program. XDP_TX + bond slave triggers xdp_master_redirect().
+ */
+ xdp_tx_fd = bpf_program__fd(skeletons->xdp_tx->progs.xdp_tx);
+ if (!ASSERT_GE(xdp_tx_fd, 0, "xdp_tx prog_fd"))
+ goto out;
+
+ err = bpf_xdp_attach(veth1_ifindex, xdp_tx_fd,
+ XDP_FLAGS_SKB_MODE, NULL);
+ if (!ASSERT_OK(err, "attach generic XDP to veth1"))
+ goto out;
+
+ /* Run BPF_PROG_TEST_RUN with XDP_PASS live frames on veth1.
+ * XDP_PASS frames become SKBs with skb->dev = veth1, entering
+ * netif_receive_skb -> do_xdp_generic -> xdp_master_redirect.
+ * Without the fix, bond_rr_gen_slave_id() dereferences NULL
+ * rr_tx_counter and crashes.
+ */
+ xdp_pass_fd = bpf_program__fd(skeletons->xdp_dummy->progs.xdp_dummy_prog);
+ if (!ASSERT_GE(xdp_pass_fd, 0, "xdp_pass prog_fd"))
+ goto out;
+
+ memset(pkt, 0, sizeof(pkt));
+ ctx_in.data_end = sizeof(pkt);
+ ctx_in.ingress_ifindex = veth1_ifindex;
+
+ err = bpf_prog_test_run_opts(xdp_pass_fd, &opts);
+ ASSERT_OK(err, "xdp_pass test_run should not crash");
+
+out:
+ link_cleanup(skeletons);
+ close_netns(nstoken);
+ SYS_NOFAIL("ip netns del ns_rr_no_up");
+}
+
static void test_xdp_bonding_features(struct skeletons *skeletons)
{
LIBBPF_OPTS(bpf_xdp_query_opts, query_opts);
@@ -680,6 +774,9 @@ void serial_test_xdp_bonding(void)
if (test__start_subtest("xdp_bonding_redirect_multi"))
test_xdp_bonding_redirect_multi(&skeletons);
+ if (test__start_subtest("xdp_bonding_redirect_no_up"))
+ test_xdp_bonding_redirect_no_up(&skeletons);
+
out:
xdp_dummy__destroy(skeletons.xdp_dummy);
xdp_tx__destroy(skeletons.xdp_tx);
--
2.43.0 | {
"author": "Jiayuan Chen <jiayuan.chen@linux.dev>",
"date": "Fri, 27 Feb 2026 17:22:50 +0800",
"is_openbsd": false,
"thread_id": "faed6bde8e7a96021bc9d55176b764c592f6ce08@linux.dev.mbox.gz"
} |
lkml_critique | netdev | From: Jiayuan Chen <jiayuan.chen@shopee.com>
bond_rr_gen_slave_id() dereferences bond->rr_tx_counter without a NULL
check. rr_tx_counter is a per-CPU counter only allocated in bond_open()
when the bond mode is round-robin. If the bond device was never brought
up, rr_tx_counter remains NULL, causing a null-ptr-deref.
The XDP redirect path can reach this code even when the bond is not up:
bpf_master_redirect_enabled_key is a global static key, so when any bond
device has native XDP attached, the XDP_TX -> xdp_master_redirect()
interception is enabled for all bond slaves system-wide. This allows the
path xdp_master_redirect() -> bond_xdp_get_xmit_slave() ->
bond_xdp_xmit_roundrobin_slave_get() -> bond_rr_gen_slave_id() to be
reached on a bond that was never opened.
The normal TX path (bond_xmit_roundrobin) is not affected because TX
requires the bond to be UP, which guarantees rr_tx_counter is allocated.
However, bond_xmit_get_slave() (ndo_get_xmit_slave) has the same code
pattern via bond_xmit_roundrobin_slave_get() and could theoretically
hit the same issue.
Fix this by introducing bond_create_init() to allocate rr_tx_counter
unconditionally at device creation time. It is called from both
bond_create() and bond_newlink() before register_netdevice(), and
returns -ENOMEM on failure so callers can propagate the error cleanly.
bond_setup() is not suitable for this allocation as it is a void
callback with no error return path. The conditional allocation in
bond_open() is removed. Since bond_destructor() already unconditionally
calls free_percpu(bond->rr_tx_counter), the lifecycle is clean:
allocate at creation, free at destruction.
Fixes: 879af96ffd72 ("net, core: Add support for XDP redirection to slave device")
Reported-by: syzbot+80e046b8da2820b6ba73@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/698f84c6.a70a0220.2c38d7.00cc.GAE@google.com/T/
Signed-off-by: Jiayuan Chen <jiayuan.chen@shopee.com>
---
drivers/net/bonding/bond_main.c | 18 ++++++++++++------
drivers/net/bonding/bond_netlink.c | 4 ++++
include/net/bonding.h | 1 +
3 files changed, 17 insertions(+), 6 deletions(-)
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 78cff904cdc3..806034dc301f 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -4273,18 +4273,18 @@ void bond_work_cancel_all(struct bonding *bond)
cancel_delayed_work_sync(&bond->peer_notify_work);
}
+int bond_create_init(struct bonding *bond)
+{
+ bond->rr_tx_counter = alloc_percpu(u32);
+ return bond->rr_tx_counter ? 0 : -ENOMEM;
+}
+
static int bond_open(struct net_device *bond_dev)
{
struct bonding *bond = netdev_priv(bond_dev);
struct list_head *iter;
struct slave *slave;
- if (BOND_MODE(bond) == BOND_MODE_ROUNDROBIN && !bond->rr_tx_counter) {
- bond->rr_tx_counter = alloc_percpu(u32);
- if (!bond->rr_tx_counter)
- return -ENOMEM;
- }
-
/* reset slave->backup and slave->inactive */
if (bond_has_slaves(bond)) {
bond_for_each_slave(bond, slave, iter) {
@@ -6458,6 +6458,12 @@ int bond_create(struct net *net, const char *name)
dev_net_set(bond_dev, net);
bond_dev->rtnl_link_ops = &bond_link_ops;
+ res = bond_create_init(bond);
+ if (res) {
+ free_netdev(bond_dev);
+ goto out;
+ }
+
res = register_netdevice(bond_dev);
if (res < 0) {
free_netdev(bond_dev);
diff --git a/drivers/net/bonding/bond_netlink.c b/drivers/net/bonding/bond_netlink.c
index 286f11c517f7..91595df85f06 100644
--- a/drivers/net/bonding/bond_netlink.c
+++ b/drivers/net/bonding/bond_netlink.c
@@ -598,6 +598,10 @@ static int bond_newlink(struct net_device *bond_dev,
struct nlattr **tb = params->tb;
int err;
+ err = bond_create_init(bond);
+ if (err)
+ return err;
+
err = register_netdevice(bond_dev);
if (err)
return err;
diff --git a/include/net/bonding.h b/include/net/bonding.h
index 4ad5521e7731..dac4725f3ac0 100644
--- a/include/net/bonding.h
+++ b/include/net/bonding.h
@@ -714,6 +714,7 @@ void bond_slave_arr_work_rearm(struct bonding *bond, unsigned long delay);
void bond_peer_notify_work_rearm(struct bonding *bond, unsigned long delay);
void bond_work_init_all(struct bonding *bond);
void bond_work_cancel_all(struct bonding *bond);
+int bond_create_init(struct bonding *bond);
#ifdef CONFIG_PROC_FS
void bond_create_proc_entry(struct bonding *bond);
--
2.43.0
| null | null | null | [PATCH net v2 1/2] bonding: fix null-ptr-deref in bond_rr_gen_slave_id() | On 2026-02-27 17:22:49 [+0800], Jiayuan Chen wrote:
Wouldn't it be better to put into bond_init()?
I haven't look into it but when can the bond_mode be changed?
Sebastian | {
"author": "Sebastian Andrzej Siewior <bigeasy@linutronix.de>",
"date": "Fri, 27 Feb 2026 10:45:58 +0100",
"is_openbsd": false,
"thread_id": "faed6bde8e7a96021bc9d55176b764c592f6ce08@linux.dev.mbox.gz"
} |
lkml_critique | netdev | From: Jiayuan Chen <jiayuan.chen@shopee.com>
bond_rr_gen_slave_id() dereferences bond->rr_tx_counter without a NULL
check. rr_tx_counter is a per-CPU counter only allocated in bond_open()
when the bond mode is round-robin. If the bond device was never brought
up, rr_tx_counter remains NULL, causing a null-ptr-deref.
The XDP redirect path can reach this code even when the bond is not up:
bpf_master_redirect_enabled_key is a global static key, so when any bond
device has native XDP attached, the XDP_TX -> xdp_master_redirect()
interception is enabled for all bond slaves system-wide. This allows the
path xdp_master_redirect() -> bond_xdp_get_xmit_slave() ->
bond_xdp_xmit_roundrobin_slave_get() -> bond_rr_gen_slave_id() to be
reached on a bond that was never opened.
The normal TX path (bond_xmit_roundrobin) is not affected because TX
requires the bond to be UP, which guarantees rr_tx_counter is allocated.
However, bond_xmit_get_slave() (ndo_get_xmit_slave) has the same code
pattern via bond_xmit_roundrobin_slave_get() and could theoretically
hit the same issue.
Fix this by introducing bond_create_init() to allocate rr_tx_counter
unconditionally at device creation time. It is called from both
bond_create() and bond_newlink() before register_netdevice(), and
returns -ENOMEM on failure so callers can propagate the error cleanly.
bond_setup() is not suitable for this allocation as it is a void
callback with no error return path. The conditional allocation in
bond_open() is removed. Since bond_destructor() already unconditionally
calls free_percpu(bond->rr_tx_counter), the lifecycle is clean:
allocate at creation, free at destruction.
Fixes: 879af96ffd72 ("net, core: Add support for XDP redirection to slave device")
Reported-by: syzbot+80e046b8da2820b6ba73@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/698f84c6.a70a0220.2c38d7.00cc.GAE@google.com/T/
Signed-off-by: Jiayuan Chen <jiayuan.chen@shopee.com>
---
drivers/net/bonding/bond_main.c | 18 ++++++++++++------
drivers/net/bonding/bond_netlink.c | 4 ++++
include/net/bonding.h | 1 +
3 files changed, 17 insertions(+), 6 deletions(-)
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 78cff904cdc3..806034dc301f 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -4273,18 +4273,18 @@ void bond_work_cancel_all(struct bonding *bond)
cancel_delayed_work_sync(&bond->peer_notify_work);
}
+int bond_create_init(struct bonding *bond)
+{
+ bond->rr_tx_counter = alloc_percpu(u32);
+ return bond->rr_tx_counter ? 0 : -ENOMEM;
+}
+
static int bond_open(struct net_device *bond_dev)
{
struct bonding *bond = netdev_priv(bond_dev);
struct list_head *iter;
struct slave *slave;
- if (BOND_MODE(bond) == BOND_MODE_ROUNDROBIN && !bond->rr_tx_counter) {
- bond->rr_tx_counter = alloc_percpu(u32);
- if (!bond->rr_tx_counter)
- return -ENOMEM;
- }
-
/* reset slave->backup and slave->inactive */
if (bond_has_slaves(bond)) {
bond_for_each_slave(bond, slave, iter) {
@@ -6458,6 +6458,12 @@ int bond_create(struct net *net, const char *name)
dev_net_set(bond_dev, net);
bond_dev->rtnl_link_ops = &bond_link_ops;
+ res = bond_create_init(bond);
+ if (res) {
+ free_netdev(bond_dev);
+ goto out;
+ }
+
res = register_netdevice(bond_dev);
if (res < 0) {
free_netdev(bond_dev);
diff --git a/drivers/net/bonding/bond_netlink.c b/drivers/net/bonding/bond_netlink.c
index 286f11c517f7..91595df85f06 100644
--- a/drivers/net/bonding/bond_netlink.c
+++ b/drivers/net/bonding/bond_netlink.c
@@ -598,6 +598,10 @@ static int bond_newlink(struct net_device *bond_dev,
struct nlattr **tb = params->tb;
int err;
+ err = bond_create_init(bond);
+ if (err)
+ return err;
+
err = register_netdevice(bond_dev);
if (err)
return err;
diff --git a/include/net/bonding.h b/include/net/bonding.h
index 4ad5521e7731..dac4725f3ac0 100644
--- a/include/net/bonding.h
+++ b/include/net/bonding.h
@@ -714,6 +714,7 @@ void bond_slave_arr_work_rearm(struct bonding *bond, unsigned long delay);
void bond_peer_notify_work_rearm(struct bonding *bond, unsigned long delay);
void bond_work_init_all(struct bonding *bond);
void bond_work_cancel_all(struct bonding *bond);
+int bond_create_init(struct bonding *bond);
#ifdef CONFIG_PROC_FS
void bond_create_proc_entry(struct bonding *bond);
--
2.43.0
| null | null | null | [PATCH net v2 1/2] bonding: fix null-ptr-deref in bond_rr_gen_slave_id() | 2026/2/27 17:45, "Sebastian Andrzej Siewior" <bigeasy@linutronix.de mailto:bigeasy@linutronix.de?to=%22Sebastian%20Andrzej%20Siewior%22%20%3Cbigeasy%40linutronix.de%3E > wrote:
Thanks! bond_init() (ndo_init) is indeed a better fit, it is called by register_netdevice()
and naturally covers both bond_create() and bond_newlink() without a separate helper.
bond_mode can be changed after device creation via sysfs or netlink, a bond created
in active-backup mode can later be switched to round-robin, which means the allocation
must not be conditional on the mode at creation time. | {
"author": "\"Jiayuan Chen\" <jiayuan.chen@linux.dev>",
"date": "Fri, 27 Feb 2026 10:17:29 +0000",
"is_openbsd": false,
"thread_id": "faed6bde8e7a96021bc9d55176b764c592f6ce08@linux.dev.mbox.gz"
} |
lkml_critique | netdev | From: Jiayuan Chen <jiayuan.chen@shopee.com>
bond_rr_gen_slave_id() dereferences bond->rr_tx_counter without a NULL
check. rr_tx_counter is a per-CPU counter only allocated in bond_open()
when the bond mode is round-robin. If the bond device was never brought
up, rr_tx_counter remains NULL, causing a null-ptr-deref.
The XDP redirect path can reach this code even when the bond is not up:
bpf_master_redirect_enabled_key is a global static key, so when any bond
device has native XDP attached, the XDP_TX -> xdp_master_redirect()
interception is enabled for all bond slaves system-wide. This allows the
path xdp_master_redirect() -> bond_xdp_get_xmit_slave() ->
bond_xdp_xmit_roundrobin_slave_get() -> bond_rr_gen_slave_id() to be
reached on a bond that was never opened.
The normal TX path (bond_xmit_roundrobin) is not affected because TX
requires the bond to be UP, which guarantees rr_tx_counter is allocated.
However, bond_xmit_get_slave() (ndo_get_xmit_slave) has the same code
pattern via bond_xmit_roundrobin_slave_get() and could theoretically
hit the same issue.
Fix this by introducing bond_create_init() to allocate rr_tx_counter
unconditionally at device creation time. It is called from both
bond_create() and bond_newlink() before register_netdevice(), and
returns -ENOMEM on failure so callers can propagate the error cleanly.
bond_setup() is not suitable for this allocation as it is a void
callback with no error return path. The conditional allocation in
bond_open() is removed. Since bond_destructor() already unconditionally
calls free_percpu(bond->rr_tx_counter), the lifecycle is clean:
allocate at creation, free at destruction.
Fixes: 879af96ffd72 ("net, core: Add support for XDP redirection to slave device")
Reported-by: syzbot+80e046b8da2820b6ba73@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/698f84c6.a70a0220.2c38d7.00cc.GAE@google.com/T/
Signed-off-by: Jiayuan Chen <jiayuan.chen@shopee.com>
---
drivers/net/bonding/bond_main.c | 18 ++++++++++++------
drivers/net/bonding/bond_netlink.c | 4 ++++
include/net/bonding.h | 1 +
3 files changed, 17 insertions(+), 6 deletions(-)
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 78cff904cdc3..806034dc301f 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -4273,18 +4273,18 @@ void bond_work_cancel_all(struct bonding *bond)
cancel_delayed_work_sync(&bond->peer_notify_work);
}
+int bond_create_init(struct bonding *bond)
+{
+ bond->rr_tx_counter = alloc_percpu(u32);
+ return bond->rr_tx_counter ? 0 : -ENOMEM;
+}
+
static int bond_open(struct net_device *bond_dev)
{
struct bonding *bond = netdev_priv(bond_dev);
struct list_head *iter;
struct slave *slave;
- if (BOND_MODE(bond) == BOND_MODE_ROUNDROBIN && !bond->rr_tx_counter) {
- bond->rr_tx_counter = alloc_percpu(u32);
- if (!bond->rr_tx_counter)
- return -ENOMEM;
- }
-
/* reset slave->backup and slave->inactive */
if (bond_has_slaves(bond)) {
bond_for_each_slave(bond, slave, iter) {
@@ -6458,6 +6458,12 @@ int bond_create(struct net *net, const char *name)
dev_net_set(bond_dev, net);
bond_dev->rtnl_link_ops = &bond_link_ops;
+ res = bond_create_init(bond);
+ if (res) {
+ free_netdev(bond_dev);
+ goto out;
+ }
+
res = register_netdevice(bond_dev);
if (res < 0) {
free_netdev(bond_dev);
diff --git a/drivers/net/bonding/bond_netlink.c b/drivers/net/bonding/bond_netlink.c
index 286f11c517f7..91595df85f06 100644
--- a/drivers/net/bonding/bond_netlink.c
+++ b/drivers/net/bonding/bond_netlink.c
@@ -598,6 +598,10 @@ static int bond_newlink(struct net_device *bond_dev,
struct nlattr **tb = params->tb;
int err;
+ err = bond_create_init(bond);
+ if (err)
+ return err;
+
err = register_netdevice(bond_dev);
if (err)
return err;
diff --git a/include/net/bonding.h b/include/net/bonding.h
index 4ad5521e7731..dac4725f3ac0 100644
--- a/include/net/bonding.h
+++ b/include/net/bonding.h
@@ -714,6 +714,7 @@ void bond_slave_arr_work_rearm(struct bonding *bond, unsigned long delay);
void bond_peer_notify_work_rearm(struct bonding *bond, unsigned long delay);
void bond_work_init_all(struct bonding *bond);
void bond_work_cancel_all(struct bonding *bond);
+int bond_create_init(struct bonding *bond);
#ifdef CONFIG_PROC_FS
void bond_create_proc_entry(struct bonding *bond);
--
2.43.0
| null | null | null | [PATCH net v2 1/2] bonding: fix null-ptr-deref in bond_rr_gen_slave_id() | On 2026-02-27 10:17:29 [+0000], Jiayuan Chen wrote:
Must the device be in down state or can this be also changed while the
device is up?
Sebastian | {
"author": "Sebastian Andrzej Siewior <bigeasy@linutronix.de>",
"date": "Fri, 27 Feb 2026 11:21:49 +0100",
"is_openbsd": false,
"thread_id": "faed6bde8e7a96021bc9d55176b764c592f6ce08@linux.dev.mbox.gz"
} |
lkml_critique | netdev | From: Jiayuan Chen <jiayuan.chen@shopee.com>
bond_rr_gen_slave_id() dereferences bond->rr_tx_counter without a NULL
check. rr_tx_counter is a per-CPU counter only allocated in bond_open()
when the bond mode is round-robin. If the bond device was never brought
up, rr_tx_counter remains NULL, causing a null-ptr-deref.
The XDP redirect path can reach this code even when the bond is not up:
bpf_master_redirect_enabled_key is a global static key, so when any bond
device has native XDP attached, the XDP_TX -> xdp_master_redirect()
interception is enabled for all bond slaves system-wide. This allows the
path xdp_master_redirect() -> bond_xdp_get_xmit_slave() ->
bond_xdp_xmit_roundrobin_slave_get() -> bond_rr_gen_slave_id() to be
reached on a bond that was never opened.
The normal TX path (bond_xmit_roundrobin) is not affected because TX
requires the bond to be UP, which guarantees rr_tx_counter is allocated.
However, bond_xmit_get_slave() (ndo_get_xmit_slave) has the same code
pattern via bond_xmit_roundrobin_slave_get() and could theoretically
hit the same issue.
Fix this by introducing bond_create_init() to allocate rr_tx_counter
unconditionally at device creation time. It is called from both
bond_create() and bond_newlink() before register_netdevice(), and
returns -ENOMEM on failure so callers can propagate the error cleanly.
bond_setup() is not suitable for this allocation as it is a void
callback with no error return path. The conditional allocation in
bond_open() is removed. Since bond_destructor() already unconditionally
calls free_percpu(bond->rr_tx_counter), the lifecycle is clean:
allocate at creation, free at destruction.
Fixes: 879af96ffd72 ("net, core: Add support for XDP redirection to slave device")
Reported-by: syzbot+80e046b8da2820b6ba73@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/698f84c6.a70a0220.2c38d7.00cc.GAE@google.com/T/
Signed-off-by: Jiayuan Chen <jiayuan.chen@shopee.com>
---
drivers/net/bonding/bond_main.c | 18 ++++++++++++------
drivers/net/bonding/bond_netlink.c | 4 ++++
include/net/bonding.h | 1 +
3 files changed, 17 insertions(+), 6 deletions(-)
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 78cff904cdc3..806034dc301f 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -4273,18 +4273,18 @@ void bond_work_cancel_all(struct bonding *bond)
cancel_delayed_work_sync(&bond->peer_notify_work);
}
+int bond_create_init(struct bonding *bond)
+{
+ bond->rr_tx_counter = alloc_percpu(u32);
+ return bond->rr_tx_counter ? 0 : -ENOMEM;
+}
+
static int bond_open(struct net_device *bond_dev)
{
struct bonding *bond = netdev_priv(bond_dev);
struct list_head *iter;
struct slave *slave;
- if (BOND_MODE(bond) == BOND_MODE_ROUNDROBIN && !bond->rr_tx_counter) {
- bond->rr_tx_counter = alloc_percpu(u32);
- if (!bond->rr_tx_counter)
- return -ENOMEM;
- }
-
/* reset slave->backup and slave->inactive */
if (bond_has_slaves(bond)) {
bond_for_each_slave(bond, slave, iter) {
@@ -6458,6 +6458,12 @@ int bond_create(struct net *net, const char *name)
dev_net_set(bond_dev, net);
bond_dev->rtnl_link_ops = &bond_link_ops;
+ res = bond_create_init(bond);
+ if (res) {
+ free_netdev(bond_dev);
+ goto out;
+ }
+
res = register_netdevice(bond_dev);
if (res < 0) {
free_netdev(bond_dev);
diff --git a/drivers/net/bonding/bond_netlink.c b/drivers/net/bonding/bond_netlink.c
index 286f11c517f7..91595df85f06 100644
--- a/drivers/net/bonding/bond_netlink.c
+++ b/drivers/net/bonding/bond_netlink.c
@@ -598,6 +598,10 @@ static int bond_newlink(struct net_device *bond_dev,
struct nlattr **tb = params->tb;
int err;
+ err = bond_create_init(bond);
+ if (err)
+ return err;
+
err = register_netdevice(bond_dev);
if (err)
return err;
diff --git a/include/net/bonding.h b/include/net/bonding.h
index 4ad5521e7731..dac4725f3ac0 100644
--- a/include/net/bonding.h
+++ b/include/net/bonding.h
@@ -714,6 +714,7 @@ void bond_slave_arr_work_rearm(struct bonding *bond, unsigned long delay);
void bond_peer_notify_work_rearm(struct bonding *bond, unsigned long delay);
void bond_work_init_all(struct bonding *bond);
void bond_work_cancel_all(struct bonding *bond);
+int bond_create_init(struct bonding *bond);
#ifdef CONFIG_PROC_FS
void bond_create_proc_entry(struct bonding *bond);
--
2.43.0
| null | null | null | [PATCH net v2 1/2] bonding: fix null-ptr-deref in bond_rr_gen_slave_id() | February 27, 2026 at 18:21, "Sebastian Andrzej Siewior" <bigeasy@linutronix.de mailto:bigeasy@linutronix.de?to=%22Sebastian%20Andrzej%20Siewior%22%20%3Cbigeasy%40linutronix.de%3E > wrote:
The mode change requires the device to be DOWN. BOND_OPT_MODE is defined with BOND_OPTFLAG_IFDOWN,
and bond_opt_check_flags() enforces this:
if ((opt->flags & BOND_OPTFLAG_IFDOWN) && (bond->dev->flags & IFF_UP))
return -EBUSY;
The same restriction applies to the netlink path as well. Both sysfs and netlink go
through __bond_opt_set() → bond_opt_check_deps(), which enforces BOND_OPTFLAG_IFDOWN
for mode change. Attempting to change the mode while the device is UP returns -EBUSY
regardless of how the change is requested.
So unconditional allocation in bond_init() covers all cases: whether the device is created in
round-robin mode, or switched to round-robin later
(which requires being DOWN, meaning bond_open() hasn't been called with the new mode yet).
Thanks, | {
"author": "\"Jiayuan Chen\" <jiayuan.chen@linux.dev>",
"date": "Fri, 27 Feb 2026 10:37:53 +0000",
"is_openbsd": false,
"thread_id": "faed6bde8e7a96021bc9d55176b764c592f6ce08@linux.dev.mbox.gz"
} |
lkml_critique | netdev | Set MACB_CAPS_EEE for the Raspberry Pi 5 RP1 southbridge
(Cadence GEM_GXL rev 0x00070109 paired with BCM54213PE PHY).
EEE has been verified on RP1 hardware: the LPI counter registers
at 0x270-0x27c return valid data, the TXLPIEN bit in NCR (bit 19)
controls LPI transmission correctly, and ethtool --show-eee reports
the negotiated state after link-up.
Other GEM variants that share the same LPI register layout (SAMA5D2,
SAME70, PIC32CZ) can be enabled by adding MACB_CAPS_EEE to their
respective config entries once tested.
Reviewed-by: Théo Lebrun <theo.lebrun@bootlin.com>
Signed-off-by: Nicolai Buchwitz <nb@tipi-net.de>
---
drivers/net/ethernet/cadence/macb_main.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 3e724417d444..0196a13c0688 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -5529,7 +5529,8 @@ static const struct macb_config eyeq5_config = {
static const struct macb_config raspberrypi_rp1_config = {
.caps = MACB_CAPS_GIGABIT_MODE_AVAILABLE | MACB_CAPS_CLK_HW_CHG |
MACB_CAPS_JUMBO |
- MACB_CAPS_GEM_HAS_PTP,
+ MACB_CAPS_GEM_HAS_PTP |
+ MACB_CAPS_EEE,
.dma_burst_length = 16,
.clk_init = macb_clk_init,
.init = macb_init,
--
2.51.0
| null | null | null | [PATCH net-next v5 4/5] net: cadence: macb: enable EEE for Raspberry Pi RP1 | The GEM MAC provides four read-only, clear-on-read LPI statistics
registers at offsets 0x270-0x27c:
GEM_RXLPI (0x270): RX LPI transition count (16-bit)
GEM_RXLPITIME (0x274): cumulative RX LPI time (24-bit)
GEM_TXLPI (0x278): TX LPI transition count (16-bit)
GEM_TXLPITIME (0x27c): cumulative TX LPI time (24-bit)
Add register offset definitions, extend struct gem_stats with
corresponding u64 software accumulators, and register the four
counters in gem_statistics[] so they appear in ethtool -S output.
Because the hardware counters clear on read, the existing
macb_update_stats() path accumulates them into the u64 fields on
every stats poll, preventing loss between userspace reads.
These registers are present on SAMA5D2, SAME70, PIC32CZ, and RP1
variants of the Cadence GEM IP and have been confirmed on RP1 via
devmem reads.
Reviewed-by: Théo Lebrun <theo.lebrun@bootlin.com>
Signed-off-by: Nicolai Buchwitz <nb@tipi-net.de>
---
drivers/net/ethernet/cadence/macb.h | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/drivers/net/ethernet/cadence/macb.h b/drivers/net/ethernet/cadence/macb.h
index 87414a2ddf6e..19aa98d01c8c 100644
--- a/drivers/net/ethernet/cadence/macb.h
+++ b/drivers/net/ethernet/cadence/macb.h
@@ -170,6 +170,10 @@
#define GEM_PCSANNPTX 0x021c /* PCS AN Next Page TX */
#define GEM_PCSANNPLP 0x0220 /* PCS AN Next Page LP */
#define GEM_PCSANEXTSTS 0x023c /* PCS AN Extended Status */
+#define GEM_RXLPI 0x0270 /* RX LPI Transitions */
+#define GEM_RXLPITIME 0x0274 /* RX LPI Time */
+#define GEM_TXLPI 0x0278 /* TX LPI Transitions */
+#define GEM_TXLPITIME 0x027c /* TX LPI Time */
#define GEM_DCFG1 0x0280 /* Design Config 1 */
#define GEM_DCFG2 0x0284 /* Design Config 2 */
#define GEM_DCFG3 0x0288 /* Design Config 3 */
@@ -1043,6 +1047,10 @@ struct gem_stats {
u64 rx_ip_header_checksum_errors;
u64 rx_tcp_checksum_errors;
u64 rx_udp_checksum_errors;
+ u64 rx_lpi_transitions;
+ u64 rx_lpi_time;
+ u64 tx_lpi_transitions;
+ u64 tx_lpi_time;
};
/* Describes the name and offset of an individual statistic register, as
@@ -1142,6 +1150,10 @@ static const struct gem_statistic gem_statistics[] = {
GEM_BIT(NDS_RXERR)),
GEM_STAT_TITLE_BITS(RXUDPCCNT, "rx_udp_checksum_errors",
GEM_BIT(NDS_RXERR)),
+ GEM_STAT_TITLE(RXLPI, "rx_lpi_transitions"),
+ GEM_STAT_TITLE(RXLPITIME, "rx_lpi_time"),
+ GEM_STAT_TITLE(TXLPI, "tx_lpi_transitions"),
+ GEM_STAT_TITLE(TXLPITIME, "tx_lpi_time"),
};
#define GEM_STATS_LEN ARRAY_SIZE(gem_statistics)
--
2.51.0 | {
"author": "Nicolai Buchwitz <nb@tipi-net.de>",
"date": "Fri, 27 Feb 2026 16:06:06 +0100",
"is_openbsd": false,
"thread_id": "20260227150610.242215-3-nb@tipi-net.de.mbox.gz"
} |
lkml_critique | netdev | Set MACB_CAPS_EEE for the Raspberry Pi 5 RP1 southbridge
(Cadence GEM_GXL rev 0x00070109 paired with BCM54213PE PHY).
EEE has been verified on RP1 hardware: the LPI counter registers
at 0x270-0x27c return valid data, the TXLPIEN bit in NCR (bit 19)
controls LPI transmission correctly, and ethtool --show-eee reports
the negotiated state after link-up.
Other GEM variants that share the same LPI register layout (SAMA5D2,
SAME70, PIC32CZ) can be enabled by adding MACB_CAPS_EEE to their
respective config entries once tested.
Reviewed-by: Théo Lebrun <theo.lebrun@bootlin.com>
Signed-off-by: Nicolai Buchwitz <nb@tipi-net.de>
---
drivers/net/ethernet/cadence/macb_main.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 3e724417d444..0196a13c0688 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -5529,7 +5529,8 @@ static const struct macb_config eyeq5_config = {
static const struct macb_config raspberrypi_rp1_config = {
.caps = MACB_CAPS_GIGABIT_MODE_AVAILABLE | MACB_CAPS_CLK_HW_CHG |
MACB_CAPS_JUMBO |
- MACB_CAPS_GEM_HAS_PTP,
+ MACB_CAPS_GEM_HAS_PTP |
+ MACB_CAPS_EEE,
.dma_burst_length = 16,
.clk_init = macb_clk_init,
.init = macb_init,
--
2.51.0
| null | null | null | [PATCH net-next v5 4/5] net: cadence: macb: enable EEE for Raspberry Pi RP1 | The GEM MAC has hardware LPI registers (NCR bit 19: TXLPIEN) but no
built-in idle timer, so asserting TXLPIEN blocks all TX immediately
with no automatic wake. A software idle timer is required, as noted
in Microchip documentation (section 40.6.19): "It is best to use
firmware to control LPI."
Implement phylink managed EEE using the mac_enable_tx_lpi and
mac_disable_tx_lpi callbacks:
- macb_tx_lpi_set(): atomically sets or clears TXLPIEN under the
existing bp->lock spinlock; returns bool indicating whether the
register actually changed, avoiding redundant writes.
- macb_tx_lpi_work_fn(): delayed_work handler that enters LPI if all
TX queues are idle and EEE is still active.
- macb_tx_lpi_schedule(): arms the work timer using the LPI timer
value provided by phylink (default 250 ms). Called from
macb_tx_complete() after each TX drain so the idle countdown
restarts whenever the ring goes quiet.
- macb_tx_lpi_wake(): called from macb_start_xmit() before TSTART.
Clears TXLPIEN and applies a 50 us udelay for PHY wake (IEEE
802.3az Tw_sys_tx is 16.5 us for 1000BASE-T / 30 us for
100BASE-TX; GEM has no hardware enforcement). Only delays when
TXLPIEN was actually set, avoiding overhead on the common path.
The delay is placed after tx_head is advanced so the work_fn's
queue-idle check sees a non-empty ring and cannot race back into
LPI before the frame is transmitted.
- mac_enable_tx_lpi: stores the timer and sets eee_active, then
defers the first LPI entry by 1 second per IEEE 802.3az section
22.7a.
- mac_disable_tx_lpi: clears eee_active, cancels the work, and
deasserts TXLPIEN.
Populate phylink_config lpi_interfaces (MII, GMII, RGMII variants)
and lpi_capabilities (MAC_100FD | MAC_1000FD) so phylink can
negotiate EEE with the PHY and call the callbacks appropriately.
Set lpi_timer_default to 250000 us and eee_enabled_default to true.
Reviewed-by: Théo Lebrun <theo.lebrun@bootlin.com>
Signed-off-by: Nicolai Buchwitz <nb@tipi-net.de>
---
drivers/net/ethernet/cadence/macb.h | 8 ++
drivers/net/ethernet/cadence/macb_main.c | 112 +++++++++++++++++++++++
2 files changed, 120 insertions(+)
diff --git a/drivers/net/ethernet/cadence/macb.h b/drivers/net/ethernet/cadence/macb.h
index 19aa98d01c8c..c69828b27dae 100644
--- a/drivers/net/ethernet/cadence/macb.h
+++ b/drivers/net/ethernet/cadence/macb.h
@@ -309,6 +309,8 @@
#define MACB_IRXFCS_SIZE 1
/* GEM specific NCR bitfields. */
+#define GEM_TXLPIEN_OFFSET 19
+#define GEM_TXLPIEN_SIZE 1
#define GEM_ENABLE_HS_MAC_OFFSET 31
#define GEM_ENABLE_HS_MAC_SIZE 1
@@ -783,6 +785,7 @@
#define MACB_CAPS_DMA_PTP BIT(22)
#define MACB_CAPS_RSC BIT(23)
#define MACB_CAPS_NO_LSO BIT(24)
+#define MACB_CAPS_EEE BIT(25)
/* LSO settings */
#define MACB_LSO_UFO_ENABLE 0x01
@@ -1369,6 +1372,11 @@ struct macb {
struct work_struct hresp_err_bh_work;
+ /* EEE / LPI state */
+ bool eee_active;
+ struct delayed_work tx_lpi_work;
+ u32 tx_lpi_timer;
+
int rx_bd_rd_prefetch;
int tx_bd_rd_prefetch;
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 02eab26fd98b..c23485f049d3 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -10,6 +10,7 @@
#include <linux/clk-provider.h>
#include <linux/clk.h>
#include <linux/crc32.h>
+#include <linux/delay.h>
#include <linux/dma-mapping.h>
#include <linux/etherdevice.h>
#include <linux/firmware/xlnx-zynqmp.h>
@@ -621,6 +622,94 @@ static const struct phylink_pcs_ops macb_phylink_pcs_ops = {
.pcs_config = macb_pcs_config,
};
+static bool macb_tx_lpi_set(struct macb *bp, bool enable)
+{
+ unsigned long flags;
+ u32 old, ncr;
+
+ spin_lock_irqsave(&bp->lock, flags);
+ ncr = macb_readl(bp, NCR);
+ old = ncr;
+ if (enable)
+ ncr |= GEM_BIT(TXLPIEN);
+ else
+ ncr &= ~GEM_BIT(TXLPIEN);
+ if (old != ncr)
+ macb_writel(bp, NCR, ncr);
+ spin_unlock_irqrestore(&bp->lock, flags);
+
+ return old != ncr;
+}
+
+static bool macb_tx_all_queues_idle(struct macb *bp)
+{
+ unsigned int q;
+
+ for (q = 0; q < bp->num_queues; q++) {
+ struct macb_queue *queue = &bp->queues[q];
+
+ if (queue->tx_head != queue->tx_tail)
+ return false;
+ }
+ return true;
+}
+
+static void macb_tx_lpi_work_fn(struct work_struct *work)
+{
+ struct macb *bp = container_of(work, struct macb, tx_lpi_work.work);
+
+ if (bp->eee_active && macb_tx_all_queues_idle(bp))
+ macb_tx_lpi_set(bp, true);
+}
+
+static void macb_tx_lpi_schedule(struct macb *bp)
+{
+ if (bp->eee_active)
+ mod_delayed_work(system_wq, &bp->tx_lpi_work,
+ usecs_to_jiffies(bp->tx_lpi_timer));
+}
+
+/* Wake from LPI before transmitting. The MAC must deassert TXLPIEN
+ * and wait for the PHY to exit LPI before any frame can be sent.
+ * IEEE 802.3az Tw_sys is ~17us for 1000BASE-T, ~30us for 100BASE-TX;
+ * we use a conservative 50us.
+ */
+static void macb_tx_lpi_wake(struct macb *bp)
+{
+ if (!macb_tx_lpi_set(bp, false))
+ return;
+
+ cancel_delayed_work(&bp->tx_lpi_work);
+ udelay(50);
+}
+
+static void macb_mac_disable_tx_lpi(struct phylink_config *config)
+{
+ struct net_device *ndev = to_net_dev(config->dev);
+ struct macb *bp = netdev_priv(ndev);
+
+ bp->eee_active = false;
+ cancel_delayed_work_sync(&bp->tx_lpi_work);
+ macb_tx_lpi_set(bp, false);
+}
+
+static int macb_mac_enable_tx_lpi(struct phylink_config *config, u32 timer,
+ bool tx_clk_stop)
+{
+ struct net_device *ndev = to_net_dev(config->dev);
+ struct macb *bp = netdev_priv(ndev);
+
+ bp->tx_lpi_timer = timer;
+ bp->eee_active = true;
+
+ /* Defer initial LPI entry by 1 second after link-up per
+ * IEEE 802.3az section 22.7a.
+ */
+ mod_delayed_work(system_wq, &bp->tx_lpi_work, msecs_to_jiffies(1000));
+
+ return 0;
+}
+
static void macb_mac_config(struct phylink_config *config, unsigned int mode,
const struct phylink_link_state *state)
{
@@ -769,6 +858,8 @@ static const struct phylink_mac_ops macb_phylink_ops = {
.mac_config = macb_mac_config,
.mac_link_down = macb_mac_link_down,
.mac_link_up = macb_mac_link_up,
+ .mac_disable_tx_lpi = macb_mac_disable_tx_lpi,
+ .mac_enable_tx_lpi = macb_mac_enable_tx_lpi,
};
static bool macb_phy_handle_exists(struct device_node *dn)
@@ -864,6 +955,18 @@ static int macb_mii_probe(struct net_device *dev)
}
}
+ /* Configure EEE LPI if supported */
+ if (bp->caps & MACB_CAPS_EEE) {
+ __set_bit(PHY_INTERFACE_MODE_MII,
+ bp->phylink_config.lpi_interfaces);
+ __set_bit(PHY_INTERFACE_MODE_GMII,
+ bp->phylink_config.lpi_interfaces);
+ phy_interface_set_rgmii(bp->phylink_config.lpi_interfaces);
+ bp->phylink_config.lpi_capabilities = MAC_100FD | MAC_1000FD;
+ bp->phylink_config.lpi_timer_default = 250000;
+ bp->phylink_config.eee_enabled_default = true;
+ }
+
bp->phylink = phylink_create(&bp->phylink_config, bp->pdev->dev.fwnode,
bp->phy_interface, &macb_phylink_ops);
if (IS_ERR(bp->phylink)) {
@@ -1260,6 +1363,9 @@ static int macb_tx_complete(struct macb_queue *queue, int budget)
netif_wake_subqueue(bp->dev, queue_index);
spin_unlock_irqrestore(&queue->tx_ptr_lock, flags);
+ if (packets)
+ macb_tx_lpi_schedule(bp);
+
return packets;
}
@@ -2365,6 +2471,8 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb, struct net_device *dev)
netdev_tx_sent_queue(netdev_get_tx_queue(bp->dev, queue_index),
skb->len);
+ macb_tx_lpi_wake(bp);
+
spin_lock(&bp->lock);
macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TSTART));
spin_unlock(&bp->lock);
@@ -3026,6 +3134,8 @@ static int macb_close(struct net_device *dev)
netdev_tx_reset_queue(netdev_get_tx_queue(dev, q));
}
+ cancel_delayed_work_sync(&bp->tx_lpi_work);
+
phylink_stop(bp->phylink);
phylink_disconnect_phy(bp->phylink);
@@ -5633,6 +5743,7 @@ static int macb_probe(struct platform_device *pdev)
}
INIT_WORK(&bp->hresp_err_bh_work, macb_hresp_error_task);
+ INIT_DELAYED_WORK(&bp->tx_lpi_work, macb_tx_lpi_work_fn);
netdev_info(dev, "Cadence %s rev 0x%08x at 0x%08lx irq %d (%pM)\n",
macb_is_gem(bp) ? "GEM" : "MACB", macb_readl(bp, MID),
@@ -5676,6 +5787,7 @@ static void macb_remove(struct platform_device *pdev)
mdiobus_free(bp->mii_bus);
device_set_wakeup_enable(&bp->pdev->dev, 0);
+ cancel_delayed_work_sync(&bp->tx_lpi_work);
cancel_work_sync(&bp->hresp_err_bh_work);
pm_runtime_disable(&pdev->dev);
pm_runtime_dont_use_autosuspend(&pdev->dev);
--
2.51.0 | {
"author": "Nicolai Buchwitz <nb@tipi-net.de>",
"date": "Fri, 27 Feb 2026 16:06:07 +0100",
"is_openbsd": false,
"thread_id": "20260227150610.242215-3-nb@tipi-net.de.mbox.gz"
} |
lkml_critique | netdev | Set MACB_CAPS_EEE for the Raspberry Pi 5 RP1 southbridge
(Cadence GEM_GXL rev 0x00070109 paired with BCM54213PE PHY).
EEE has been verified on RP1 hardware: the LPI counter registers
at 0x270-0x27c return valid data, the TXLPIEN bit in NCR (bit 19)
controls LPI transmission correctly, and ethtool --show-eee reports
the negotiated state after link-up.
Other GEM variants that share the same LPI register layout (SAMA5D2,
SAME70, PIC32CZ) can be enabled by adding MACB_CAPS_EEE to their
respective config entries once tested.
Reviewed-by: Théo Lebrun <theo.lebrun@bootlin.com>
Signed-off-by: Nicolai Buchwitz <nb@tipi-net.de>
---
drivers/net/ethernet/cadence/macb_main.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 3e724417d444..0196a13c0688 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -5529,7 +5529,8 @@ static const struct macb_config eyeq5_config = {
static const struct macb_config raspberrypi_rp1_config = {
.caps = MACB_CAPS_GIGABIT_MODE_AVAILABLE | MACB_CAPS_CLK_HW_CHG |
MACB_CAPS_JUMBO |
- MACB_CAPS_GEM_HAS_PTP,
+ MACB_CAPS_GEM_HAS_PTP |
+ MACB_CAPS_EEE,
.dma_burst_length = 16,
.clk_init = macb_clk_init,
.init = macb_init,
--
2.51.0
| null | null | null | [PATCH net-next v5 4/5] net: cadence: macb: enable EEE for Raspberry Pi RP1 | Add Energy Efficient Ethernet (IEEE 802.3az) support to the Cadence GEM
(macb) driver using phylink's managed EEE framework. The GEM MAC has
hardware LPI registers but no built-in idle timer, so the driver
implements software-managed TX LPI using a delayed_work timer while
delegating EEE negotiation and ethtool state to phylink.
Changes from v4:
- Removed redundant MACB_CAPS_EEE guards from macb_get_eee/set_eee;
phylink already returns -EOPNOTSUPP when lpi_capabilities and
lpi_interfaces are not populated. Based on feedback from Russell King.
- Added patch 5 enabling EEE for Mobileye EyeQ5, tested by Théo Lebrun
using a hardware loopback.
Changes from v3:
- Dropped the register-definitions-only patch; LPI counter offsets
(GEM_RXLPI/RXLPITIME/TXLPI/TXLPITIME) now land in the statistics
patch, and TXLPIEN + MACB_CAPS_EEE are introduced alongside the TX
LPI implementation where they are first used. Series is now 4 patches.
- Add Reviewed-by: Théo Lebrun <theo.lebrun@bootlin.com> to all patches.
- Split chained assignment in macb_tx_lpi_set() (suggested by checkpatch).
Changes from v2:
- macb_tx_lpi_set() now returns bool indicating whether the register
value actually changed, avoiding redundant writes.
- Removed tx_lpi_enabled field from struct macb; LPI state is tracked
entirely within the spinlock-protected register read/modify/write.
- macb_tx_lpi_wake() uses the return value of macb_tx_lpi_set() to
skip the cancel/udelay when TXLPIEN was already clear.
All changes based on feedback from Russell King.
Changes from v1:
- Rewrote to use phylink managed EEE (mac_enable_tx_lpi /
mac_disable_tx_lpi callbacks) instead of the obsolete
phy_init_eee() approach, as recommended by Russell King.
- ethtool get_eee/set_eee are now pure phylink passthroughs.
- Removed all manual EEE state tracking from mac_link_up/down;
phylink handles the lifecycle.
The series is structured as follows:
1. LPI statistics: Expose the four hardware EEE counters (RX/TX LPI
transitions and time) through ethtool -S, accumulated in software
since they are clear-on-read. Adds register offset definitions
GEM_RXLPI/RXLPITIME/TXLPI/TXLPITIME (0x270-0x27c).
2. TX LPI engine: Introduces GEM_TXLPIEN (NCR bit 19) and
MACB_CAPS_EEE alongside the implementation that uses them.
phylink mac_enable_tx_lpi / mac_disable_tx_lpi callbacks with a
delayed_work-based idle timer. LPI entry is deferred 1 second
after link-up per IEEE 802.3az. Wake before transmit with a
conservative 50us PHY wake delay (IEEE 802.3az Tw_sys_tx).
3. ethtool EEE ops: get_eee/set_eee delegating to phylink for PHY
negotiation and timer management.
4. RP1 enablement: Set MACB_CAPS_EEE for the Raspberry Pi 5's RP1
southbridge (Cadence GEM_GXL rev 0x00070109 + BCM54213PE PHY).
5. EyeQ5 enablement: Set MACB_CAPS_EEE for the Mobileye EyeQ5 GEM
instance, verified with a hardware loopback by Théo Lebrun.
Tested on Raspberry Pi 5 (1000BASE-T, BCM54213PE PHY, 250ms LPI timer):
iperf3 throughput (no regression):
TCP TX: 937.8 Mbit/s (EEE on) vs 937.0 Mbit/s (EEE off)
TCP RX: 936.5 Mbit/s both
Latency (ping RTT, small expected increase from LPI wake):
1s interval: 0.273 ms (EEE on) vs 0.181 ms (EEE off)
10ms interval: 0.206 ms (EEE on) vs 0.168 ms (EEE off)
flood ping: 0.200 ms (EEE on) vs 0.156 ms (EEE off)
LPI counters (ethtool -S, 1s-interval ping, EEE on):
tx_lpi_transitions: 112
tx_lpi_time: 15574651
Zero packet loss across all tests. Also verified with
ethtool --show-eee / --set-eee and cable unplug/replug cycling.
Nicolai Buchwitz (5):
net: cadence: macb: add EEE LPI statistics counters
net: cadence: macb: implement EEE TX LPI support
net: cadence: macb: add ethtool EEE support
net: cadence: macb: enable EEE for Raspberry Pi RP1
net: cadence: macb: enable EEE for Mobileye EyeQ5
drivers/net/ethernet/cadence/macb.h | 20 +++++
drivers/net/ethernet/cadence/macb_main.c | 133 ++++++++++++++++++++++++++++++-
2 files changed, 151 insertions(+), 2 deletions(-)
--
2.51.0 | {
"author": "Nicolai Buchwitz <nb@tipi-net.de>",
"date": "Fri, 27 Feb 2026 16:06:05 +0100",
"is_openbsd": false,
"thread_id": "20260227150610.242215-3-nb@tipi-net.de.mbox.gz"
} |
lkml_critique | netdev | Set MACB_CAPS_EEE for the Raspberry Pi 5 RP1 southbridge
(Cadence GEM_GXL rev 0x00070109 paired with BCM54213PE PHY).
EEE has been verified on RP1 hardware: the LPI counter registers
at 0x270-0x27c return valid data, the TXLPIEN bit in NCR (bit 19)
controls LPI transmission correctly, and ethtool --show-eee reports
the negotiated state after link-up.
Other GEM variants that share the same LPI register layout (SAMA5D2,
SAME70, PIC32CZ) can be enabled by adding MACB_CAPS_EEE to their
respective config entries once tested.
Reviewed-by: Théo Lebrun <theo.lebrun@bootlin.com>
Signed-off-by: Nicolai Buchwitz <nb@tipi-net.de>
---
drivers/net/ethernet/cadence/macb_main.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 3e724417d444..0196a13c0688 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -5529,7 +5529,8 @@ static const struct macb_config eyeq5_config = {
static const struct macb_config raspberrypi_rp1_config = {
.caps = MACB_CAPS_GIGABIT_MODE_AVAILABLE | MACB_CAPS_CLK_HW_CHG |
MACB_CAPS_JUMBO |
- MACB_CAPS_GEM_HAS_PTP,
+ MACB_CAPS_GEM_HAS_PTP |
+ MACB_CAPS_EEE,
.dma_burst_length = 16,
.clk_init = macb_clk_init,
.init = macb_init,
--
2.51.0
| null | null | null | [PATCH net-next v5 4/5] net: cadence: macb: enable EEE for Raspberry Pi RP1 | Implement get_eee and set_eee ethtool ops for GEM as simple passthroughs
to phylink_ethtool_get_eee() and phylink_ethtool_set_eee().
No MACB_CAPS_EEE guard is needed: phylink returns -EOPNOTSUPP from both
ops when mac_supports_eee is false, which is the case when
lpi_capabilities and lpi_interfaces are not populated. Those fields are
only set when MACB_CAPS_EEE is present (previous patch), so phylink
already handles the unsupported case correctly.
Reviewed-by: Théo Lebrun <theo.lebrun@bootlin.com>
Signed-off-by: Nicolai Buchwitz <nb@tipi-net.de>
---
drivers/net/ethernet/cadence/macb_main.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index c23485f049d3..3e724417d444 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -4050,6 +4050,20 @@ static const struct ethtool_ops macb_ethtool_ops = {
.set_ringparam = macb_set_ringparam,
};
+static int macb_get_eee(struct net_device *dev, struct ethtool_keee *eee)
+{
+ struct macb *bp = netdev_priv(dev);
+
+ return phylink_ethtool_get_eee(bp->phylink, eee);
+}
+
+static int macb_set_eee(struct net_device *dev, struct ethtool_keee *eee)
+{
+ struct macb *bp = netdev_priv(dev);
+
+ return phylink_ethtool_set_eee(bp->phylink, eee);
+}
+
static const struct ethtool_ops gem_ethtool_ops = {
.get_regs_len = macb_get_regs_len,
.get_regs = macb_get_regs,
@@ -4072,6 +4086,8 @@ static const struct ethtool_ops gem_ethtool_ops = {
.set_rxnfc = gem_set_rxnfc,
.get_rx_ring_count = gem_get_rx_ring_count,
.nway_reset = phy_ethtool_nway_reset,
+ .get_eee = macb_get_eee,
+ .set_eee = macb_set_eee,
};
static int macb_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
--
2.51.0 | {
"author": "Nicolai Buchwitz <nb@tipi-net.de>",
"date": "Fri, 27 Feb 2026 16:06:08 +0100",
"is_openbsd": false,
"thread_id": "20260227150610.242215-3-nb@tipi-net.de.mbox.gz"
} |
lkml_critique | netdev | Set MACB_CAPS_EEE for the Raspberry Pi 5 RP1 southbridge
(Cadence GEM_GXL rev 0x00070109 paired with BCM54213PE PHY).
EEE has been verified on RP1 hardware: the LPI counter registers
at 0x270-0x27c return valid data, the TXLPIEN bit in NCR (bit 19)
controls LPI transmission correctly, and ethtool --show-eee reports
the negotiated state after link-up.
Other GEM variants that share the same LPI register layout (SAMA5D2,
SAME70, PIC32CZ) can be enabled by adding MACB_CAPS_EEE to their
respective config entries once tested.
Reviewed-by: Théo Lebrun <theo.lebrun@bootlin.com>
Signed-off-by: Nicolai Buchwitz <nb@tipi-net.de>
---
drivers/net/ethernet/cadence/macb_main.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 3e724417d444..0196a13c0688 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -5529,7 +5529,8 @@ static const struct macb_config eyeq5_config = {
static const struct macb_config raspberrypi_rp1_config = {
.caps = MACB_CAPS_GIGABIT_MODE_AVAILABLE | MACB_CAPS_CLK_HW_CHG |
MACB_CAPS_JUMBO |
- MACB_CAPS_GEM_HAS_PTP,
+ MACB_CAPS_GEM_HAS_PTP |
+ MACB_CAPS_EEE,
.dma_burst_length = 16,
.clk_init = macb_clk_init,
.init = macb_init,
--
2.51.0
| null | null | null | [PATCH net-next v5 4/5] net: cadence: macb: enable EEE for Raspberry Pi RP1 | Set MACB_CAPS_EEE for the Mobileye EyeQ5 GEM instance. EEE has been
verified on EyeQ5 hardware using a loopback setup with ethtool
--show-eee confirming EEE active on both ends at 100baseT/Full and
1000baseT/Full.
Tested-by: Théo Lebrun <theo.lebrun@bootlin.com>
Signed-off-by: Nicolai Buchwitz <nb@tipi-net.de>
---
drivers/net/ethernet/cadence/macb_main.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 0196a13c0688..58a265ee9f9e 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -5518,7 +5518,7 @@ static const struct macb_config versal_config = {
static const struct macb_config eyeq5_config = {
.caps = MACB_CAPS_GIGABIT_MODE_AVAILABLE | MACB_CAPS_JUMBO |
MACB_CAPS_GEM_HAS_PTP | MACB_CAPS_QUEUE_DISABLE |
- MACB_CAPS_NO_LSO,
+ MACB_CAPS_NO_LSO | MACB_CAPS_EEE,
.dma_burst_length = 16,
.clk_init = macb_clk_init,
.init = eyeq5_init,
--
2.51.0 | {
"author": "Nicolai Buchwitz <nb@tipi-net.de>",
"date": "Fri, 27 Feb 2026 16:06:10 +0100",
"is_openbsd": false,
"thread_id": "20260227150610.242215-3-nb@tipi-net.de.mbox.gz"
} |
lkml_critique | netdev | Hi,
In bnge_hwrm_func_resc_qcaps() we read the firmware-reported value:
hw_resc->max_hw_ring_grps =
le16_to_cpu(resp->max_hw_ring_grps);
but later overwrite it with:
hw_resc->max_hw_ring_grps = hw_resc->max_rx_rings;
This effectively discards the firmware capability and assumes
max_hw_ring_grps == max_rx_rings.
Is this intentional?
Thanks,
Alok
---
drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c b/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c
index c46da3413417..9ba72c6b4604 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c
@@ -565,7 +565,6 @@ int bnge_hwrm_func_resc_qcaps(struct bnge_dev *bd)
hw_resc->max_stat_ctxs = le16_to_cpu(resp->max_stat_ctx);
hw_resc->max_nqs = le16_to_cpu(resp->max_msix);
- hw_resc->max_hw_ring_grps = hw_resc->max_rx_rings;
hwrm_func_resc_qcaps_exit:
bnge_hwrm_req_drop(bd, req);
--
2.50.1
| null | null | null | [query] about max_hw_ring_grps override in bnge_hwrm_func_resc_qcaps() | On Thu, Feb 26, 2026 at 5:45 PM Alok Tiwari <alok.a.tiwari@oracle.com> wrote:
Thanks for pointing this out.
Yes, this is intentional.
max_hw_ring_grps should be equal to max_rx_rings only.
We can remove hw_resc->max_hw_ring_grps = le16_to_cpu(resp->max_hw_ring_grps).
Older firmware versions used the max_hw_ring_grps field, but it is not
applicable for resource allocation
in the newer firmware versions, and this driver is not meant for older versions.
Thanks,
Vikas | {
"author": "Vikas Gupta <vikas.gupta@broadcom.com>",
"date": "Fri, 27 Feb 2026 14:47:45 +0530",
"is_openbsd": false,
"thread_id": "CAHLZf_sCzu9WGQyD9sj1oBoJVcv7ha7PzhK_gWfqFaJF5eP9HA@mail.gmail.com.mbox.gz"
} |
lkml_critique | netdev | This series fixes two issues in the bonding 802.3ad implementation
related to port state management and churn detection:
1. When disabling a port, we need to set AD_RX_PORT_DISABLED to ensure
proper state machine transitions, preventing ports from getting stuck
in AD_RX_CURRENT state.
2. The ad_churn_machine implementation is restructured to follow IEEE
802.1AX-2014 specifications correctly. The current implementation has
several issues: it doesn't transition to "none" state immediately when
synchronization is achieved, and can get stuck in churned state in
multi-aggregator scenarios.
3. Selftests are enhanced to validate both mux state machine and churn
state logic under aggregator selection and failover scenarios.
These changes ensure proper LACP state machine behavior and fix issues
where ports could remain in incorrect states during aggregator failover.
v3: re-post to net, no code change (Jakub)
v2:
* set AD_RX_PORT_DISABLED only in ad_agg_selection_logic to avoid side effect. (Paolo Abeni)
* remove actor_churn as it can only be true when the state is ACTOR_CHURN (Paolo Abeni)
* remove AD_PORT_CHURNED since we don't need it anywhere (Paolo Abeni)
* I didn't add new helper for ad_churn_machine() as it looks not help much.
* https://lore.kernel.org/netdev/20260114064921.57686-1-liuhangbin@gmail.com
v1: https://lore.kernel.org/netdev/20251124043310.34073-1-liuhangbin@gmail.com
Hangbin Liu (3):
bonding: set AD_RX_PORT_DISABLED when disabling a port
bonding: restructure ad_churn_machine
selftests: bonding: add mux and churn state testing
drivers/net/bonding/bond_3ad.c | 97 ++++++++++++++-----
.../selftests/drivers/net/bonding/Makefile | 2 +-
...nd_lacp_prio.sh => bond_lacp_ad_select.sh} | 73 ++++++++++++++
3 files changed, 146 insertions(+), 26 deletions(-)
rename tools/testing/selftests/drivers/net/bonding/{bond_lacp_prio.sh => bond_lacp_ad_select.sh} (64%)
--
2.50.1
| null | null | null | [PATCHv3 net 0/3] bonding: fix 802.3ad churn machine and port state issues | When disabling a port’s collecting and distributing states, updating only
rx_disabled is not sufficient. We also need to set AD_RX_PORT_DISABLED
so that the rx_machine transitions into the AD_RX_EXPIRED state.
One example is in ad_agg_selection_logic(): when a new aggregator is
selected and old active aggregator is disabled, if AD_RX_PORT_DISABLED is
not set, the disabled port may remain stuck in AD_RX_CURRENT due to
continuing to receive partner LACP messages.
The __disable_port() called by ad_disable_collecting_distributing()
does not have this issue, since its caller also clears the
collecting/distributing bits.
The __disable_port() called by bond_3ad_bind_slave() should also be fine,
as the RX state machine is re-initialized to AD_RX_INITIALIZE.
Let's fix this only in ad_agg_selection_logic() to reduce the chances of
unintended side effects.
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
---
drivers/net/bonding/bond_3ad.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c
index af7f74cfdc08..c47f6a69fd2a 100644
--- a/drivers/net/bonding/bond_3ad.c
+++ b/drivers/net/bonding/bond_3ad.c
@@ -1932,6 +1932,7 @@ static void ad_agg_selection_logic(struct aggregator *agg,
if (active) {
for (port = active->lag_ports; port;
port = port->next_port_in_aggregator) {
+ port->sm_rx_state = AD_RX_PORT_DISABLED;
__disable_port(port);
}
}
--
2.50.1 | {
"author": "Hangbin Liu <liuhangbin@gmail.com>",
"date": "Thu, 26 Feb 2026 12:53:28 +0000",
"is_openbsd": false,
"thread_id": "aaE32DlfrX9S5KNT@fedora.mbox.gz"
} |
lkml_critique | netdev | This series fixes two issues in the bonding 802.3ad implementation
related to port state management and churn detection:
1. When disabling a port, we need to set AD_RX_PORT_DISABLED to ensure
proper state machine transitions, preventing ports from getting stuck
in AD_RX_CURRENT state.
2. The ad_churn_machine implementation is restructured to follow IEEE
802.1AX-2014 specifications correctly. The current implementation has
several issues: it doesn't transition to "none" state immediately when
synchronization is achieved, and can get stuck in churned state in
multi-aggregator scenarios.
3. Selftests are enhanced to validate both mux state machine and churn
state logic under aggregator selection and failover scenarios.
These changes ensure proper LACP state machine behavior and fix issues
where ports could remain in incorrect states during aggregator failover.
v3: re-post to net, no code change (Jakub)
v2:
* set AD_RX_PORT_DISABLED only in ad_agg_selection_logic to avoid side effect. (Paolo Abeni)
* remove actor_churn as it can only be true when the state is ACTOR_CHURN (Paolo Abeni)
* remove AD_PORT_CHURNED since we don't need it anywhere (Paolo Abeni)
* I didn't add new helper for ad_churn_machine() as it looks not help much.
* https://lore.kernel.org/netdev/20260114064921.57686-1-liuhangbin@gmail.com
v1: https://lore.kernel.org/netdev/20251124043310.34073-1-liuhangbin@gmail.com
Hangbin Liu (3):
bonding: set AD_RX_PORT_DISABLED when disabling a port
bonding: restructure ad_churn_machine
selftests: bonding: add mux and churn state testing
drivers/net/bonding/bond_3ad.c | 97 ++++++++++++++-----
.../selftests/drivers/net/bonding/Makefile | 2 +-
...nd_lacp_prio.sh => bond_lacp_ad_select.sh} | 73 ++++++++++++++
3 files changed, 146 insertions(+), 26 deletions(-)
rename tools/testing/selftests/drivers/net/bonding/{bond_lacp_prio.sh => bond_lacp_ad_select.sh} (64%)
--
2.50.1
| null | null | null | [PATCHv3 net 0/3] bonding: fix 802.3ad churn machine and port state issues | The current ad_churn_machine implementation only transitions the
actor/partner churn state to churned or none after the churn timer expires.
However, IEEE 802.1AX-2014 specifies that a port should enter the none
state immediately once the actor’s port state enters synchronization.
Another issue is that if the churn timer expires while the churn machine is
not in the monitor state (e.g. already in churn), the state may remain
stuck indefinitely with no further transitions. This becomes visible in
multi-aggregator scenarios. For example:
Ports 1 and 2 are in aggregator 1 (active)
Ports 3 and 4 are in aggregator 2 (backup)
Ports 1 and 2 should be in none
Ports 3 and 4 should be in churned
If a failover occurs due to port 2 link down/up, aggregator 2 becomes active.
Under the current implementation, the resulting states may look like:
agg 1 (backup): port 1 -> none, port 2 -> churned
agg 2 (active): ports 3,4 keep in churned.
The root cause is that ad_churn_machine() only clears the
AD_PORT_CHURNED flag and starts a timer. When a churned port becomes active,
its RX state becomes AD_RX_CURRENT, preventing the churn flag from being set
again, leaving no way to retrigger the timer. Fixing this solely in
ad_rx_machine() is insufficient.
This patch rewrites ad_churn_machine according to IEEE 802.1AX-2014
(Figures 6-23 and 6-24), ensuring correct churn detection, state transitions,
and timer behavior. With new implementation, there is no need to set
AD_PORT_CHURNED in ad_rx_machine().
Fixes: 14c9551a32eb ("bonding: Implement port churn-machine (AD standard 43.4.17).")
Reported-by: Liang Li <liali@redhat.com>
Tested-by: Liang Li <liali@redhat.com>
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
---
drivers/net/bonding/bond_3ad.c | 96 +++++++++++++++++++++++++---------
1 file changed, 71 insertions(+), 25 deletions(-)
diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c
index c47f6a69fd2a..68258d61fd1c 100644
--- a/drivers/net/bonding/bond_3ad.c
+++ b/drivers/net/bonding/bond_3ad.c
@@ -44,7 +44,6 @@
#define AD_PORT_STANDBY 0x80
#define AD_PORT_SELECTED 0x100
#define AD_PORT_MOVED 0x200
-#define AD_PORT_CHURNED (AD_PORT_ACTOR_CHURN | AD_PORT_PARTNER_CHURN)
/* Port Key definitions
* key is determined according to the link speed, duplex and
@@ -1254,7 +1253,6 @@ static void ad_rx_machine(struct lacpdu *lacpdu, struct port *port)
/* first, check if port was reinitialized */
if (port->sm_vars & AD_PORT_BEGIN) {
port->sm_rx_state = AD_RX_INITIALIZE;
- port->sm_vars |= AD_PORT_CHURNED;
/* check if port is not enabled */
} else if (!(port->sm_vars & AD_PORT_BEGIN) && !port->is_enabled)
port->sm_rx_state = AD_RX_PORT_DISABLED;
@@ -1262,8 +1260,6 @@ static void ad_rx_machine(struct lacpdu *lacpdu, struct port *port)
else if (lacpdu && ((port->sm_rx_state == AD_RX_EXPIRED) ||
(port->sm_rx_state == AD_RX_DEFAULTED) ||
(port->sm_rx_state == AD_RX_CURRENT))) {
- if (port->sm_rx_state != AD_RX_CURRENT)
- port->sm_vars |= AD_PORT_CHURNED;
port->sm_rx_timer_counter = 0;
port->sm_rx_state = AD_RX_CURRENT;
} else {
@@ -1347,7 +1343,6 @@ static void ad_rx_machine(struct lacpdu *lacpdu, struct port *port)
port->partner_oper.port_state |= LACP_STATE_LACP_TIMEOUT;
port->sm_rx_timer_counter = __ad_timer_to_ticks(AD_CURRENT_WHILE_TIMER, (u16)(AD_SHORT_TIMEOUT));
port->actor_oper_port_state |= LACP_STATE_EXPIRED;
- port->sm_vars |= AD_PORT_CHURNED;
break;
case AD_RX_DEFAULTED:
__update_default_selected(port);
@@ -1379,11 +1374,41 @@ static void ad_rx_machine(struct lacpdu *lacpdu, struct port *port)
* ad_churn_machine - handle port churn's state machine
* @port: the port we're looking at
*
+ * IEEE 802.1AX-2014 Figure 6-23 - Actor Churn Detection machine state diagram
+ *
+ * BEGIN || (! port_enabled)
+ * |
+ * (3) (1) v
+ * +----------------------+ ActorPort.Sync +-------------------------+
+ * | NO_ACTOR_CHURN | <--------------------- | ACTOR_CHURN_MONITOR |
+ * |======================| |=========================|
+ * | actor_churn = FALSE; | ! ActorPort.Sync | actor_churn = FALSE; |
+ * | | ---------------------> | Start actor_churn_timer |
+ * +----------------------+ (4) +-------------------------+
+ * ^ |
+ * | |
+ * | actor_churn_timer expired
+ * | |
+ * ActorPort.Sync | (2)
+ * | +--------------------+ |
+ * (3) | | ACTOR_CHURN | |
+ * | |====================| |
+ * +------------- | actor_churn = True | <-----------+
+ * | |
+ * +--------------------+
+ *
+ * Similar for the Figure 6-24 - Partner Churn Detection machine state diagram
+ *
+ * We don’t need to check actor_churn, because it can only be true when the
+ * state is ACTOR_CHURN.
*/
static void ad_churn_machine(struct port *port)
{
- if (port->sm_vars & AD_PORT_CHURNED) {
- port->sm_vars &= ~AD_PORT_CHURNED;
+ bool partner_synced = port->partner_oper.port_state & LACP_STATE_SYNCHRONIZATION;
+ bool actor_synced = port->actor_oper_port_state & LACP_STATE_SYNCHRONIZATION;
+
+ /* ---- 1. begin or port not enabled ---- */
+ if ((port->sm_vars & AD_PORT_BEGIN) || !port->is_enabled) {
port->sm_churn_actor_state = AD_CHURN_MONITOR;
port->sm_churn_partner_state = AD_CHURN_MONITOR;
port->sm_churn_actor_timer_counter =
@@ -1392,25 +1417,46 @@ static void ad_churn_machine(struct port *port)
__ad_timer_to_ticks(AD_PARTNER_CHURN_TIMER, 0);
return;
}
- if (port->sm_churn_actor_timer_counter &&
- !(--port->sm_churn_actor_timer_counter) &&
- port->sm_churn_actor_state == AD_CHURN_MONITOR) {
- if (port->actor_oper_port_state & LACP_STATE_SYNCHRONIZATION) {
- port->sm_churn_actor_state = AD_NO_CHURN;
- } else {
- port->churn_actor_count++;
- port->sm_churn_actor_state = AD_CHURN;
- }
+
+ if (port->sm_churn_actor_timer_counter)
+ port->sm_churn_actor_timer_counter--;
+
+ if (port->sm_churn_partner_timer_counter)
+ port->sm_churn_partner_timer_counter--;
+
+ /* ---- 2. timer expired, enter CHURN ---- */
+ if (port->sm_churn_actor_state == AD_CHURN_MONITOR &&
+ !port->sm_churn_actor_timer_counter) {
+ port->sm_churn_actor_state = AD_CHURN;
+ port->churn_actor_count++;
}
- if (port->sm_churn_partner_timer_counter &&
- !(--port->sm_churn_partner_timer_counter) &&
- port->sm_churn_partner_state == AD_CHURN_MONITOR) {
- if (port->partner_oper.port_state & LACP_STATE_SYNCHRONIZATION) {
- port->sm_churn_partner_state = AD_NO_CHURN;
- } else {
- port->churn_partner_count++;
- port->sm_churn_partner_state = AD_CHURN;
- }
+
+ if (port->sm_churn_partner_state == AD_CHURN_MONITOR &&
+ !port->sm_churn_partner_timer_counter) {
+ port->sm_churn_partner_state = AD_CHURN;
+ port->churn_partner_count++;
+ }
+
+ /* ---- 3. CHURN_MONITOR/CHURN + sync -> NO_CHURN ---- */
+ if ((port->sm_churn_actor_state == AD_CHURN_MONITOR ||
+ port->sm_churn_actor_state == AD_CHURN) && actor_synced)
+ port->sm_churn_actor_state = AD_NO_CHURN;
+
+ if ((port->sm_churn_partner_state == AD_CHURN_MONITOR ||
+ port->sm_churn_partner_state == AD_CHURN) && partner_synced)
+ port->sm_churn_partner_state = AD_NO_CHURN;
+
+ /* ---- 4. NO_CHURN + !sync -> MONITOR ---- */
+ if (port->sm_churn_actor_state == AD_NO_CHURN && !actor_synced) {
+ port->sm_churn_actor_state = AD_CHURN_MONITOR;
+ port->sm_churn_actor_timer_counter =
+ __ad_timer_to_ticks(AD_ACTOR_CHURN_TIMER, 0);
+ }
+
+ if (port->sm_churn_partner_state == AD_NO_CHURN && !partner_synced) {
+ port->sm_churn_partner_state = AD_CHURN_MONITOR;
+ port->sm_churn_partner_timer_counter =
+ __ad_timer_to_ticks(AD_PARTNER_CHURN_TIMER, 0);
}
}
--
2.50.1 | {
"author": "Hangbin Liu <liuhangbin@gmail.com>",
"date": "Thu, 26 Feb 2026 12:53:29 +0000",
"is_openbsd": false,
"thread_id": "aaE32DlfrX9S5KNT@fedora.mbox.gz"
} |
lkml_critique | netdev | This series fixes two issues in the bonding 802.3ad implementation
related to port state management and churn detection:
1. When disabling a port, we need to set AD_RX_PORT_DISABLED to ensure
proper state machine transitions, preventing ports from getting stuck
in AD_RX_CURRENT state.
2. The ad_churn_machine implementation is restructured to follow IEEE
802.1AX-2014 specifications correctly. The current implementation has
several issues: it doesn't transition to "none" state immediately when
synchronization is achieved, and can get stuck in churned state in
multi-aggregator scenarios.
3. Selftests are enhanced to validate both mux state machine and churn
state logic under aggregator selection and failover scenarios.
These changes ensure proper LACP state machine behavior and fix issues
where ports could remain in incorrect states during aggregator failover.
v3: re-post to net, no code change (Jakub)
v2:
* set AD_RX_PORT_DISABLED only in ad_agg_selection_logic to avoid side effect. (Paolo Abeni)
* remove actor_churn as it can only be true when the state is ACTOR_CHURN (Paolo Abeni)
* remove AD_PORT_CHURNED since we don't need it anywhere (Paolo Abeni)
* I didn't add new helper for ad_churn_machine() as it looks not help much.
* https://lore.kernel.org/netdev/20260114064921.57686-1-liuhangbin@gmail.com
v1: https://lore.kernel.org/netdev/20251124043310.34073-1-liuhangbin@gmail.com
Hangbin Liu (3):
bonding: set AD_RX_PORT_DISABLED when disabling a port
bonding: restructure ad_churn_machine
selftests: bonding: add mux and churn state testing
drivers/net/bonding/bond_3ad.c | 97 ++++++++++++++-----
.../selftests/drivers/net/bonding/Makefile | 2 +-
...nd_lacp_prio.sh => bond_lacp_ad_select.sh} | 73 ++++++++++++++
3 files changed, 146 insertions(+), 26 deletions(-)
rename tools/testing/selftests/drivers/net/bonding/{bond_lacp_prio.sh => bond_lacp_ad_select.sh} (64%)
--
2.50.1
| null | null | null | [PATCHv3 net 0/3] bonding: fix 802.3ad churn machine and port state issues | Rename the current LACP priority test to LACP ad_select testing, and
extend it to include validation of the actor state machine and churn
state logic. The updated tests verify that both the mux state machine and
the churn state machine behave correctly under aggregator selection and
failover scenarios.
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
---
.../selftests/drivers/net/bonding/Makefile | 2 +-
...nd_lacp_prio.sh => bond_lacp_ad_select.sh} | 73 +++++++++++++++++++
2 files changed, 74 insertions(+), 1 deletion(-)
rename tools/testing/selftests/drivers/net/bonding/{bond_lacp_prio.sh => bond_lacp_ad_select.sh} (64%)
diff --git a/tools/testing/selftests/drivers/net/bonding/Makefile b/tools/testing/selftests/drivers/net/bonding/Makefile
index 6c5c60adb5e8..e7bddfbf0f7a 100644
--- a/tools/testing/selftests/drivers/net/bonding/Makefile
+++ b/tools/testing/selftests/drivers/net/bonding/Makefile
@@ -7,7 +7,7 @@ TEST_PROGS := \
bond-eth-type-change.sh \
bond-lladdr-target.sh \
bond_ipsec_offload.sh \
- bond_lacp_prio.sh \
+ bond_lacp_ad_select.sh \
bond_macvlan_ipvlan.sh \
bond_options.sh \
bond_passive_lacp.sh \
diff --git a/tools/testing/selftests/drivers/net/bonding/bond_lacp_prio.sh b/tools/testing/selftests/drivers/net/bonding/bond_lacp_ad_select.sh
similarity index 64%
rename from tools/testing/selftests/drivers/net/bonding/bond_lacp_prio.sh
rename to tools/testing/selftests/drivers/net/bonding/bond_lacp_ad_select.sh
index a483d505c6a8..9f0b3de4f55c 100755
--- a/tools/testing/selftests/drivers/net/bonding/bond_lacp_prio.sh
+++ b/tools/testing/selftests/drivers/net/bonding/bond_lacp_ad_select.sh
@@ -89,6 +89,65 @@ test_agg_reselect()
RET=1
}
+is_distributing()
+{
+ ip -j -n "$c_ns" -d link show "$1" \
+ | jq -e '.[].linkinfo.info_slave_data.ad_actor_oper_port_state_str | index("distributing")' > /dev/null
+}
+
+get_churn_state()
+{
+ local slave=$1
+ # shellcheck disable=SC2016
+ ip netns exec "$c_ns" awk -v s="$slave" '
+ $0 ~ "Slave Interface: " s {found=1}
+ found && /Actor Churn State:/ { print $4; exit }
+ ' /proc/net/bonding/bond0
+}
+
+check_slave_state()
+{
+ local state=$1
+ local slave_0=$2
+ local slave_1=$3
+ local churn_state
+ RET=0
+
+ s0_agg_id=$(cmd_jq "ip -n ${c_ns} -d -j link show $slave_0" \
+ ".[].linkinfo.info_slave_data.ad_aggregator_id")
+ s1_agg_id=$(cmd_jq "ip -n ${c_ns} -d -j link show $slave_1" \
+ ".[].linkinfo.info_slave_data.ad_aggregator_id")
+ if [ "${s0_agg_id}" -ne "${s1_agg_id}" ]; then
+ log_info "$state: $slave_0 $slave_1 agg ids are different"
+ RET=1
+ fi
+
+ for s in "$slave_0" "$slave_1"; do
+ churn_state=$(get_churn_state "$s")
+ if [ "$state" = "active" ]; then
+ if ! is_distributing "$s"; then
+ log_info "$state: $s is not in distributing state"
+ RET=1
+ fi
+ if [ "$churn_state" != "none" ]; then
+ log_info "$state: $s churn state $churn_state"
+ RET=1
+ fi
+ else
+ # Backup state, should be in churned and not distributing
+ if is_distributing "$s"; then
+ log_info "$state: $s is in distributing state"
+ RET=1
+ fi
+ if [ "$churn_state" != "churned" ]; then
+ log_info "$state: $s churn state $churn_state"
+ # shellcheck disable=SC2034
+ RET=1
+ fi
+ fi
+ done
+}
+
trap cleanup_all_ns EXIT
setup_ns c_ns s_ns b_ns
setup_links
@@ -98,11 +157,25 @@ log_test "bond 802.3ad" "actor_port_prio setting"
test_agg_reselect eth0
log_test "bond 802.3ad" "actor_port_prio select"
+# sleep for a while to make sure the mux state machine has completed.
+sleep 10
+check_slave_state active eth0 eth1
+log_test "bond 802.3ad" "active state/churn checking"
+# wait for churn timer expired, need a bit longer as we restart eth1
+sleep 55
+check_slave_state backup eth2 eth3
+log_test "bond 802.3ad" "backup state/churn checking"
# Change the actor port prio and re-test
ip -n "${c_ns}" link set eth0 type bond_slave actor_port_prio 10
ip -n "${c_ns}" link set eth2 type bond_slave actor_port_prio 1000
test_agg_reselect eth2
log_test "bond 802.3ad" "actor_port_prio switch"
+sleep 10
+check_slave_state active eth2 eth3
+log_test "bond 802.3ad" "active state/churn checking"
+sleep 55
+check_slave_state backup eth0 eth1
+log_test "bond 802.3ad" "backup state/churn checking"
exit "${EXIT_STATUS}"
--
2.50.1 | {
"author": "Hangbin Liu <liuhangbin@gmail.com>",
"date": "Thu, 26 Feb 2026 12:53:30 +0000",
"is_openbsd": false,
"thread_id": "aaE32DlfrX9S5KNT@fedora.mbox.gz"
} |
lkml_critique | netdev | This series fixes two issues in the bonding 802.3ad implementation
related to port state management and churn detection:
1. When disabling a port, we need to set AD_RX_PORT_DISABLED to ensure
proper state machine transitions, preventing ports from getting stuck
in AD_RX_CURRENT state.
2. The ad_churn_machine implementation is restructured to follow IEEE
802.1AX-2014 specifications correctly. The current implementation has
several issues: it doesn't transition to "none" state immediately when
synchronization is achieved, and can get stuck in churned state in
multi-aggregator scenarios.
3. Selftests are enhanced to validate both mux state machine and churn
state logic under aggregator selection and failover scenarios.
These changes ensure proper LACP state machine behavior and fix issues
where ports could remain in incorrect states during aggregator failover.
v3: re-post to net, no code change (Jakub)
v2:
* set AD_RX_PORT_DISABLED only in ad_agg_selection_logic to avoid side effect. (Paolo Abeni)
* remove actor_churn as it can only be true when the state is ACTOR_CHURN (Paolo Abeni)
* remove AD_PORT_CHURNED since we don't need it anywhere (Paolo Abeni)
* I didn't add new helper for ad_churn_machine() as it looks not help much.
* https://lore.kernel.org/netdev/20260114064921.57686-1-liuhangbin@gmail.com
v1: https://lore.kernel.org/netdev/20251124043310.34073-1-liuhangbin@gmail.com
Hangbin Liu (3):
bonding: set AD_RX_PORT_DISABLED when disabling a port
bonding: restructure ad_churn_machine
selftests: bonding: add mux and churn state testing
drivers/net/bonding/bond_3ad.c | 97 ++++++++++++++-----
.../selftests/drivers/net/bonding/Makefile | 2 +-
...nd_lacp_prio.sh => bond_lacp_ad_select.sh} | 73 ++++++++++++++
3 files changed, 146 insertions(+), 26 deletions(-)
rename tools/testing/selftests/drivers/net/bonding/{bond_lacp_prio.sh => bond_lacp_ad_select.sh} (64%)
--
2.50.1
| null | null | null | [PATCHv3 net 0/3] bonding: fix 802.3ad churn machine and port state issues | Hangbin Liu <liuhangbin@gmail.com> wrote:
I missed this last time it was posted, but reading it now I
think the functional change looks good, but I question the usefulness of
including the 25 line ASCII art version of the state diagram.
The standard is publicly available, so a comment saying that the
state machine logic conforms to IEEE 802.1AX-2014 figures 6-23 and 6-24
should be sufficient. Anyone seriously checking the code against the
standard will need to read the relevant text, so they'll be looking it
up anyway.
-J
---
-Jay Vosburgh, jv@jvosburgh.net | {
"author": "Jay Vosburgh <jv@jvosburgh.net>",
"date": "Thu, 26 Feb 2026 16:36:46 -0800",
"is_openbsd": false,
"thread_id": "aaE32DlfrX9S5KNT@fedora.mbox.gz"
} |
lkml_critique | netdev | This series fixes two issues in the bonding 802.3ad implementation
related to port state management and churn detection:
1. When disabling a port, we need to set AD_RX_PORT_DISABLED to ensure
proper state machine transitions, preventing ports from getting stuck
in AD_RX_CURRENT state.
2. The ad_churn_machine implementation is restructured to follow IEEE
802.1AX-2014 specifications correctly. The current implementation has
several issues: it doesn't transition to "none" state immediately when
synchronization is achieved, and can get stuck in churned state in
multi-aggregator scenarios.
3. Selftests are enhanced to validate both mux state machine and churn
state logic under aggregator selection and failover scenarios.
These changes ensure proper LACP state machine behavior and fix issues
where ports could remain in incorrect states during aggregator failover.
v3: re-post to net, no code change (Jakub)
v2:
* set AD_RX_PORT_DISABLED only in ad_agg_selection_logic to avoid side effect. (Paolo Abeni)
* remove actor_churn as it can only be true when the state is ACTOR_CHURN (Paolo Abeni)
* remove AD_PORT_CHURNED since we don't need it anywhere (Paolo Abeni)
* I didn't add new helper for ad_churn_machine() as it looks not help much.
* https://lore.kernel.org/netdev/20260114064921.57686-1-liuhangbin@gmail.com
v1: https://lore.kernel.org/netdev/20251124043310.34073-1-liuhangbin@gmail.com
Hangbin Liu (3):
bonding: set AD_RX_PORT_DISABLED when disabling a port
bonding: restructure ad_churn_machine
selftests: bonding: add mux and churn state testing
drivers/net/bonding/bond_3ad.c | 97 ++++++++++++++-----
.../selftests/drivers/net/bonding/Makefile | 2 +-
...nd_lacp_prio.sh => bond_lacp_ad_select.sh} | 73 ++++++++++++++
3 files changed, 146 insertions(+), 26 deletions(-)
rename tools/testing/selftests/drivers/net/bonding/{bond_lacp_prio.sh => bond_lacp_ad_select.sh} (64%)
--
2.50.1
| null | null | null | [PATCHv3 net 0/3] bonding: fix 802.3ad churn machine and port state issues | On Thu, Feb 26, 2026 at 04:36:46PM -0800, Jay Vosburgh wrote:
I added it here to help new readers and reviewers understand the logic
quickly. If you think there’s no need to include it in the code, maybe
we can move it to the commit description?
Thanks
Hangbin | {
"author": "Hangbin Liu <liuhangbin@gmail.com>",
"date": "Fri, 27 Feb 2026 00:52:54 +0000",
"is_openbsd": false,
"thread_id": "aaE32DlfrX9S5KNT@fedora.mbox.gz"
} |
lkml_critique | netdev | This series fixes two issues in the bonding 802.3ad implementation
related to port state management and churn detection:
1. When disabling a port, we need to set AD_RX_PORT_DISABLED to ensure
proper state machine transitions, preventing ports from getting stuck
in AD_RX_CURRENT state.
2. The ad_churn_machine implementation is restructured to follow IEEE
802.1AX-2014 specifications correctly. The current implementation has
several issues: it doesn't transition to "none" state immediately when
synchronization is achieved, and can get stuck in churned state in
multi-aggregator scenarios.
3. Selftests are enhanced to validate both mux state machine and churn
state logic under aggregator selection and failover scenarios.
These changes ensure proper LACP state machine behavior and fix issues
where ports could remain in incorrect states during aggregator failover.
v3: re-post to net, no code change (Jakub)
v2:
* set AD_RX_PORT_DISABLED only in ad_agg_selection_logic to avoid side effect. (Paolo Abeni)
* remove actor_churn as it can only be true when the state is ACTOR_CHURN (Paolo Abeni)
* remove AD_PORT_CHURNED since we don't need it anywhere (Paolo Abeni)
* I didn't add new helper for ad_churn_machine() as it looks not help much.
* https://lore.kernel.org/netdev/20260114064921.57686-1-liuhangbin@gmail.com
v1: https://lore.kernel.org/netdev/20251124043310.34073-1-liuhangbin@gmail.com
Hangbin Liu (3):
bonding: set AD_RX_PORT_DISABLED when disabling a port
bonding: restructure ad_churn_machine
selftests: bonding: add mux and churn state testing
drivers/net/bonding/bond_3ad.c | 97 ++++++++++++++-----
.../selftests/drivers/net/bonding/Makefile | 2 +-
...nd_lacp_prio.sh => bond_lacp_ad_select.sh} | 73 ++++++++++++++
3 files changed, 146 insertions(+), 26 deletions(-)
rename tools/testing/selftests/drivers/net/bonding/{bond_lacp_prio.sh => bond_lacp_ad_select.sh} (64%)
--
2.50.1
| null | null | null | [PATCHv3 net 0/3] bonding: fix 802.3ad churn machine and port state issues | Hangbin Liu <liuhangbin@gmail.com> wrote:
I'm not sure I'm seeing the problem here, is there an actual
misbehavior being fixed here? The port is receiving LACPDUs, and from
the receive state machine point of view (Figure 6-18) there's no issue.
The "port_enabled" variable (6.4.7) also informs the state machine
behavior, but that's not the same as what's changed by bonding's
__disable_port function.
Where I'm going with this is that, when multiple aggregator
support was originally implemented, the theory was to keep aggregators
other than the active agg in a state such that they could be put into
service immediately, without having to do LACPDU exchanges in order to
transition into the appropriate state. A hot standby, basically,
analogous to an active-backup mode backup interface with link state up.
I haven't tested this in some time, though, so my question is
whether this change affects the failover time when an active aggregator
is de-selected in favor of another aggregator. By "failover time," I
mean how long transmission and/or reception are interrupted when
changing from one aggregator to another. I presume that if aggregator
failover ater this change requires LACPDU exchanges, etc, it will take
longer to fail over.
-J
---
-Jay Vosburgh, jv@jvosburgh.net | {
"author": "Jay Vosburgh <jv@jvosburgh.net>",
"date": "Thu, 26 Feb 2026 17:16:55 -0800",
"is_openbsd": false,
"thread_id": "aaE32DlfrX9S5KNT@fedora.mbox.gz"
} |
lkml_critique | netdev | This series fixes two issues in the bonding 802.3ad implementation
related to port state management and churn detection:
1. When disabling a port, we need to set AD_RX_PORT_DISABLED to ensure
proper state machine transitions, preventing ports from getting stuck
in AD_RX_CURRENT state.
2. The ad_churn_machine implementation is restructured to follow IEEE
802.1AX-2014 specifications correctly. The current implementation has
several issues: it doesn't transition to "none" state immediately when
synchronization is achieved, and can get stuck in churned state in
multi-aggregator scenarios.
3. Selftests are enhanced to validate both mux state machine and churn
state logic under aggregator selection and failover scenarios.
These changes ensure proper LACP state machine behavior and fix issues
where ports could remain in incorrect states during aggregator failover.
v3: re-post to net, no code change (Jakub)
v2:
* set AD_RX_PORT_DISABLED only in ad_agg_selection_logic to avoid side effect. (Paolo Abeni)
* remove actor_churn as it can only be true when the state is ACTOR_CHURN (Paolo Abeni)
* remove AD_PORT_CHURNED since we don't need it anywhere (Paolo Abeni)
* I didn't add new helper for ad_churn_machine() as it looks not help much.
* https://lore.kernel.org/netdev/20260114064921.57686-1-liuhangbin@gmail.com
v1: https://lore.kernel.org/netdev/20251124043310.34073-1-liuhangbin@gmail.com
Hangbin Liu (3):
bonding: set AD_RX_PORT_DISABLED when disabling a port
bonding: restructure ad_churn_machine
selftests: bonding: add mux and churn state testing
drivers/net/bonding/bond_3ad.c | 97 ++++++++++++++-----
.../selftests/drivers/net/bonding/Makefile | 2 +-
...nd_lacp_prio.sh => bond_lacp_ad_select.sh} | 73 ++++++++++++++
3 files changed, 146 insertions(+), 26 deletions(-)
rename tools/testing/selftests/drivers/net/bonding/{bond_lacp_prio.sh => bond_lacp_ad_select.sh} (64%)
--
2.50.1
| null | null | null | [PATCHv3 net 0/3] bonding: fix 802.3ad churn machine and port state issues | Hangbin Liu <liuhangbin@gmail.com> wrote:
Personally, I'd leave it out, and just put a reference.
The churn machine isn't that critical, even the standards
committee didn't like it and removed it from the 2020 edition of
802.1AX.
Still, whatever we provide should work in accordance with the
2014 standard we're nominally conforming to, so functionally the patch
looks fine to me.
-J
---
-Jay Vosburgh, jv@jvosburgh.net | {
"author": "Jay Vosburgh <jv@jvosburgh.net>",
"date": "Thu, 26 Feb 2026 17:42:13 -0800",
"is_openbsd": false,
"thread_id": "aaE32DlfrX9S5KNT@fedora.mbox.gz"
} |
lkml_critique | netdev | This series fixes two issues in the bonding 802.3ad implementation
related to port state management and churn detection:
1. When disabling a port, we need to set AD_RX_PORT_DISABLED to ensure
proper state machine transitions, preventing ports from getting stuck
in AD_RX_CURRENT state.
2. The ad_churn_machine implementation is restructured to follow IEEE
802.1AX-2014 specifications correctly. The current implementation has
several issues: it doesn't transition to "none" state immediately when
synchronization is achieved, and can get stuck in churned state in
multi-aggregator scenarios.
3. Selftests are enhanced to validate both mux state machine and churn
state logic under aggregator selection and failover scenarios.
These changes ensure proper LACP state machine behavior and fix issues
where ports could remain in incorrect states during aggregator failover.
v3: re-post to net, no code change (Jakub)
v2:
* set AD_RX_PORT_DISABLED only in ad_agg_selection_logic to avoid side effect. (Paolo Abeni)
* remove actor_churn as it can only be true when the state is ACTOR_CHURN (Paolo Abeni)
* remove AD_PORT_CHURNED since we don't need it anywhere (Paolo Abeni)
* I didn't add new helper for ad_churn_machine() as it looks not help much.
* https://lore.kernel.org/netdev/20260114064921.57686-1-liuhangbin@gmail.com
v1: https://lore.kernel.org/netdev/20251124043310.34073-1-liuhangbin@gmail.com
Hangbin Liu (3):
bonding: set AD_RX_PORT_DISABLED when disabling a port
bonding: restructure ad_churn_machine
selftests: bonding: add mux and churn state testing
drivers/net/bonding/bond_3ad.c | 97 ++++++++++++++-----
.../selftests/drivers/net/bonding/Makefile | 2 +-
...nd_lacp_prio.sh => bond_lacp_ad_select.sh} | 73 ++++++++++++++
3 files changed, 146 insertions(+), 26 deletions(-)
rename tools/testing/selftests/drivers/net/bonding/{bond_lacp_prio.sh => bond_lacp_ad_select.sh} (64%)
--
2.50.1
| null | null | null | [PATCHv3 net 0/3] bonding: fix 802.3ad churn machine and port state issues | On Thu, Feb 26, 2026 at 05:16:55PM -0800, Jay Vosburgh wrote:
Yes, the reason I do it here is we select another aggregator and called
__disable_port() for the old one. If we don't update sm_rx_state, the port
will be keep in collecting/distributing state, and the partner will also
keep in the c/d state.
Here we entered a logical paradox, on one hand we want to disable the port,
on the other hand we keep the port in collecting/distributing state.
This sounds good. But without LACPDU exchange, the hot standby actor and
partner should be in collecting/distributing state. What should we do when
partner start send packets to us?
I haven't tested it yet. I think the failover time should be in 1 second.
Let me do some testing today.
Thanks
Hangbin | {
"author": "Hangbin Liu <liuhangbin@gmail.com>",
"date": "Fri, 27 Feb 2026 02:31:05 +0000",
"is_openbsd": false,
"thread_id": "aaE32DlfrX9S5KNT@fedora.mbox.gz"
} |
lkml_critique | netdev | This series fixes two issues in the bonding 802.3ad implementation
related to port state management and churn detection:
1. When disabling a port, we need to set AD_RX_PORT_DISABLED to ensure
proper state machine transitions, preventing ports from getting stuck
in AD_RX_CURRENT state.
2. The ad_churn_machine implementation is restructured to follow IEEE
802.1AX-2014 specifications correctly. The current implementation has
several issues: it doesn't transition to "none" state immediately when
synchronization is achieved, and can get stuck in churned state in
multi-aggregator scenarios.
3. Selftests are enhanced to validate both mux state machine and churn
state logic under aggregator selection and failover scenarios.
These changes ensure proper LACP state machine behavior and fix issues
where ports could remain in incorrect states during aggregator failover.
v3: re-post to net, no code change (Jakub)
v2:
* set AD_RX_PORT_DISABLED only in ad_agg_selection_logic to avoid side effect. (Paolo Abeni)
* remove actor_churn as it can only be true when the state is ACTOR_CHURN (Paolo Abeni)
* remove AD_PORT_CHURNED since we don't need it anywhere (Paolo Abeni)
* I didn't add new helper for ad_churn_machine() as it looks not help much.
* https://lore.kernel.org/netdev/20260114064921.57686-1-liuhangbin@gmail.com
v1: https://lore.kernel.org/netdev/20251124043310.34073-1-liuhangbin@gmail.com
Hangbin Liu (3):
bonding: set AD_RX_PORT_DISABLED when disabling a port
bonding: restructure ad_churn_machine
selftests: bonding: add mux and churn state testing
drivers/net/bonding/bond_3ad.c | 97 ++++++++++++++-----
.../selftests/drivers/net/bonding/Makefile | 2 +-
...nd_lacp_prio.sh => bond_lacp_ad_select.sh} | 73 ++++++++++++++
3 files changed, 146 insertions(+), 26 deletions(-)
rename tools/testing/selftests/drivers/net/bonding/{bond_lacp_prio.sh => bond_lacp_ad_select.sh} (64%)
--
2.50.1
| null | null | null | [PATCHv3 net 0/3] bonding: fix 802.3ad churn machine and port state issues | On Thu, Feb 26, 2026 at 05:42:13PM -0800, Jay Vosburgh wrote:
Oh, I didn't notice this. I posted a patch[1] to net-next to export the
churn state via netlink recently. Should I revert this patch?
Got it, I will remove it.
[1] http://lore.kernel.org/netdev/20260224020215.6012-1-liuhangbin@gmail.com
Thanks
Hangbin | {
"author": "Hangbin Liu <liuhangbin@gmail.com>",
"date": "Fri, 27 Feb 2026 02:36:31 +0000",
"is_openbsd": false,
"thread_id": "aaE32DlfrX9S5KNT@fedora.mbox.gz"
} |
lkml_critique | netdev | This series fixes two issues in the bonding 802.3ad implementation
related to port state management and churn detection:
1. When disabling a port, we need to set AD_RX_PORT_DISABLED to ensure
proper state machine transitions, preventing ports from getting stuck
in AD_RX_CURRENT state.
2. The ad_churn_machine implementation is restructured to follow IEEE
802.1AX-2014 specifications correctly. The current implementation has
several issues: it doesn't transition to "none" state immediately when
synchronization is achieved, and can get stuck in churned state in
multi-aggregator scenarios.
3. Selftests are enhanced to validate both mux state machine and churn
state logic under aggregator selection and failover scenarios.
These changes ensure proper LACP state machine behavior and fix issues
where ports could remain in incorrect states during aggregator failover.
v3: re-post to net, no code change (Jakub)
v2:
* set AD_RX_PORT_DISABLED only in ad_agg_selection_logic to avoid side effect. (Paolo Abeni)
* remove actor_churn as it can only be true when the state is ACTOR_CHURN (Paolo Abeni)
* remove AD_PORT_CHURNED since we don't need it anywhere (Paolo Abeni)
* I didn't add new helper for ad_churn_machine() as it looks not help much.
* https://lore.kernel.org/netdev/20260114064921.57686-1-liuhangbin@gmail.com
v1: https://lore.kernel.org/netdev/20251124043310.34073-1-liuhangbin@gmail.com
Hangbin Liu (3):
bonding: set AD_RX_PORT_DISABLED when disabling a port
bonding: restructure ad_churn_machine
selftests: bonding: add mux and churn state testing
drivers/net/bonding/bond_3ad.c | 97 ++++++++++++++-----
.../selftests/drivers/net/bonding/Makefile | 2 +-
...nd_lacp_prio.sh => bond_lacp_ad_select.sh} | 73 ++++++++++++++
3 files changed, 146 insertions(+), 26 deletions(-)
rename tools/testing/selftests/drivers/net/bonding/{bond_lacp_prio.sh => bond_lacp_ad_select.sh} (64%)
--
2.50.1
| null | null | null | [PATCHv3 net 0/3] bonding: fix 802.3ad churn machine and port state issues | On Fri, Feb 27, 2026 at 02:31:05AM +0000, Hangbin Liu wrote:
I did a test and the failover takes about 200ms with the environment in patch 03.
Here is the full log
Code: the timer count starts after the old active port link down.
```
ip -n "${c_ns}" link set eth1 down
date +'%F %T.%3N'
ip -n ${c_ns} -d link show eth2
while ! ip -n ${c_ns} -d link show eth2 | grep -q distributing; do
sleep 0.01
done
date +'%F %T.%3N'
ip -n ${c_ns} -d link show eth2
```
Log:
2026-02-26 22:59:54.334 <-- The time when eth1 link down
5: eth2@if3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc noqueue master bond0 state UP mode DEFAULT group default qlen 1000
link/ether 12:40:54:81:d3:80 brd ff:ff:ff:ff:ff:ff link-netns b_ns-PKIXVg promiscuity 0 allmulti 0 minmtu 68 maxmtu 65535
veth
bond_slave state BACKUP mii_status UP link_failure_count 0 perm_hwaddr 26:10:46:58:22:e4 queue_id 0 prio 0 ad_aggregator_id 2 ad_actor_oper_port_state 7 ad_actor_oper_port_state_str <active,short_timeout,aggregating> ad_partner_oper_port_state 15 ad_partner_oper_port_state_str <active,short_timeout,aggregating,in_sync> actor_port_prio 1000 addrgenmode eui64 numtxqueues 4 numrxqueues 4 gso_max_size 65536 gso_max_segs 65535 tso_max_size 524280 tso_max_segs 65535 gro_max_size 65536 gso_ipv4_max_size 65536 gro_ipv4_max_size 65536
2026-02-26 22:59:54.529 <--- The time when eth2 enter collecting,distributing state
5: eth2@if3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc noqueue master bond0 state UP mode DEFAULT group default qlen 1000
link/ether 12:40:54:81:d3:80 brd ff:ff:ff:ff:ff:ff link-netns b_ns-PKIXVg promiscuity 0 allmulti 0 minmtu 68 maxmtu 65535
veth
bond_slave state ACTIVE mii_status UP link_failure_count 0 perm_hwaddr 26:10:46:58:22:e4 queue_id 0 prio 0 ad_aggregator_id 2 ad_actor_oper_port_state 63 ad_actor_oper_port_state_str <active,short_timeout,aggregating,in_sync,collecting,distributing> ad_partner_oper_port_state 63 ad_partner_oper_port_state_str <active,short_timeout,aggregating,in_sync,collecting,distributing> actor_port_prio 1000 addrgenmode eui64 numtxqueues 4 numrxqueues 4 gso_max_size 65536 gso_max_segs 65535 tso_max_size 524280 tso_max_segs 65535 gro_max_size 65536 gso_ipv4_max_size 65536 gro_ipv4_max_size 65536
Thanks
Hangbin | {
"author": "Hangbin Liu <liuhangbin@gmail.com>",
"date": "Fri, 27 Feb 2026 04:14:45 +0000",
"is_openbsd": false,
"thread_id": "aaE32DlfrX9S5KNT@fedora.mbox.gz"
} |
lkml_critique | netdev | This series fixes two issues in the bonding 802.3ad implementation
related to port state management and churn detection:
1. When disabling a port, we need to set AD_RX_PORT_DISABLED to ensure
proper state machine transitions, preventing ports from getting stuck
in AD_RX_CURRENT state.
2. The ad_churn_machine implementation is restructured to follow IEEE
802.1AX-2014 specifications correctly. The current implementation has
several issues: it doesn't transition to "none" state immediately when
synchronization is achieved, and can get stuck in churned state in
multi-aggregator scenarios.
3. Selftests are enhanced to validate both mux state machine and churn
state logic under aggregator selection and failover scenarios.
These changes ensure proper LACP state machine behavior and fix issues
where ports could remain in incorrect states during aggregator failover.
v3: re-post to net, no code change (Jakub)
v2:
* set AD_RX_PORT_DISABLED only in ad_agg_selection_logic to avoid side effect. (Paolo Abeni)
* remove actor_churn as it can only be true when the state is ACTOR_CHURN (Paolo Abeni)
* remove AD_PORT_CHURNED since we don't need it anywhere (Paolo Abeni)
* I didn't add new helper for ad_churn_machine() as it looks not help much.
* https://lore.kernel.org/netdev/20260114064921.57686-1-liuhangbin@gmail.com
v1: https://lore.kernel.org/netdev/20251124043310.34073-1-liuhangbin@gmail.com
Hangbin Liu (3):
bonding: set AD_RX_PORT_DISABLED when disabling a port
bonding: restructure ad_churn_machine
selftests: bonding: add mux and churn state testing
drivers/net/bonding/bond_3ad.c | 97 ++++++++++++++-----
.../selftests/drivers/net/bonding/Makefile | 2 +-
...nd_lacp_prio.sh => bond_lacp_ad_select.sh} | 73 ++++++++++++++
3 files changed, 146 insertions(+), 26 deletions(-)
rename tools/testing/selftests/drivers/net/bonding/{bond_lacp_prio.sh => bond_lacp_ad_select.sh} (64%)
--
2.50.1
| null | null | null | [PATCHv3 net 0/3] bonding: fix 802.3ad churn machine and port state issues | Hangbin Liu <liuhangbin@gmail.com> wrote:
"disable" the port here really means from bonding's perspective,
so, generally equivalent to the backup interface of an active-backup
mode bond.
Such a backup interface is typically carrier up and able to send
or receive packets. The peer generally won't send packets to the backup
interface, however, as no traffic is sent from the backup, and the MAC
for the bond uses a different interface, so no forwarding entries will
direct to the backup interface.
There are a couple of special cases, like LLDP, that are handled
as an exception, but in general, if a peer does send packets to the
backup interface (due to a switch flood, for example), they're dropped.
Did you mean "should not be in c/d state" above? I.e., without
LACPDU exchage, ... not in c/d state?
Regardless, as above, the situation is generally equivalent to a
backup interface in active-backup mode: incoming traffic that isn't a
special case is dropped. Normal traffic (bearing the bond source MAC)
isn't sent, as that would update the peer's forwarding table.
Nothing in the standard prohibits us from having multiple
aggregators in c/d state simultaneously. A configuration with two
separate bonds, each with interfaces successfully aggregated together
with their respective peers, wherein those two bonds are placed into a
third bond in active-backup mode is essentially the same thing as what
we're discussing.
-J
---
-Jay Vosburgh, jv@jvosburgh.net | {
"author": "Jay Vosburgh <jv@jvosburgh.net>",
"date": "Thu, 26 Feb 2026 20:42:27 -0800",
"is_openbsd": false,
"thread_id": "aaE32DlfrX9S5KNT@fedora.mbox.gz"
} |
lkml_critique | netdev | This series fixes two issues in the bonding 802.3ad implementation
related to port state management and churn detection:
1. When disabling a port, we need to set AD_RX_PORT_DISABLED to ensure
proper state machine transitions, preventing ports from getting stuck
in AD_RX_CURRENT state.
2. The ad_churn_machine implementation is restructured to follow IEEE
802.1AX-2014 specifications correctly. The current implementation has
several issues: it doesn't transition to "none" state immediately when
synchronization is achieved, and can get stuck in churned state in
multi-aggregator scenarios.
3. Selftests are enhanced to validate both mux state machine and churn
state logic under aggregator selection and failover scenarios.
These changes ensure proper LACP state machine behavior and fix issues
where ports could remain in incorrect states during aggregator failover.
v3: re-post to net, no code change (Jakub)
v2:
* set AD_RX_PORT_DISABLED only in ad_agg_selection_logic to avoid side effect. (Paolo Abeni)
* remove actor_churn as it can only be true when the state is ACTOR_CHURN (Paolo Abeni)
* remove AD_PORT_CHURNED since we don't need it anywhere (Paolo Abeni)
* I didn't add new helper for ad_churn_machine() as it looks not help much.
* https://lore.kernel.org/netdev/20260114064921.57686-1-liuhangbin@gmail.com
v1: https://lore.kernel.org/netdev/20251124043310.34073-1-liuhangbin@gmail.com
Hangbin Liu (3):
bonding: set AD_RX_PORT_DISABLED when disabling a port
bonding: restructure ad_churn_machine
selftests: bonding: add mux and churn state testing
drivers/net/bonding/bond_3ad.c | 97 ++++++++++++++-----
.../selftests/drivers/net/bonding/Makefile | 2 +-
...nd_lacp_prio.sh => bond_lacp_ad_select.sh} | 73 ++++++++++++++
3 files changed, 146 insertions(+), 26 deletions(-)
rename tools/testing/selftests/drivers/net/bonding/{bond_lacp_prio.sh => bond_lacp_ad_select.sh} (64%)
--
2.50.1
| null | null | null | [PATCHv3 net 0/3] bonding: fix 802.3ad churn machine and port state issues | On Thu, Feb 26, 2026 at 08:42:27PM -0800, Jay Vosburgh wrote:
Oh, got it.
OK, this makes sense to me.
^^ I mean with LACPDU exchange..
In theory this looks good. But in fact, when we do failover and set the
previous active port to disabled via
- __disable_port(port)
- slave->rx_disabled = 1
This will stop the failover port back to c/d state. For example, in my
testing (see details in patch 03), we have 4 ports, eth0, eth1, eth2, eth3.
eth0 and eth1 are agg1, eth2 and eth3 are agg2. If we do failover on eth1,
when eth1 come up, the final state will be:
3: eth0@if3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc noqueue master bond0 state UP mode DEFAULT group default qlen 1000
bond_slave state BACKUP ad_aggregator_id 1 ad_actor_oper_port_state_str <active,short_timeout,aggregating,in_sync,collecting,distributing> ad_partner_oper_port_state_str <active,short_timeout,aggregating,in_sync,collecting,distributing> actor_port_prio 10
4: eth1@if4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc noqueue master bond0 state UP mode DEFAULT group default qlen 1000
bond_slave state BACKUP ad_aggregator_id 1 ad_actor_oper_port_state_str <active,short_timeout,aggregating> ad_partner_oper_port_state_str <active,short_timeout,aggregating,in_sync> actor_port_prio 255
5: eth2@if3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc noqueue master bond0 state UP mode DEFAULT group default qlen 1000
bond_slave state ACTIVE ad_aggregator_id 2 ad_actor_oper_port_state_str <active,short_timeout,aggregating,in_sync,collecting,distributing> ad_partner_oper_port_state_str <active,short_timeout,aggregating,in_sync,collecting,distributing> actor_port_prio 1000
6: eth3@if4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc noqueue master bond0 state UP mode DEFAULT group default qlen 1000
bond_slave state ACTIVE ad_aggregator_id 2 ad_actor_oper_port_state_str <active,short_timeout,aggregating,in_sync,collecting,distributing> ad_partner_oper_port_state_str <active,short_timeout,aggregating,in_sync,collecting,distributing> actor_port_prio 255
7: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
bond mode 802.3ad actor_port_prio ad_aggregator 2
So you can see the eth0 state is c/d, while eth1 state is active, aggregating.
Do you think it's a correct state?
Thanks
Hangbin | {
"author": "Hangbin Liu <liuhangbin@gmail.com>",
"date": "Fri, 27 Feb 2026 06:21:12 +0000",
"is_openbsd": false,
"thread_id": "aaE32DlfrX9S5KNT@fedora.mbox.gz"
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.