| source | subject | code | critique | metadata |
|---|---|---|---|---|
lkml
|
[PATCH bpf-next v3 00/17] mm: BPF OOM
|
This patchset adds the ability to customize out-of-memory
handling using BPF.
It focuses on two parts:
1) OOM handling policy,
2) PSI-based OOM invocation.
The idea of using BPF to customize OOM handling is not new, but
unlike the previous proposal [1], which augmented the existing task
ranking policy, this one tries to be as generic as possible and
leverage the full power of modern BPF.
It provides a generic interface which is called before the existing OOM
killer code and allows implementing any policy, e.g. picking a victim
task or memory cgroup or potentially even releasing memory in other
ways, e.g. deleting tmpfs files (the last one might require some
additional but relatively simple changes).
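To make the interface concrete, a policy built on it might look roughly like the sketch below. This is a hypothetical illustration only: the struct_ops name, callback signature, and the kfunc signatures (`bpf_task_is_oom_victim()`, `bpf_oom_kill_process()`) are assumptions inferred from the patch titles in this series, not the final in-tree API.

```c
/* Hypothetical sketch of a bpf_oom policy. All type and kfunc
 * signatures here are assumed from the patch titles in this series;
 * the real API may differ. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

/* Assumed kfunc declarations (names come from this series). */
extern bool bpf_task_is_oom_victim(struct task_struct *p) __ksym;
extern int bpf_oom_kill_process(struct oom_control *oc,
				struct task_struct *p) __ksym;

SEC("struct_ops/handle_out_of_memory")
int BPF_PROG(handle_out_of_memory, struct oom_control *oc)
{
	/* A real policy would select a victim here, e.g. by walking
	 * memcgs via the memcontrol kfuncs mentioned in the v2 notes. */
	if (oc->chosen && !bpf_task_is_oom_victim(oc->chosen))
		return bpf_oom_kill_process(oc, oc->chosen);

	/* Returning 0 defers to the regular in-kernel OOM killer. */
	return 0;
}

SEC(".struct_ops.link")
struct bpf_oom_ops oom_policy = {
	.handle_out_of_memory = (void *)handle_out_of_memory,
};
```

As the cover letter notes, the handler runs before the existing OOM killer code, so falling through (returning 0) preserves today's behavior as the last-resort path.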
The past attempt to implement a memory-cgroup-aware policy [2] showed
that there are multiple opinions on what the best policy is. As it's
highly workload-dependent and specific to a concrete way of organizing
workloads, the structure of the cgroup tree, etc., a customizable
bpf-based implementation is preferable over an in-kernel implementation
with a dozen sysctls.
The second part is related to the fundamental question on when to
declare the OOM event. It's a trade-off between the risk of
unnecessary OOM kills and associated work losses and the risk of
infinite thrashing and effective soft lockups. In the last few years
several PSI-based userspace solutions were developed (e.g. OOMd [3] or
systemd-OOMd [4]). The common idea was to use userspace daemons to
implement custom OOM logic as well as rely on PSI monitoring to avoid
stalls. In this scenario the userspace daemon was supposed to handle
the majority of OOMs, while the in-kernel OOM killer worked as the
last-resort measure to guarantee that the system would never deadlock
on memory. But this approach creates additional infrastructure
churn: a userspace OOM daemon is a separate entity which needs to be
deployed, updated, and monitored, and a completely different pipeline
needs to be built to monitor both types of OOM events and collect the
associated logs. A userspace daemon is also more restricted in terms of
what data is available to it, and implementing a daemon which can work
reliably under heavy memory pressure is tricky.
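For context, the userspace-daemon approach described above is built on the kernel's PSI trigger ABI (write a threshold to `/proc/pressure/memory`, then poll the fd for `POLLPRI`). A minimal sketch, requiring a kernel with `CONFIG_PSI` (the 150ms/1s threshold is an arbitrary example value):

```c
/* Minimal userspace PSI monitor: register a memory-pressure trigger
 * and wait for events, as an OOM daemon like oomd conceptually does.
 * Requires CONFIG_PSI; threshold values are illustrative only. */
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* "some" stall of 150ms within a 1s window triggers an event. */
	const char trig[] = "some 150000 1000000";
	int fd = open("/proc/pressure/memory", O_RDWR | O_NONBLOCK);

	/* The trigger string must be written including its nul byte. */
	if (fd < 0 || write(fd, trig, strlen(trig) + 1) < 0) {
		perror("psi trigger");
		return 1;
	}

	struct pollfd pfd = { .fd = fd, .events = POLLPRI };
	while (poll(&pfd, 1, -1) > 0) {
		if (pfd.revents & POLLERR)
			break;	/* monitor fd was torn down */
		if (pfd.revents & POLLPRI)
			printf("memory pressure event\n"); /* daemon acts here */
	}
	close(fd);
	return 0;
}
```

The series moves this event-handling step into the kernel via a tracepoint in psi_avgs_work(), removing the need for such a daemon in the common case.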
This patchset includes the code, tests and many ideas from the patchset
of JP Kobryn, which implemented bpf kfuncs to provide a faster method
to access memcg data [5].
[1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/
[2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/
[3]: https://github.com/facebookincubator/oomd
[4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html
[5]: https://lkml.org/lkml/2025/10/15/1554
---
v3:
1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.)
2) Updated bpf_oom struct ops:
- removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.)
- removed handle_cgroup_offline callback.
3) Updated kfuncs:
- bpf_out_of_memory() dropped constraint_text argument (by Michal H.)
- bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN.
4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.)
v2:
1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg.
(by Alexei Starovoitov)
2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau,
Andrii Nakryiko and others)
3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn)
4) Many small-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan,
Andrii Nakryiko and Kumar Kartikeya Dwivedi)
5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock
(suggested by Kumar Kartikeya Dwivedi)
6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi)
7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom
v1:
1) Both OOM and PSI parts are now implemented using bpf struct ops,
providing a path for future extensions (suggested by Kumar Kartikeya Dwivedi,
Song Liu and Matt Bobrowski)
2) It's possible to create PSI triggers from BPF, no need for an additional
userspace agent. (suggested by Suren Baghdasaryan)
Also there is now a callback for the cgroup release event.
3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko)
4) Added bpf_task_is_oom_victim (suggested by Michal Hocko)
5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan)
RFC:
https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/
JP Kobryn (1):
bpf: selftests: add config for psi
Roman Gushchin (16):
bpf: move bpf_struct_ops_link into bpf.h
bpf: allow attaching struct_ops to cgroups
libbpf: fix return value on memory allocation failure
libbpf: introduce bpf_map__attach_struct_ops_opts()
bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL
mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG
mm: introduce BPF OOM struct ops
mm: introduce bpf_oom_kill_process() bpf kfunc
mm: introduce bpf_out_of_memory() BPF kfunc
mm: introduce bpf_task_is_oom_victim() kfunc
bpf: selftests: introduce read_cgroup_file() helper
bpf: selftests: BPF OOM struct ops test
sched: psi: add a trace point to psi_avgs_work()
sched: psi: add cgroup_id field to psi_group structure
bpf: allow calling bpf_out_of_memory() from a PSI tracepoint
bpf: selftests: PSI struct ops test
MAINTAINERS | 2 +
include/linux/bpf-cgroup-defs.h | 6 +
include/linux/bpf-cgroup.h | 16 ++
include/linux/bpf.h | 10 +
include/linux/bpf_oom.h | 46 ++++
include/linux/memcontrol.h | 4 +-
include/linux/oom.h | 13 +
include/linux/psi_types.h | 4 +
include/trace/events/psi.h | 27 ++
include/uapi/linux/bpf.h | 3 +
kernel/bpf/bpf_struct_ops.c | 77 +++++-
kernel/bpf/cgroup.c | 46 ++++
kernel/bpf/verifier.c | 5 +
kernel/sched/psi.c | 7 +
mm/Makefile | 2 +-
mm/bpf_oom.c | 192 +++++++++++++
mm/memcontrol.c | 2 -
mm/oom_kill.c | 202 ++++++++++++++
tools/include/uapi/linux/bpf.h | 1 +
tools/lib/bpf/libbpf.c | 22 +-
tools/lib/bpf/libbpf.h | 14 +
tools/lib/bpf/libbpf.map | 1 +
tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++
tools/testing/selftests/bpf/cgroup_helpers.h | 3 +
tools/testing/selftests/bpf/config | 1 +
.../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++
.../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++
tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++
tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++
29 files changed, 1412 insertions(+), 21 deletions(-)
create mode 100644 include/linux/bpf_oom.h
create mode 100644 include/trace/events/psi.h
create mode 100644 mm/bpf_oom.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c
create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c
create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c
--
2.52.0
|
Hi Roman,
On Mon, Jan 26, 2026 at 6:50 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
[snip]
I was worried about concurrency with cgroup ops until I saw
cgroup_bpf_detach_struct_ops() takes cgroup_lock() internally (since
you take it inline sometimes below I falsely assumed it wasn't
present). In any case, I'm wondering why you need to pass in the
cgroup pointer to cgroup_bpf_detach_struct_ops() at all, rather than
just the link?
We have to be careful at this point. cgroup release could now occur
concurrently which would clear link->cgroup. Maybe worth a comment
here since this is a bit subtle.
|
{
"author": "Josh Don <joshdon@google.com>",
"date": "Tue, 27 Jan 2026 19:10:35 -0800",
"thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH bpf-next v3 00/17] mm: BPF OOM
|
|
Thanks Roman!
On Mon, Jan 26, 2026 at 6:51 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
If bpf claims to have freed memory but didn't actually do so, that
seems like something potentially worth alerting to. Perhaps something
to add to the oom header output?
|
{
"author": "Josh Don <joshdon@google.com>",
"date": "Tue, 27 Jan 2026 19:26:57 -0800",
"thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH bpf-next v3 00/17] mm: BPF OOM
|
|
On Tue 27-01-26 21:12:56, Roman Gushchin wrote:
Could you explain this a bit more. This must be some BPF magic because
they are getting a standard pointer to oom_control.
Yes, something like OOM_BACKOFF, OOM_PROCESSED, OOM_FAILED.
Counters usually do not work very well for async operations. In this
case there is oom_repaer and/or task exit to finish the oom operation.
The former is bound and guaranteed to make a forward progress but there
is no time frame to assume when that happens as it depends on how many
tasks might be queued (usually a single one but this is not something to
rely on because of concurrent ooms in memcgs and also multiple tasks
could be killed at the same time).
Another complication is that there are multiple levels of OOM to track
(global, NUMA, memcg) so any watchdog would have to be aware of that as
well. I am wondering whether we really need to be so careful with
handlers. It is not like you would allow any random oom handler to be
loaded, right? Would it make sense to start without this protection and
converge to something as we see how this evolves? Maybe this will raise
the bar for oom handlers as the price for bugs is going to be really
high.
Cool!
--
Michal Hocko
SUSE Labs
|
{
"author": "Michal Hocko <mhocko@suse.com>",
"date": "Wed, 28 Jan 2026 09:00:45 +0100",
"thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH bpf-next v3 00/17] mm: BPF OOM
|
|
On Tue 27-01-26 21:01:48, Roman Gushchin wrote:
Sure. Essentially an expected structure of the handler: what API it
can use, what it has to do, and what it must not do. Essentially a
single place you can read to get enough information to start developing
your oom handler.
Examples are really great but having a central place to document
available API is much more helpful IMHO. The generally scattered nature
of BPF hooks makes it really hard to even know what is available to oom
handlers to use.
It certainly makes sense to have trusted implementation of a commonly
requested oom policy that we couldn't implement due to specific nature
that doesn't really apply to many users. And have that in the tree. I am
not thrilled about auto-loading because this could be easily done by a
simple tooling.
--
Michal Hocko
SUSE Labs
|
{
"author": "Michal Hocko <mhocko@suse.com>",
"date": "Wed, 28 Jan 2026 09:06:14 +0100",
"thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH bpf-next v3 00/17] mm: BPF OOM
|
|
One additional point I forgot to mention previously:
On Mon 26-01-26 18:44:10, Roman Gushchin wrote:
Should this check for is_sysrq_oom and always use the in-kernel OOM
handling for SysRq-triggered OOMs as a failsafe measure?
--
Michal Hocko
SUSE Labs
|
{
"author": "Michal Hocko <mhocko@suse.com>",
"date": "Wed, 28 Jan 2026 12:19:42 +0100",
"thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH bpf-next v3 00/17] mm: BPF OOM
|
|
On Mon, Jan 26, 2026 at 06:44:05PM -0800, Roman Gushchin wrote:
Assigning 0 to cgrp_id would technically be incorrect, right? Like,
cgroup_id() for !CONFIG_CGROUPS defaults to returning 1, and for
CONFIG_CGROUPS the ID allocation is done via the idr_alloc_cyclic()
API using a range between 1 and INT_MAX. Perhaps here it serves as a
valid sentinel value? Is that the rationale?
In general, shouldn't all the cgroup related logic within this source
file be protected by a CONFIG_CGROUPS ifdef? For example, both
cgroup_get_from_fd() and cgroup_put() lack stubs when building with
!CONFIG_CGROUPS.
Probably could introduce a simple inline helper for the
cgroup_lock()/cgroup_id()/cgroup_unlock() dance that's going on in
here and bpf_struct_ops_map_link_fill_link_info() below.
As mentioned above a simple inline helper could simply yield the
following here:
...
info->struct_ops.cgroup_id = bpf_struct_ops_link_cgroup_id();
...
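A minimal sketch of what such a helper might look like (the helper name, the assumption that the struct_ops link carries an optional cgroup pointer, and the exact locking are all illustrative guesses, not code from the series):

```c
/* Hypothetical helper; assumes the struct_ops link holds an optional
 * cgroup pointer and that cgroup_lock() keeps it stable while we read
 * its ID. Returns 0 when no cgroup is attached (real IDs start at 1).
 */
static u64 bpf_struct_ops_link_cgroup_id(struct bpf_struct_ops_link *st_link)
{
	u64 cgrp_id = 0;

	cgroup_lock();
	if (st_link->cgroup)
		cgrp_id = cgroup_id(st_link->cgroup);
	cgroup_unlock();

	return cgrp_id;
}
```

Both call sites could then just assign its return value instead of open-coding the lock/read/unlock sequence.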
BPF_F_CGROUP_FD is dependent on the cgroup subsystem, therefore it
probably makes some sense to only accept BPF_F_CGROUP_FD when
CONFIG_CGROUP_BPF is enabled, otherwise -EOPNOTSUPP?
I'd also probably rewrite this such that we do:
...
	struct cgroup *cgrp = NULL;
...
	if (attr->link_create.flags & BPF_F_CGROUP_FD) {
#if IS_ENABLED(CONFIG_CGROUP_BPF)
		cgrp = cgroup_get_from_fd(attr->link_create.target_fd);
		if (IS_ERR(cgrp))
			return PTR_ERR(cgrp);
#else
		return -EOPNOTSUPP;
#endif
	}
...
	if (cgrp) {
		link->cgroup = cgrp;
		if (cgroup_bpf_attach_struct_ops(cgrp, link)) {
			cgroup_put(cgrp);
			goto err_out;
		}
	}
IMO the code is cleaner and reads better too.
If the cgroup is dying, then perhaps -EINVAL would be more appropriate
here, no? I'd argue that -EBUSY implies a temporary or transient
state.
Within cgroup_bpf_attach_struct_ops() and
cgroup_bpf_detach_struct_ops() the cgrp pointer appears to be
superfluous? Both should probably only operate on link->cgroup
instead? A !link->cgroup when calling either should be considered as
-EINVAL.
|
{
"author": "Matt Bobrowski <mattbobrowski@google.com>",
"date": "Wed, 28 Jan 2026 11:25:31 +0000",
"thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH bpf-next v3 00/17] mm: BPF OOM
|
|
On Mon, Jan 26, 2026 at 06:44:04PM -0800, Roman Gushchin wrote:
Looks OK to me:
Acked-by: Matt Bobrowski <mattbobrowski@google.com>
|
{
"author": "Matt Bobrowski <mattbobrowski@google.com>",
"date": "Wed, 28 Jan 2026 11:28:48 +0000",
"thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH bpf-next v3 00/17] mm: BPF OOM
|
|
On Wed, Jan 28, 2026 at 12:06 AM Michal Hocko <mhocko@suse.com> wrote:
Production-ready bpf-oom program(s) must be part of this set.
We've seen enough attempts to add bpf st_ops in various parts of
the kernel without providing realistic bpf progs that will drive
those hooks. It's great to have flexibility and people need
to have a freedom to develop their own bpf-oom policy, but
the author of the patch set who's advocating for the new
bpf hooks must provide their real production progs and
share their real use case with the community.
It's not cool to hide it.
In that sense enabling auto-loading without requiring an end user
to install the toolchain and build bpf programs/rust/whatnot
is necessary too.
bpf-oom can be a self-contained part of the vmlinux binary.
We already have a mechanism to do that.
This way the end user doesn't need to be a bpf expert, doesn't need
to install clang, build the tools, etc.
They can just enable fancy new bpf-oom policy and see whether
it's helping their apps or not while knowing nothing about bpf.
|
{
"author": "Alexei Starovoitov <alexei.starovoitov@gmail.com>",
"date": "Wed, 28 Jan 2026 08:59:34 -0800",
"thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH bpf-next v3 00/17] mm: BPF OOM
|
|
Alexei Starovoitov <alexei.starovoitov@gmail.com> writes:
In my case it's not about hiding, it's a chicken-and-egg problem:
the upstream-first model contradicts the idea of including
production results in the patchset. In other words, I want to settle
the interface before shipping something to prod.
I guess the compromise here is to initially include a bpf oom policy
inspired by what systemd-oomd does and what is proven to work for a
broad range of users. Policies suited for large datacenters can be
added later, but their generic usefulness might also be limited by the
need for proprietary userspace orchestration engines.
Fully agree here. Will implement in v4.
Thanks!
|
{
"author": "Roman Gushchin <roman.gushchin@linux.dev>",
"date": "Wed, 28 Jan 2026 10:23:34 -0800",
"thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH bpf-next v3 00/17] mm: BPF OOM
|
|
Michal Hocko <mhocko@suse.com> writes:
Yes, but bpf programs (unlike kernel modules) go through the
verifier when being loaded into the kernel. The verifier ensures that
programs are safe: e.g. they can't access memory outside of safe areas,
they can't contain infinite loops, dereference a NULL pointer etc.
So even though it looks like a normal argument, it's read-only. And the
program can't even read the memory outside of the structure itself, e.g. a
program doing something like (oc + 1)->bpf_memory_freed won't be allowed
to load.
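[Editor's sketch: to make the verifier discussion above concrete, a minimal bpf_oom handler might look roughly like the following. This is a hypothetical sketch assembled from the names appearing in this series (bpf_oom_ops, bpf_oom_kill_process(), the handle_out_of_memory callback); the actual callback and kfunc signatures are defined by the patches and may differ, so treat every declaration here as an assumption, not the series' API.]

```c
/* Hypothetical sketch of a bpf_oom struct_ops program.
 * Callback and kfunc signatures are assumptions based on the
 * names in this series, not the actual uapi.
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

/* Assumed kfunc declaration from the series. */
extern int bpf_oom_kill_process(struct oom_control *oc,
				struct task_struct *task,
				const char *message) __ksym;

SEC("struct_ops/handle_out_of_memory")
int BPF_PROG(handle_out_of_memory, struct oom_control *oc)
{
	/* The verifier only permits reads within *oc itself:
	 * an access like (oc + 1)->bpf_memory_freed is rejected
	 * at load time.
	 */
	struct task_struct *task = bpf_get_current_task_btf();

	/* A real policy would rank tasks or memcgs; killing the
	 * allocating task is purely for illustration.
	 */
	return bpf_oom_kill_process(oc, task, "bpf oom sketch");
}

SEC(".struct_ops.link")
struct bpf_oom_ops sketch_oom_ops = {
	.handle_out_of_memory = (void *)handle_out_of_memory,
};
```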
Yeah, it has to be an atomic counter attached to the bpf oom "instance":
a policy attached to a specific cgroup or system-wide.
Right, bpf programs require CAP_SYS_ADMIN to be loaded.
I still would prefer to keep it 100% safe, but the more I think about it
the more I agree with you: likely limitations of the protection mechanism will
create more issues than the value of the protection itself.
Thank you!
|
{
"author": "Roman Gushchin <roman.gushchin@linux.dev>",
"date": "Wed, 28 Jan 2026 10:44:46 -0800",
"thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH bpf-next v3 00/17] mm: BPF OOM
|
|
Josh Don <joshdon@google.com> writes:
Hi Josh!
Sure, good point.
Agree, will add.
Thanks!
|
{
"author": "Roman Gushchin <roman.gushchin@linux.dev>",
"date": "Wed, 28 Jan 2026 10:52:05 -0800",
"thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH bpf-next v3 00/17] mm: BPF OOM
|
|
Michal Hocko <mhocko@suse.com> writes:
Yep, good point. Will implement in v4.
Thanks!
|
{
"author": "Roman Gushchin <roman.gushchin@linux.dev>",
"date": "Wed, 28 Jan 2026 10:53:20 -0800",
"thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH bpf-next v3 00/17] mm: BPF OOM
|
On Wed, Jan 28, 2026 at 10:23 AM Roman Gushchin
<roman.gushchin@linux.dev> wrote:
Works for me.
Agree. That's the flexibility part that makes the whole thing worthwhile
and the reason to do such an oom policy as bpf progs.
But something tangible and useful needs to be there from day one.
systemd-oomd-like sounds very reasonable.
|
{
"author": "Alexei Starovoitov <alexei.starovoitov@gmail.com>",
"date": "Wed, 28 Jan 2026 10:53:50 -0800",
"thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH bpf-next v3 00/17] mm: BPF OOM
|
Josh Don <joshdon@google.com> writes:
Michal pointed at a more fundamental problem: if a bpf handler performed
some actions (e.g. killed a program), how to safely allow other bpf
handlers to exit without performing redundant destructive operations?
Now it works by marking victim processes, so that subsequent kernel
oom handlers just bail out if they see a marked process.
I don't know how to extend it to generic actions. E.g. we can have an atomic
counter attached to the bpf oom instance (link), we can bump it on
performing a destructive operation, but it's not clear when to clear it.
So maybe it's not worth it at all and it's better to drop this
protection mechanism altogether.
Thanks!
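The marking approach described above can be sketched as a small userspace
simulation in plain C (all names here are hypothetical stand-ins; in the
actual series the check is done via the bpf_task_is_oom_victim() kfunc and
the kill via bpf_oom_kill_process()):

```c
#include <stdbool.h>

/* Simplified stand-in for a task: the real kernel marks an OOM victim on
 * the task itself; here a single flag plays that role. */
struct task {
	int pid;
	bool oom_victim;
};

static int kills;

/* A "handler" kills a task and marks it. A later handler that sees the
 * mark bails out instead of performing a redundant destructive action. */
static void kill_and_mark(struct task *t)
{
	if (t->oom_victim)
		return;	/* another handler already acted: bail out */
	t->oom_victim = true;
	kills++;
}

/* Two chained handlers (e.g. a per-memcg bpf_oom handler and the
 * in-kernel fallback) both pick the same victim; only the first kills. */
int run_oom_chain(void)
{
	struct task victim = { .pid = 1234, .oom_victim = false };

	kills = 0;
	kill_and_mark(&victim);	/* bpf handler */
	kill_and_mark(&victim);	/* fallback sees the mark, bails out */
	return kills;
}
```

This also shows why the scheme is hard to generalize: "kill a process"
leaves a durable per-object mark to test against, while a generic action
(e.g. deleting a tmpfs file) has no such flag, which is exactly the
unclear-when-to-clear problem with the atomic counter mentioned above.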
|
{
"author": "Roman Gushchin <roman.gushchin@linux.dev>",
"date": "Wed, 28 Jan 2026 11:03:16 -0800",
"thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH bpf-next v3 00/17] mm: BPF OOM
|
Matt Bobrowski <mattbobrowski@google.com> writes:
Yes. Idk, maybe (u64)-1 works better here, I don't have a strong
opinion. Realistically I doubt there are too many bpf users with
!CONFIG_CGROUPS. Alexei even suggested in the past to make CONFIG_MEMCG
mandatory, which implies CONFIG_CGROUPS.
I'll try, thanks!
Idk, I thought about it and settled on -EBUSY to highlight the
transient nature of the issue. ENOENT is another option.
I don't really think EINVAL is the best choice here.
Ack.
Thank you for the review!
|
{
"author": "Roman Gushchin <roman.gushchin@linux.dev>",
"date": "Wed, 28 Jan 2026 11:18:36 -0800",
"thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH bpf-next v3 00/17] mm: BPF OOM
|
On Mon, Jan 26, 2026 at 06:44:12PM -0800, Roman Gushchin wrote:
If contended and we end up waiting here, some forward progress could
have been made in the interim, enough that this pending OOM event
initiated by the call into bpf_out_of_memory() may no longer even be
warranted. What do you think about adding an escape hatch here, which
could simply be in the form of a user-defined function callback?
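The escape hatch suggested here could amount to re-evaluating a
caller-supplied predicate after the lock wait. A minimal userspace sketch
with pthreads (the function and predicate names are hypothetical, not the
actual kfunc interface):

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t oom_lock = PTHREAD_MUTEX_INITIALIZER;

/* Sketch of a bpf_out_of_memory()-like entry point with an escape hatch:
 * after potentially waiting on oom_lock, a user-defined callback is
 * re-evaluated so the kill is skipped if reclaim made enough progress
 * while we were blocked. Returns 1 if a victim was killed, 0 otherwise. */
int out_of_memory_checked(bool (*still_needed)(void *), void *ctx)
{
	int killed = 0;

	pthread_mutex_lock(&oom_lock);
	if (still_needed(ctx)) {
		/* ... select and kill a victim ... */
		killed = 1;
	}
	pthread_mutex_unlock(&oom_lock);
	return killed;
}

/* Example predicate: pressure already subsided, so no kill happens. */
bool pressure_gone(void *ctx)
{
	(void)ctx;
	return false;
}
```

The design point is that the decision to proceed is deferred until the
lock is actually held, rather than being fixed at call time.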
|
{
"author": "Matt Bobrowski <mattbobrowski@google.com>",
"date": "Wed, 28 Jan 2026 20:21:54 +0000",
"thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH bpf-next v3 00/17] mm: BPF OOM
|
This patchset adds an ability to customize the out of memory
handling using bpf.
It focuses on two parts:
1) OOM handling policy,
2) PSI-based OOM invocation.
The idea to use bpf for customizing the OOM handling is not new, but
unlike the previous proposal [1], which augmented the existing task
ranking policy, this one tries to be as generic as possible and
leverage the full power of the modern bpf.
It provides a generic interface which is called before the existing OOM
killer code and allows implementing any policy, e.g. picking a victim
task or memory cgroup or potentially even releasing memory in other
ways, e.g. deleting tmpfs files (the last one might require some
additional but relatively simple changes).
The past attempt to implement memory-cgroup aware policy [2] showed
that there are multiple opinions on what the best policy is. As it's
highly workload-dependent and specific to a concrete way of organizing
workloads, the structure of the cgroup tree etc, a customizable
bpf-based implementation is preferable over an in-kernel implementation
with a dozen of sysctls.
The second part is related to the fundamental question on when to
declare the OOM event. It's a trade-off between the risk of
unnecessary OOM kills and associated work losses and the risk of
infinite trashing and effective soft lockups. In the last few years
several PSI-based userspace solutions were developed (e.g. OOMd [3] or
systemd-OOMd [4]). The common idea was to use userspace daemons to
implement custom OOM logic as well as rely on PSI monitoring to avoid
stalls. In this scenario the userspace daemon was supposed to handle
the majority of OOMs, while the in-kernel OOM killer worked as the
last resort measure to guarantee that the system would never deadlock
on the memory. But this approach creates additional infrastructure
churn: userspace OOM daemon is a separate entity which needs to be
deployed, updated, monitored. A completely different pipeline needs to
be built to monitor both types of OOM events and collect associated
logs. A userspace daemon is more restricted in terms on what data is
available to it. Implementing a daemon which can work reliably under a
heavy memory pressure in the system is also tricky.
This patchset includes the code, tests and many ideas from the patchset
of JP Kobryn, which implemented bpf kfuncs to provide a faster method
to access memcg data [5].
[1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/
[2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/
[3]: https://github.com/facebookincubator/oomd
[4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html
[5]: https://lkml.org/lkml/2025/10/15/1554
---
v3:
1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.)
2) Updated bpf_oom struct ops:
- removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.)
- removed handle_cgroup_offline callback.
3) Updated kfuncs:
- bpf_out_of_memory() dropped constraint_text argument (by Michal H.)
- bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN.
4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.)
v2:
1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg.
(by Alexei Starovoitov)
2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau,
Andrii Nakryiko and others)
3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn)
4) Many small-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan,
Andrii Nakryiko and Kumar Kartikeya Dwivedi)
5) bpf_out_of_memory() now takes u64 flags instead of bool wait_on_oom_lock
(suggested by Kumar Kartikeya Dwivedi)
6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi)
7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom
v1:
1) Both OOM and PSI parts are now implemented using bpf struct ops,
providing a path for future extensions (suggested by Kumar Kartikeya Dwivedi,
Song Liu and Matt Bobrowski)
2) It's possible to create PSI triggers from BPF, no need for an additional
userspace agent. (suggested by Suren Baghdasaryan)
Also there is now a callback for the cgroup release event.
3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko)
4) Added bpf_task_is_oom_victim (suggested by Michal Hocko)
5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan)
RFC:
https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/
JP Kobryn (1):
bpf: selftests: add config for psi
Roman Gushchin (16):
bpf: move bpf_struct_ops_link into bpf.h
bpf: allow attaching struct_ops to cgroups
libbpf: fix return value on memory allocation failure
libbpf: introduce bpf_map__attach_struct_ops_opts()
bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL
mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG
mm: introduce BPF OOM struct ops
mm: introduce bpf_oom_kill_process() bpf kfunc
mm: introduce bpf_out_of_memory() BPF kfunc
mm: introduce bpf_task_is_oom_victim() kfunc
bpf: selftests: introduce read_cgroup_file() helper
bpf: selftests: BPF OOM struct ops test
sched: psi: add a trace point to psi_avgs_work()
sched: psi: add cgroup_id field to psi_group structure
bpf: allow calling bpf_out_of_memory() from a PSI tracepoint
bpf: selftests: PSI struct ops test
MAINTAINERS | 2 +
include/linux/bpf-cgroup-defs.h | 6 +
include/linux/bpf-cgroup.h | 16 ++
include/linux/bpf.h | 10 +
include/linux/bpf_oom.h | 46 ++++
include/linux/memcontrol.h | 4 +-
include/linux/oom.h | 13 +
include/linux/psi_types.h | 4 +
include/trace/events/psi.h | 27 ++
include/uapi/linux/bpf.h | 3 +
kernel/bpf/bpf_struct_ops.c | 77 +++++-
kernel/bpf/cgroup.c | 46 ++++
kernel/bpf/verifier.c | 5 +
kernel/sched/psi.c | 7 +
mm/Makefile | 2 +-
mm/bpf_oom.c | 192 +++++++++++++
mm/memcontrol.c | 2 -
mm/oom_kill.c | 202 ++++++++++++++
tools/include/uapi/linux/bpf.h | 1 +
tools/lib/bpf/libbpf.c | 22 +-
tools/lib/bpf/libbpf.h | 14 +
tools/lib/bpf/libbpf.map | 1 +
tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++
tools/testing/selftests/bpf/cgroup_helpers.h | 3 +
tools/testing/selftests/bpf/config | 1 +
.../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++
.../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++
tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++
tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++
29 files changed, 1412 insertions(+), 21 deletions(-)
create mode 100644 include/linux/bpf_oom.h
create mode 100644 include/trace/events/psi.h
create mode 100644 mm/bpf_oom.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c
create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c
create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c
--
2.52.0
|
On 1/26/26 6:44 PM, Roman Gushchin wrote:
[ ... ]
iiuc, this will allow only one oom_ops to be attached to a cgroup.
Considering oom_ops is the only user of the cgrp->bpf.struct_ops_links
(added in patch 2), the list should have only one element for now.
Copy some context from the patch 2 commit log.
> This change doesn't answer the question how bpf programs belonging
> to these struct ops'es will be executed. It will be done individually
> for every bpf struct ops which supports this.
>
> Please, note that unlike "normal" bpf programs, struct ops'es
> are not propagated to cgroup sub-trees.
There are NONE, BPF_F_ALLOW_OVERRIDE, and BPF_F_ALLOW_MULTI; which one
is closest to the bpf_handle_oom() semantics? If the ordering needs to
change (or multi needs to be allowed) in the future, will that require a
new flag, or can the existing BPF_F_xxx flags be used?
|
{
"author": "Martin KaFai Lau <martin.lau@linux.dev>",
"date": "Thu, 29 Jan 2026 13:00:11 -0800",
"thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH bpf-next v3 00/17] mm: BPF OOM
|
Martin KaFai Lau <martin.lau@linux.dev> writes:
Hi Martin!
Sorry, I'm not quite sure what you mean; could you please elaborate?
We decided (in conversations at LPC) that one bpf oom policy per
memcg is good for now (with the potential to extend in the future, if
there will be use cases). But it seems like there is a lot of interest
to attach struct ops'es to cgroups (there are already a couple of
patchsets posted based on my earlier v2 patches), so I tried to make the
bpf link mechanics suitable for multiple use cases from scratch.
Did I answer your question?
I hope that existing flags can be used, but also I'm not sure we ever
would need multiple oom handlers per cgroup. Do you have any specific
concerns here?
Thanks!
|
{
"author": "Roman Gushchin <roman.gushchin@linux.dev>",
"date": "Fri, 30 Jan 2026 15:29:31 -0800",
"thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH bpf-next v3 00/17] mm: BPF OOM
|
On Wed, Jan 28, 2026 at 08:59:34AM -0800, Alexei Starovoitov wrote:
For the auto-loading capability you speak of here, I'm currently
interpreting it as being some form of conceptually similar extension
to the BPF preload functionality. Have I understood this correctly? If
so, I feel as though something like this would be a completely
independent stream of work, orthogonal to this BPF OOM feature, right?
Or, is that you'd like this new auto-loading capability completed as a
hard prerequisite before pulling in the BPF OOM feature?
|
{
"author": "Matt Bobrowski <mattbobrowski@google.com>",
"date": "Mon, 2 Feb 2026 03:26:07 +0000",
"thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH bpf-next v3 00/17] mm: BPF OOM
|
On Mon, Jan 26, 2026 at 06:44:09PM -0800, Roman Gushchin wrote:
This code has been changed in the mm-tree and you can directly use
mem_cgroup_get_from_id() after changes in the mm-tree.
|
{
"author": "Shakeel Butt <shakeel.butt@linux.dev>",
"date": "Sun, 1 Feb 2026 19:50:48 -0800",
"thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH bpf-next v3 00/17] mm: BPF OOM
|
This patchset adds an ability to customize the out of memory
handling using bpf.
It focuses on two parts:
1) OOM handling policy,
2) PSI-based OOM invocation.
The idea to use bpf for customizing the OOM handling is not new, but
unlike the previous proposal [1], which augmented the existing task
ranking policy, this one tries to be as generic as possible and
leverage the full power of the modern bpf.
It provides a generic interface which is called before the existing OOM
killer code and allows implementing any policy, e.g. picking a victim
task or memory cgroup or potentially even releasing memory in other
ways, e.g. deleting tmpfs files (the last one might require some
additional but relatively simple changes).
The past attempt to implement memory-cgroup aware policy [2] showed
that there are multiple opinions on what the best policy is. As it's
highly workload-dependent and specific to a concrete way of organizing
workloads, the structure of the cgroup tree etc, a customizable
bpf-based implementation is preferable over an in-kernel implementation
with a dozen of sysctls.
The second part is related to the fundamental question on when to
declare the OOM event. It's a trade-off between the risk of
unnecessary OOM kills and associated work losses and the risk of
infinite trashing and effective soft lockups. In the last few years
several PSI-based userspace solutions were developed (e.g. OOMd [3] or
systemd-OOMd [4]). The common idea was to use userspace daemons to
implement custom OOM logic as well as rely on PSI monitoring to avoid
stalls. In this scenario the userspace daemon was supposed to handle
the majority of OOMs, while the in-kernel OOM killer worked as the
last resort measure to guarantee that the system would never deadlock
on the memory. But this approach creates additional infrastructure
churn: userspace OOM daemon is a separate entity which needs to be
deployed, updated, monitored. A completely different pipeline needs to
be built to monitor both types of OOM events and collect associated
logs. A userspace daemon is more restricted in terms on what data is
available to it. Implementing a daemon which can work reliably under a
heavy memory pressure in the system is also tricky.
This patchset includes the code, tests and many ideas from the patchset
of JP Kobryn, which implemented bpf kfuncs to provide a faster method
to access memcg data [5].
[1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/
[2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/
[3]: https://github.com/facebookincubator/oomd
[4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html
[5]: https://lkml.org/lkml/2025/10/15/1554
---
v3:
1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.)
2) Updated bpf_oom struct ops:
- removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.)
- removed handle_cgroup_offline callback.
3) Updated kfuncs:
- bpf_out_of_memory() dropped constraint_text argument (by Michal H.)
- bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN.
4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.)
v2:
1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg.
(by Alexei Starovoitov)
2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau,
Andrii Nakryiko and others)
3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn)
4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan,
Andrii Nakryiko and Kumar Kartikeya Dwivedi)
5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock
(suggested by Kumar Kartikeya Dwivedi)
6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi)
7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom
v1:
1) Both OOM and PSI parts are now implemented using bpf struct ops,
providing a path for future extensions (suggested by Kumar Kartikeya Dwivedi,
Song Liu and Matt Bobrowski)
2) It's possible to create PSI triggers from BPF, no need for an additional
userspace agent. (suggested by Suren Baghdasaryan)
Also there is now a callback for the cgroup release event.
3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko)
4) Added bpf_task_is_oom_victim (suggested by Michal Hocko)
5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan)
RFC:
https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/
JP Kobryn (1):
bpf: selftests: add config for psi
Roman Gushchin (16):
bpf: move bpf_struct_ops_link into bpf.h
bpf: allow attaching struct_ops to cgroups
libbpf: fix return value on memory allocation failure
libbpf: introduce bpf_map__attach_struct_ops_opts()
bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL
mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG
mm: introduce BPF OOM struct ops
mm: introduce bpf_oom_kill_process() bpf kfunc
mm: introduce bpf_out_of_memory() BPF kfunc
mm: introduce bpf_task_is_oom_victim() kfunc
bpf: selftests: introduce read_cgroup_file() helper
bpf: selftests: BPF OOM struct ops test
sched: psi: add a trace point to psi_avgs_work()
sched: psi: add cgroup_id field to psi_group structure
bpf: allow calling bpf_out_of_memory() from a PSI tracepoint
bpf: selftests: PSI struct ops test
MAINTAINERS | 2 +
include/linux/bpf-cgroup-defs.h | 6 +
include/linux/bpf-cgroup.h | 16 ++
include/linux/bpf.h | 10 +
include/linux/bpf_oom.h | 46 ++++
include/linux/memcontrol.h | 4 +-
include/linux/oom.h | 13 +
include/linux/psi_types.h | 4 +
include/trace/events/psi.h | 27 ++
include/uapi/linux/bpf.h | 3 +
kernel/bpf/bpf_struct_ops.c | 77 +++++-
kernel/bpf/cgroup.c | 46 ++++
kernel/bpf/verifier.c | 5 +
kernel/sched/psi.c | 7 +
mm/Makefile | 2 +-
mm/bpf_oom.c | 192 +++++++++++++
mm/memcontrol.c | 2 -
mm/oom_kill.c | 202 ++++++++++++++
tools/include/uapi/linux/bpf.h | 1 +
tools/lib/bpf/libbpf.c | 22 +-
tools/lib/bpf/libbpf.h | 14 +
tools/lib/bpf/libbpf.map | 1 +
tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++
tools/testing/selftests/bpf/cgroup_helpers.h | 3 +
tools/testing/selftests/bpf/config | 1 +
.../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++
.../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++
tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++
tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++
29 files changed, 1412 insertions(+), 21 deletions(-)
create mode 100644 include/linux/bpf_oom.h
create mode 100644 include/trace/events/psi.h
create mode 100644 mm/bpf_oom.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c
create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c
create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c
--
2.52.0
|
On Tue, Jan 27, 2026 at 09:12:56PM +0000, Roman Gushchin wrote:
Yes, please, this is something that I had mentioned to you the other
day too. With this kind of BPF kfunc, we'll basically be able to
handle memcg scoped OOM events inline without necessarily being forced
to kill off anything.
|
{
"author": "Matt Bobrowski <mattbobrowski@google.com>",
"date": "Mon, 2 Feb 2026 04:06:32 +0000",
"thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH bpf-next v3 00/17] mm: BPF OOM
|
|
On Mon, Jan 26, 2026 at 06:44:11PM -0800, Roman Gushchin wrote:
task->signal->oom_score_adj == OOM_SCORE_ADJ_MIN is also
representative of an unkillable task, so why not fold this up into the
above conditional? Also, why not bother checking states like
mm_flags_test(MMF_OOM_SKIP, task->mm) and in_vfork() here too?
In all fairness I'm a little surprised about constraints like
task->signal->oom_score_adj == OOM_SCORE_ADJ_MIN being enforced
here. You could argue that the whole purpose of BPF OOM is such that
you can implement your own victim selection algorithms entirely in BPF
using your own set of heuristics and what not without needing to
strictly respect properties like oom_score_adj.
In any case, I think we should at least clearly document such
constraints.
|
{
"author": "Matt Bobrowski <mattbobrowski@google.com>",
"date": "Mon, 2 Feb 2026 04:49:09 +0000",
"thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH bpf-next v3 00/17] mm: BPF OOM
|
|
On Mon, Jan 26, 2026 at 06:44:08PM -0800, Roman Gushchin wrote:
This is fine. Feel free to add:
Acked-by: Matt Bobrowski <mattbobrowski@google.com>
|
{
"author": "Matt Bobrowski <mattbobrowski@google.com>",
"date": "Mon, 2 Feb 2026 04:56:57 +0000",
"thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH bpf-next v3 00/17] mm: BPF OOM
|
|
On Mon, Jan 26, 2026 at 06:44:13PM -0800, Roman Gushchin wrote:
Why not just do a direct memory read (i.e., task->signal->oom_mm)
within the BPF program? I'm not quite convinced that a BPF kfunc
wrapper for something like tsk_is_oom_victim() is warranted as you can
literally achieve the same semantics without one.
|
{
"author": "Matt Bobrowski <mattbobrowski@google.com>",
"date": "Mon, 2 Feb 2026 05:39:48 +0000",
"thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH bpf-next v3 00/17] mm: BPF OOM
|
|
On Sun, Feb 1, 2026 at 9:39 PM Matt Bobrowski <mattbobrowski@google.com> wrote:
+1
there is no need for this kfunc.
|
{
"author": "Alexei Starovoitov <alexei.starovoitov@gmail.com>",
"date": "Mon, 2 Feb 2026 09:30:23 -0800",
"thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH bpf-next v3 00/17] mm: BPF OOM
|
This patchset adds an ability to customize the out of memory
handling using bpf.
It focuses on two parts:
1) OOM handling policy,
2) PSI-based OOM invocation.
The idea to use bpf for customizing the OOM handling is not new, but
unlike the previous proposal [1], which augmented the existing task
ranking policy, this one tries to be as generic as possible and
leverage the full power of modern bpf.
It provides a generic interface which is called before the existing OOM
killer code and allows implementing any policy, e.g. picking a victim
task or memory cgroup or potentially even releasing memory in other
ways, e.g. deleting tmpfs files (the last one might require some
additional but relatively simple changes).
The past attempt to implement memory-cgroup aware policy [2] showed
that there are multiple opinions on what the best policy is. As it's
highly workload-dependent and specific to a concrete way of organizing
workloads, the structure of the cgroup tree, etc., a customizable
bpf-based implementation is preferable over an in-kernel implementation
with a dozen sysctls.
The second part is related to the fundamental question on when to
declare the OOM event. It's a trade-off between the risk of
unnecessary OOM kills and associated work losses and the risk of
infinite thrashing and effective soft lockups. In the last few years
several PSI-based userspace solutions were developed (e.g. OOMd [3] or
systemd-OOMd [4]). The common idea was to use userspace daemons to
implement custom OOM logic as well as rely on PSI monitoring to avoid
stalls. In this scenario the userspace daemon was supposed to handle
the majority of OOMs, while the in-kernel OOM killer worked as the
last resort measure to guarantee that the system would never deadlock
on memory. But this approach creates additional infrastructure
churn: a userspace OOM daemon is a separate entity which needs to be
deployed, updated and monitored. A completely different pipeline needs
to be built to monitor both types of OOM events and collect the
associated logs. A userspace daemon is more restricted in terms of
what data is available to it. Implementing a daemon which can work
reliably under heavy memory pressure is also tricky.
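The decision these PSI-based daemons make can be sketched in a few lines: compare the share of wall time that tasks were stalled on memory during a sampling window against a threshold, and only then consider a kill. This is an illustrative userspace mock (the struct and function names are hypothetical, not the kernel's PSI code):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical sketch of a PSI-style trigger: stall_us is the memory
 * stall time accumulated over a window of window_us microseconds.
 * A daemon like oomd fires its policy once the stall share crosses
 * a configured percentage threshold.
 */
struct psi_window {
	unsigned long long stall_us;  /* stall time in the window */
	unsigned long long window_us; /* window length */
};

/* Return true when the stall share exceeds threshold_pct percent. */
static bool psi_should_trigger(const struct psi_window *w,
			       unsigned int threshold_pct)
{
	if (!w->window_us)
		return false;
	/* stall/window >= pct/100, kept in integer arithmetic */
	return w->stall_us * 100 >=
	       (unsigned long long)threshold_pct * w->window_us;
}
```

The same comparison done in-kernel from a BPF program avoids the reliability problems of running this logic in a userspace process that is itself subject to memory pressure.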
This patchset includes the code, tests and many ideas from the patchset
of JP Kobryn, which implemented bpf kfuncs to provide a faster method
to access memcg data [5].
[1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/
[2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/
[3]: https://github.com/facebookincubator/oomd
[4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html
[5]: https://lkml.org/lkml/2025/10/15/1554
---
v3:
1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.)
2) Updated bpf_oom struct ops:
- removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.)
- removed handle_cgroup_offline callback.
3) Updated kfuncs:
- bpf_out_of_memory() dropped constraint_text argument (by Michal H.)
- bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN.
4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.)
v2:
1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg.
(by Alexei Starovoitov)
2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau,
Andrii Nakryiko and others)
3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn)
4) Many small-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan,
Andrii Nakryiko and Kumar Kartikeya Dwivedi)
5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock
(suggested by Kumar Kartikeya Dwivedi)
6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi)
7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom
v1:
1) Both OOM and PSI parts are now implemented using bpf struct ops,
providing a path for future extensions (suggested by Kumar Kartikeya Dwivedi,
Song Liu and Matt Bobrowski)
2) It's possible to create PSI triggers from BPF, no need for an additional
userspace agent. (suggested by Suren Baghdasaryan)
Also there is now a callback for the cgroup release event.
3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko)
4) Added bpf_task_is_oom_victim (suggested by Michal Hocko)
5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan)
RFC:
https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/
JP Kobryn (1):
bpf: selftests: add config for psi
Roman Gushchin (16):
bpf: move bpf_struct_ops_link into bpf.h
bpf: allow attaching struct_ops to cgroups
libbpf: fix return value on memory allocation failure
libbpf: introduce bpf_map__attach_struct_ops_opts()
bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL
mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG
mm: introduce BPF OOM struct ops
mm: introduce bpf_oom_kill_process() bpf kfunc
mm: introduce bpf_out_of_memory() BPF kfunc
mm: introduce bpf_task_is_oom_victim() kfunc
bpf: selftests: introduce read_cgroup_file() helper
bpf: selftests: BPF OOM struct ops test
sched: psi: add a trace point to psi_avgs_work()
sched: psi: add cgroup_id field to psi_group structure
bpf: allow calling bpf_out_of_memory() from a PSI tracepoint
bpf: selftests: PSI struct ops test
MAINTAINERS | 2 +
include/linux/bpf-cgroup-defs.h | 6 +
include/linux/bpf-cgroup.h | 16 ++
include/linux/bpf.h | 10 +
include/linux/bpf_oom.h | 46 ++++
include/linux/memcontrol.h | 4 +-
include/linux/oom.h | 13 +
include/linux/psi_types.h | 4 +
include/trace/events/psi.h | 27 ++
include/uapi/linux/bpf.h | 3 +
kernel/bpf/bpf_struct_ops.c | 77 +++++-
kernel/bpf/cgroup.c | 46 ++++
kernel/bpf/verifier.c | 5 +
kernel/sched/psi.c | 7 +
mm/Makefile | 2 +-
mm/bpf_oom.c | 192 +++++++++++++
mm/memcontrol.c | 2 -
mm/oom_kill.c | 202 ++++++++++++++
tools/include/uapi/linux/bpf.h | 1 +
tools/lib/bpf/libbpf.c | 22 +-
tools/lib/bpf/libbpf.h | 14 +
tools/lib/bpf/libbpf.map | 1 +
tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++
tools/testing/selftests/bpf/cgroup_helpers.h | 3 +
tools/testing/selftests/bpf/config | 1 +
.../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++
.../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++
tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++
tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++
29 files changed, 1412 insertions(+), 21 deletions(-)
create mode 100644 include/linux/bpf_oom.h
create mode 100644 include/trace/events/psi.h
create mode 100644 mm/bpf_oom.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c
create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c
create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c
--
2.52.0
|
On Sun, Feb 1, 2026 at 7:26 PM Matt Bobrowski <mattbobrowski@google.com> wrote:
It's not a hard prerequisite, but it has to be thought through.
bpf side is ready today. bpf preload is an example of it.
The oom side needs to design an interface to do it.
sysctl to enable builtin bpf-oom policy is probably too rigid.
Maybe a file in cgroupfs? Writing a name of bpf-oom policy would
trigger load and attach to that cgroup.
Or you can plug it exactly like bpf preload:
when bpffs is mounted all builtin bpf progs get loaded and create
".debug" files in bpffs.
I recall we discussed an ability to create files in bpffs from
tracepoints. This way bpffs can replicate cgroupfs directory
structure without user space involvement. New cgroup -> new directory
in cgroupfs -> tracepoint -> bpf prog -> new directory in bpffs
-> create "enable_bpf_oom.debug" file in there.
Writing to that file we trigger bpf prog that will attach bpf-oom
prog to that cgroup.
Could be any combination of the above or something else,
but needs to be designed and agreed upon.
Otherwise, I'm afraid, we will have bpf-oom progs in selftests
and users who want to experiment with it would need kernel source
code, clang, etc. to try it. We need to lower the barrier to use it.
|
{
"author": "Alexei Starovoitov <alexei.starovoitov@gmail.com>",
"date": "Mon, 2 Feb 2026 09:50:05 -0800",
"thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
The Qualcomm automotive SA8255p SoC relies on firmware to configure
platform resources, including clocks, interconnects and TLMM.
The driver requests resource operations over SCMI using the power
and performance protocols.
The SCMI power protocol enables or disables resources such as clocks,
interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs,
such as resume/suspend, to control power states (on/off).
The SCMI performance protocol manages the I2C frequency, with each
frequency rate represented by a performance level. The driver uses
the geni_se_set_perf_opp() API to request the desired frequency rate.
As part of geni_se_set_perf_opp(), the OPP for the requested frequency
is obtained using dev_pm_opp_find_freq_floor() and the performance
level is set using dev_pm_opp_set_opp().
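The "floor" lookup that dev_pm_opp_find_freq_floor() performs can be sketched as: given a table of supported rates, pick the highest one that does not exceed the requested rate. This is only an illustration of the selection rule, not the OPP library's implementation:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Illustrative floor lookup: return the largest supported frequency
 * that is <= req, or 0 when none fits. The table need not be sorted
 * for this sketch; the kernel OPP table is ordered internally.
 */
static unsigned long opp_freq_floor(const unsigned long *table, size_t n,
				    unsigned long req)
{
	unsigned long best = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		if (table[i] <= req && table[i] > best)
			best = table[i];
	}
	return best;
}
```

The selected frequency then maps to a performance level which the firmware is asked to apply via dev_pm_opp_set_opp().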
Praveen Talari (12):
soc: qcom: geni-se: Refactor geni_icc_get() and make qup-memory ICC
path optional
soc: qcom: geni-se: Add geni_icc_set_bw_ab() function
soc: qcom: geni-se: Introduce helper API for resource initialization
soc: qcom: geni-se: Add resources activation/deactivation helpers
soc: qcom: geni-se: Introduce helper API for attaching power domains
soc: qcom: geni-se: Introduce helper APIs for performance control
dt-bindings: i2c: Describe SA8255p
i2c: qcom-geni: Isolate serial engine setup
i2c: qcom-geni: Move resource initialization to separate function
i2c: qcom-geni: Use resources helper APIs in runtime PM functions
i2c: qcom-geni: Store of_device_id data in driver private struct
i2c: qcom-geni: Enable I2C on SA8255p Qualcomm platforms
.../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 ++++
drivers/i2c/busses/i2c-qcom-geni.c | 300 +++++++++---------
drivers/soc/qcom/qcom-geni-se.c | 263 ++++++++++++++-
include/linux/soc/qcom/geni-se.h | 19 ++
4 files changed, 475 insertions(+), 171 deletions(-)
create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml
base-commit: f417b7ffcbef7d76b0d8860518f50dae0e7e5eda
--
2.34.1
|
The "qup-memory" interconnect path is optional and may not be defined
in all device trees. Unroll the loop-based ICC path initialization to
allow specific error handling for each path type.
The "qup-core" and "qup-config" paths remain mandatory and will fail
probe if missing, while "qup-memory" is now handled as optional and
skipped when not present in the device tree.
Co-developed-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
Signed-off-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v1->v2:
Bjorn:
- Updated commit text.
- Used a local variable for readability.
---
drivers/soc/qcom/qcom-geni-se.c | 36 +++++++++++++++++----------------
1 file changed, 19 insertions(+), 17 deletions(-)
diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
index cd1779b6a91a..b6167b968ef6 100644
--- a/drivers/soc/qcom/qcom-geni-se.c
+++ b/drivers/soc/qcom/qcom-geni-se.c
@@ -899,30 +899,32 @@ EXPORT_SYMBOL_GPL(geni_se_rx_dma_unprep);
int geni_icc_get(struct geni_se *se, const char *icc_ddr)
{
- int i, err;
- const char *icc_names[] = {"qup-core", "qup-config", icc_ddr};
+ struct geni_icc_path *icc_paths = se->icc_paths;
if (has_acpi_companion(se->dev))
return 0;
- for (i = 0; i < ARRAY_SIZE(se->icc_paths); i++) {
- if (!icc_names[i])
- continue;
-
- se->icc_paths[i].path = devm_of_icc_get(se->dev, icc_names[i]);
- if (IS_ERR(se->icc_paths[i].path))
- goto err;
+ icc_paths[GENI_TO_CORE].path = devm_of_icc_get(se->dev, "qup-core");
+ if (IS_ERR(icc_paths[GENI_TO_CORE].path))
+ return dev_err_probe(se->dev, PTR_ERR(icc_paths[GENI_TO_CORE].path),
+ "Failed to get 'qup-core' ICC path\n");
+
+ icc_paths[CPU_TO_GENI].path = devm_of_icc_get(se->dev, "qup-config");
+ if (IS_ERR(icc_paths[CPU_TO_GENI].path))
+ return dev_err_probe(se->dev, PTR_ERR(icc_paths[CPU_TO_GENI].path),
+ "Failed to get 'qup-config' ICC path\n");
+
+ /* The DDR path is optional, depending on protocol and hw capabilities */
+ icc_paths[GENI_TO_DDR].path = devm_of_icc_get(se->dev, "qup-memory");
+ if (IS_ERR(icc_paths[GENI_TO_DDR].path)) {
+ if (PTR_ERR(icc_paths[GENI_TO_DDR].path) == -ENODATA)
+ icc_paths[GENI_TO_DDR].path = NULL;
+ else
+ return dev_err_probe(se->dev, PTR_ERR(icc_paths[GENI_TO_DDR].path),
+ "Failed to get 'qup-memory' ICC path\n");
}
return 0;
-
-err:
- err = PTR_ERR(se->icc_paths[i].path);
- if (err != -EPROBE_DEFER)
- dev_err_ratelimited(se->dev, "Failed to get ICC path '%s': %d\n",
- icc_names[i], err);
- return err;
-
}
EXPORT_SYMBOL_GPL(geni_icc_get);
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 12 Jan 2026 16:17:11 +0530",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
The Qualcomm automotive SA8255p SoC relies on firmware to configure
platform resources, including clocks, interconnects and TLMM.
The driver requests resource operations over SCMI using the power
and performance protocols.
The SCMI power protocol enables or disables resources such as clocks,
interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs,
such as resume/suspend, to control power states (on/off).
The SCMI performance protocol manages the I2C frequency, with each
frequency rate represented by a performance level. The driver uses
the geni_se_set_perf_opp() API to request the desired frequency rate.
As part of geni_se_set_perf_opp(), the OPP for the requested frequency
is obtained using dev_pm_opp_find_freq_floor() and the performance
level is set using dev_pm_opp_set_opp().
Praveen Talari (12):
soc: qcom: geni-se: Refactor geni_icc_get() and make qup-memory ICC
path optional
soc: qcom: geni-se: Add geni_icc_set_bw_ab() function
soc: qcom: geni-se: Introduce helper API for resource initialization
soc: qcom: geni-se: Add resources activation/deactivation helpers
soc: qcom: geni-se: Introduce helper API for attaching power domains
soc: qcom: geni-se: Introduce helper APIs for performance control
dt-bindings: i2c: Describe SA8255p
i2c: qcom-geni: Isolate serial engine setup
i2c: qcom-geni: Move resource initialization to separate function
i2c: qcom-geni: Use resources helper APIs in runtime PM functions
i2c: qcom-geni: Store of_device_id data in driver private struct
i2c: qcom-geni: Enable I2C on SA8255p Qualcomm platforms
.../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 ++++
drivers/i2c/busses/i2c-qcom-geni.c | 300 +++++++++---------
drivers/soc/qcom/qcom-geni-se.c | 263 ++++++++++++++-
include/linux/soc/qcom/geni-se.h | 19 ++
4 files changed, 475 insertions(+), 171 deletions(-)
create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml
base-commit: f417b7ffcbef7d76b0d8860518f50dae0e7e5eda
--
2.34.1
|
Add a new function geni_icc_set_bw_ab() that allows callers to set
average bandwidth values for all ICC (Interconnect) paths in a single
call. This function takes separate parameters for core, config, and DDR
average bandwidth values and applies them to the respective ICC paths.
This provides a more convenient API for drivers that need to configure
specific average bandwidth values.
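The set-then-apply pattern the new function implements can be sketched with mock types (the structs only loosely mirror the driver's layout; the `applied` flag is purely illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Path indices as used by the geni-se driver. */
enum { GENI_TO_CORE, CPU_TO_GENI, GENI_TO_DDR, GENI_NUM_PATHS };

struct icc_path_mock {
	uint32_t avg_bw; /* average bandwidth vote, kBps */
	int applied;     /* set once the vote is issued */
};

struct se_mock {
	struct icc_path_mock icc_paths[GENI_NUM_PATHS];
};

/* Stand-in for geni_icc_set_bw(): issue the vote for every path. */
static int mock_icc_set_bw(struct se_mock *se)
{
	int i;

	for (i = 0; i < GENI_NUM_PATHS; i++)
		se->icc_paths[i].applied = 1;
	return 0;
}

/* Stash the per-path values, then apply them all in one call. */
static int mock_icc_set_bw_ab(struct se_mock *se, uint32_t core_ab,
			      uint32_t cfg_ab, uint32_t ddr_ab)
{
	se->icc_paths[GENI_TO_CORE].avg_bw = core_ab;
	se->icc_paths[CPU_TO_GENI].avg_bw = cfg_ab;
	se->icc_paths[GENI_TO_DDR].avg_bw = ddr_ab;
	return mock_icc_set_bw(se);
}
```

Callers thus avoid open-coding three field assignments plus a geni_icc_set_bw() call at every site.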
Co-developed-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
Signed-off-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
drivers/soc/qcom/qcom-geni-se.c | 22 ++++++++++++++++++++++
include/linux/soc/qcom/geni-se.h | 1 +
2 files changed, 23 insertions(+)
diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
index b6167b968ef6..b0542f836453 100644
--- a/drivers/soc/qcom/qcom-geni-se.c
+++ b/drivers/soc/qcom/qcom-geni-se.c
@@ -946,6 +946,28 @@ int geni_icc_set_bw(struct geni_se *se)
}
EXPORT_SYMBOL_GPL(geni_icc_set_bw);
+/**
+ * geni_icc_set_bw_ab() - Set average bandwidth for all ICC paths and apply
+ * @se: Pointer to the concerned serial engine.
+ * @core_ab: Average bandwidth in kBps for GENI_TO_CORE path.
+ * @cfg_ab: Average bandwidth in kBps for CPU_TO_GENI path.
+ * @ddr_ab: Average bandwidth in kBps for GENI_TO_DDR path.
+ *
+ * Sets bandwidth values for all ICC paths and applies them. DDR path is
+ * optional and only set if it exists.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int geni_icc_set_bw_ab(struct geni_se *se, u32 core_ab, u32 cfg_ab, u32 ddr_ab)
+{
+ se->icc_paths[GENI_TO_CORE].avg_bw = core_ab;
+ se->icc_paths[CPU_TO_GENI].avg_bw = cfg_ab;
+ se->icc_paths[GENI_TO_DDR].avg_bw = ddr_ab;
+
+ return geni_icc_set_bw(se);
+}
+EXPORT_SYMBOL_GPL(geni_icc_set_bw_ab);
+
void geni_icc_set_tag(struct geni_se *se, u32 tag)
{
int i;
diff --git a/include/linux/soc/qcom/geni-se.h b/include/linux/soc/qcom/geni-se.h
index 0a984e2579fe..980aabea2157 100644
--- a/include/linux/soc/qcom/geni-se.h
+++ b/include/linux/soc/qcom/geni-se.h
@@ -528,6 +528,7 @@ void geni_se_rx_dma_unprep(struct geni_se *se, dma_addr_t iova, size_t len);
int geni_icc_get(struct geni_se *se, const char *icc_ddr);
int geni_icc_set_bw(struct geni_se *se);
+int geni_icc_set_bw_ab(struct geni_se *se, u32 core_ab, u32 cfg_ab, u32 ddr_ab);
void geni_icc_set_tag(struct geni_se *se, u32 tag);
int geni_icc_enable(struct geni_se *se);
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 12 Jan 2026 16:17:12 +0530",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
The Qualcomm automotive SA8255p SoC relies on firmware to configure
platform resources, including clocks, interconnects and TLMM.
The driver requests resource operations over SCMI using the power
and performance protocols.
The SCMI power protocol enables or disables resources such as clocks,
interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs,
such as resume/suspend, to control power states (on/off).
The SCMI performance protocol manages the I2C frequency, with each
frequency rate represented by a performance level. The driver uses
the geni_se_set_perf_opp() API to request the desired frequency rate.
As part of geni_se_set_perf_opp(), the OPP for the requested frequency
is obtained using dev_pm_opp_find_freq_floor() and the performance
level is set using dev_pm_opp_set_opp().
Praveen Talari (12):
soc: qcom: geni-se: Refactor geni_icc_get() and make qup-memory ICC
path optional
soc: qcom: geni-se: Add geni_icc_set_bw_ab() function
soc: qcom: geni-se: Introduce helper API for resource initialization
soc: qcom: geni-se: Add resources activation/deactivation helpers
soc: qcom: geni-se: Introduce helper API for attaching power domains
soc: qcom: geni-se: Introduce helper APIs for performance control
dt-bindings: i2c: Describe SA8255p
i2c: qcom-geni: Isolate serial engine setup
i2c: qcom-geni: Move resource initialization to separate function
i2c: qcom-geni: Use resources helper APIs in runtime PM functions
i2c: qcom-geni: Store of_device_id data in driver private struct
i2c: qcom-geni: Enable I2C on SA8255p Qualcomm platforms
.../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 ++++
drivers/i2c/busses/i2c-qcom-geni.c | 300 +++++++++---------
drivers/soc/qcom/qcom-geni-se.c | 263 ++++++++++++++-
include/linux/soc/qcom/geni-se.h | 19 ++
4 files changed, 475 insertions(+), 171 deletions(-)
create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml
base-commit: f417b7ffcbef7d76b0d8860518f50dae0e7e5eda
--
2.34.1
|
The GENI Serial Engine drivers (I2C, SPI, and SERIAL) currently duplicate
code for initializing shared resources such as clocks and interconnect
paths.
Introduce a new helper API, geni_se_resources_init(), to centralize this
initialization logic, improving modularity and simplifying the probe
function.
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v1 -> v2:
- Updated proper return value for devm_pm_opp_set_clkname()
---
drivers/soc/qcom/qcom-geni-se.c | 47 ++++++++++++++++++++++++++++++++
include/linux/soc/qcom/geni-se.h | 6 ++++
2 files changed, 53 insertions(+)
diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
index b0542f836453..75e722cd1a94 100644
--- a/drivers/soc/qcom/qcom-geni-se.c
+++ b/drivers/soc/qcom/qcom-geni-se.c
@@ -19,6 +19,7 @@
#include <linux/of_platform.h>
#include <linux/pinctrl/consumer.h>
#include <linux/platform_device.h>
+#include <linux/pm_opp.h>
#include <linux/soc/qcom/geni-se.h>
/**
@@ -1012,6 +1013,52 @@ int geni_icc_disable(struct geni_se *se)
}
EXPORT_SYMBOL_GPL(geni_icc_disable);
+/**
+ * geni_se_resources_init() - Initialize resources for a GENI SE device.
+ * @se: Pointer to the geni_se structure representing the GENI SE device.
+ *
+ * This function initializes various resources required by the GENI Serial Engine
+ * (SE) device, including clock resources (core and SE clocks), interconnect
+ * paths for communication.
+ * It retrieves optional and mandatory clock resources, adds an OF-based
+ * operating performance point (OPP) table, and sets up interconnect paths
+ * with default bandwidths. The function also sets a flag (`has_opp`) to
+ * indicate whether OPP support is available for the device.
+ *
+ * Return: 0 on success, or a negative errno on failure.
+ */
+int geni_se_resources_init(struct geni_se *se)
+{
+ int ret;
+
+ se->core_clk = devm_clk_get_optional(se->dev, "core");
+ if (IS_ERR(se->core_clk))
+ return dev_err_probe(se->dev, PTR_ERR(se->core_clk),
+ "Failed to get optional core clk\n");
+
+ se->clk = devm_clk_get(se->dev, "se");
+ if (IS_ERR(se->clk) && !has_acpi_companion(se->dev))
+ return dev_err_probe(se->dev, PTR_ERR(se->clk),
+ "Failed to get SE clk\n");
+
+ ret = devm_pm_opp_set_clkname(se->dev, "se");
+ if (ret)
+ return ret;
+
+ ret = devm_pm_opp_of_add_table(se->dev);
+ if (ret && ret != -ENODEV)
+ return dev_err_probe(se->dev, ret, "Failed to add OPP table\n");
+
+ se->has_opp = (ret == 0);
+
+ ret = geni_icc_get(se, "qup-memory");
+ if (ret)
+ return ret;
+
+ return geni_icc_set_bw_ab(se, GENI_DEFAULT_BW, GENI_DEFAULT_BW, GENI_DEFAULT_BW);
+}
+EXPORT_SYMBOL_GPL(geni_se_resources_init);
+
/**
* geni_find_protocol_fw() - Locate and validate SE firmware for a protocol.
* @dev: Pointer to the device structure.
diff --git a/include/linux/soc/qcom/geni-se.h b/include/linux/soc/qcom/geni-se.h
index 980aabea2157..c182dd0f0bde 100644
--- a/include/linux/soc/qcom/geni-se.h
+++ b/include/linux/soc/qcom/geni-se.h
@@ -60,18 +60,22 @@ struct geni_icc_path {
* @dev: Pointer to the Serial Engine device
* @wrapper: Pointer to the parent QUP Wrapper core
* @clk: Handle to the core serial engine clock
+ * @core_clk: Auxiliary clock, which may be required by a protocol
* @num_clk_levels: Number of valid clock levels in clk_perf_tbl
* @clk_perf_tbl: Table of clock frequency input to serial engine clock
* @icc_paths: Array of ICC paths for SE
+ * @has_opp: Indicates if OPP is supported
*/
struct geni_se {
void __iomem *base;
struct device *dev;
struct geni_wrapper *wrapper;
struct clk *clk;
+ struct clk *core_clk;
unsigned int num_clk_levels;
unsigned long *clk_perf_tbl;
struct geni_icc_path icc_paths[3];
+ bool has_opp;
};
/* Common SE registers */
@@ -535,6 +539,8 @@ int geni_icc_enable(struct geni_se *se);
int geni_icc_disable(struct geni_se *se);
+int geni_se_resources_init(struct geni_se *se);
+
int geni_load_se_firmware(struct geni_se *se, enum geni_se_protocol_type protocol);
#endif
#endif
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 12 Jan 2026 16:17:13 +0530",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
The Qualcomm automotive SA8255p SoC relies on firmware to configure
platform resources, including clocks, interconnects and TLMM.
The driver requests resource operations over SCMI using the power
and performance protocols.
The SCMI power protocol enables or disables resources such as clocks,
interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs,
such as resume/suspend, to control power states (on/off).
The SCMI performance protocol manages the I2C frequency, with each
frequency rate represented by a performance level. The driver uses
the geni_se_set_perf_opp() API to request the desired frequency rate.
As part of geni_se_set_perf_opp(), the OPP for the requested frequency
is obtained using dev_pm_opp_find_freq_floor() and the performance
level is set using dev_pm_opp_set_opp().
Praveen Talari (12):
soc: qcom: geni-se: Refactor geni_icc_get() and make qup-memory ICC
path optional
soc: qcom: geni-se: Add geni_icc_set_bw_ab() function
soc: qcom: geni-se: Introduce helper API for resource initialization
soc: qcom: geni-se: Add resources activation/deactivation helpers
soc: qcom: geni-se: Introduce helper API for attaching power domains
soc: qcom: geni-se: Introduce helper APIs for performance control
dt-bindings: i2c: Describe SA8255p
i2c: qcom-geni: Isolate serial engine setup
i2c: qcom-geni: Move resource initialization to separate function
i2c: qcom-geni: Use resources helper APIs in runtime PM functions
i2c: qcom-geni: Store of_device_id data in driver private struct
i2c: qcom-geni: Enable I2C on SA8255p Qualcomm platforms
.../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 ++++
drivers/i2c/busses/i2c-qcom-geni.c | 300 +++++++++---------
drivers/soc/qcom/qcom-geni-se.c | 263 ++++++++++++++-
include/linux/soc/qcom/geni-se.h | 19 ++
4 files changed, 475 insertions(+), 171 deletions(-)
create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml
base-commit: f417b7ffcbef7d76b0d8860518f50dae0e7e5eda
--
2.34.1
|
The GENI SE protocol drivers (I2C, SPI, UART) implement similar resource
activation/deactivation sequences independently, leading to code
duplication.
Introduce geni_se_resources_activate()/geni_se_resources_deactivate() to
power resources on/off. The activate function enables ICC, clocks and
TLMM, whereas the deactivate function disables resources in reverse
order, including OPP rate reset, clocks, ICC and TLMM.
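The error unwinding in the activate path (the goto ladder in the patch below) follows the usual kernel pattern: enable steps in order and, on failure, tear down only the already-enabled steps in reverse. A small self-contained mock of that control flow (the step log is purely illustrative):

```c
#include <assert.h>

#define MAX_STEPS 8

struct unwind_log {
	int events[MAX_STEPS]; /* +step on enable, -step on disable */
	int n;
};

static void log_event(struct unwind_log *l, int e)
{
	if (l->n < MAX_STEPS)
		l->events[l->n++] = e;
}

/*
 * Enable steps 1..nsteps in order; fail_at marks the step that
 * returns an error (0 means all succeed). On failure, unwind the
 * steps already enabled, in reverse order, like a goto ladder.
 */
static int activate(struct unwind_log *l, int nsteps, int fail_at)
{
	int step;

	for (step = 1; step <= nsteps; step++) {
		if (step == fail_at)
			goto unwind;
		log_event(l, step);
	}
	return 0;
unwind:
	for (step--; step >= 1; step--)
		log_event(l, -step);
	return -1;
}
```

Deactivation is the same reverse order applied unconditionally, which is why the two helpers mirror each other.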
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v2 -> v3
- Added export symbol for new APIs.
v1 -> v2
Bjorn
- Updated commit message based on code changes.
- Removed geni_se_resource_state() API.
- Utilized code snippet from geni_se_resources_off()
---
drivers/soc/qcom/qcom-geni-se.c | 79 ++++++++++++++++++++++++++++++++
include/linux/soc/qcom/geni-se.h | 4 ++
2 files changed, 83 insertions(+)
diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
index 75e722cd1a94..3341bc98df09 100644
--- a/drivers/soc/qcom/qcom-geni-se.c
+++ b/drivers/soc/qcom/qcom-geni-se.c
@@ -1013,6 +1013,85 @@ int geni_icc_disable(struct geni_se *se)
}
EXPORT_SYMBOL_GPL(geni_icc_disable);
+/**
+ * geni_se_resources_deactivate() - Deactivate GENI SE device resources
+ * @se: Pointer to the geni_se structure
+ *
+ * Deactivates device resources for power saving: OPP rate to 0, pin control
+ * to sleep state, turns off clocks, and disables interconnect. Skips ACPI devices.
+ *
+ * Return: 0 on success, negative error code on failure
+ */
+int geni_se_resources_deactivate(struct geni_se *se)
+{
+ int ret;
+
+ if (has_acpi_companion(se->dev))
+ return 0;
+
+ if (se->has_opp)
+ dev_pm_opp_set_rate(se->dev, 0);
+
+ ret = pinctrl_pm_select_sleep_state(se->dev);
+ if (ret)
+ return ret;
+
+ geni_se_clks_off(se);
+
+ if (se->core_clk)
+ clk_disable_unprepare(se->core_clk);
+
+ return geni_icc_disable(se);
+}
+EXPORT_SYMBOL_GPL(geni_se_resources_deactivate);
+
+/**
+ * geni_se_resources_activate() - Activate GENI SE device resources
+ * @se: Pointer to the geni_se structure
+ *
+ * Activates device resources for operation: enables interconnect, prepares clocks,
+ * and sets pin control to default state. Includes error cleanup. Skips ACPI devices.
+ *
+ * Return: 0 on success, negative error code on failure
+ */
+int geni_se_resources_activate(struct geni_se *se)
+{
+ int ret;
+
+ if (has_acpi_companion(se->dev))
+ return 0;
+
+ ret = geni_icc_enable(se);
+ if (ret)
+ return ret;
+
+ if (se->core_clk) {
+ ret = clk_prepare_enable(se->core_clk);
+ if (ret)
+ goto out_icc_disable;
+ }
+
+ ret = geni_se_clks_on(se);
+ if (ret)
+ goto out_clk_disable;
+
+ ret = pinctrl_pm_select_default_state(se->dev);
+ if (ret) {
+ geni_se_clks_off(se);
+ goto out_clk_disable;
+ }
+
+ return ret;
+
+out_clk_disable:
+ if (se->core_clk)
+ clk_disable_unprepare(se->core_clk);
+out_icc_disable:
+ geni_icc_disable(se);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(geni_se_resources_activate);
+
/**
* geni_se_resources_init() - Initialize resources for a GENI SE device.
* @se: Pointer to the geni_se structure representing the GENI SE device.
diff --git a/include/linux/soc/qcom/geni-se.h b/include/linux/soc/qcom/geni-se.h
index c182dd0f0bde..36a68149345c 100644
--- a/include/linux/soc/qcom/geni-se.h
+++ b/include/linux/soc/qcom/geni-se.h
@@ -541,6 +541,10 @@ int geni_icc_disable(struct geni_se *se);
int geni_se_resources_init(struct geni_se *se);
+int geni_se_resources_activate(struct geni_se *se);
+
+int geni_se_resources_deactivate(struct geni_se *se);
+
int geni_load_se_firmware(struct geni_se *se, enum geni_se_protocol_type protocol);
#endif
#endif
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 12 Jan 2026 16:17:14 +0530",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
The Qualcomm automotive SA8255p SoC relies on firmware to configure
platform resources, including clocks, interconnects and TLMM.
The driver requests resource operations over SCMI using the power
and performance protocols.
The SCMI power protocol enables or disables resources such as clocks,
interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs,
such as resume/suspend, to control power states (on/off).
The SCMI performance protocol manages the I2C frequency, with each
frequency rate represented by a performance level. The driver uses
the geni_se_set_perf_opp() API to request the desired frequency rate.
As part of geni_se_set_perf_opp(), the OPP for the requested frequency
is obtained using dev_pm_opp_find_freq_floor() and the performance
level is set using dev_pm_opp_set_opp().
Praveen Talari (12):
soc: qcom: geni-se: Refactor geni_icc_get() and make qup-memory ICC
path optional
soc: qcom: geni-se: Add geni_icc_set_bw_ab() function
soc: qcom: geni-se: Introduce helper API for resource initialization
soc: qcom: geni-se: Add resources activation/deactivation helpers
soc: qcom: geni-se: Introduce helper API for attaching power domains
soc: qcom: geni-se: Introduce helper APIs for performance control
dt-bindings: i2c: Describe SA8255p
i2c: qcom-geni: Isolate serial engine setup
i2c: qcom-geni: Move resource initialization to separate function
i2c: qcom-geni: Use resources helper APIs in runtime PM functions
i2c: qcom-geni: Store of_device_id data in driver private struct
i2c: qcom-geni: Enable I2C on SA8255p Qualcomm platforms
.../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 ++++
drivers/i2c/busses/i2c-qcom-geni.c | 300 +++++++++---------
drivers/soc/qcom/qcom-geni-se.c | 263 ++++++++++++++-
include/linux/soc/qcom/geni-se.h | 19 ++
4 files changed, 475 insertions(+), 171 deletions(-)
create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml
base-commit: f417b7ffcbef7d76b0d8860518f50dae0e7e5eda
--
2.34.1
|
The GENI Serial Engine drivers (I2C, SPI, and SERIAL) currently handle
the attachment of power domains individually, which leads to duplicated
logic across the different driver probe functions.
Introduce a new helper API, geni_se_domain_attach(), to centralize
the logic for attaching "power" and "perf" domains to the GENI SE
device.
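The return-value handling can be sketched outside the kernel with a stubbed
attach call. The stub and its `stub_attach_result` knob are invented for
illustration; the real dev_pm_domain_attach_list() returns the number of
attached domains on success, 0 if none were attached, or a negative errno:

```c
#include <assert.h>

#define EINVAL 22

/*
 * Userspace sketch, not kernel code: dev_pm_domain_attach_list() is
 * replaced by a stub whose result is set through stub_attach_result.
 * The helper requires both the "power" and "perf" domains, so anything
 * other than a positive attach count is treated as failure.
 */
static int stub_attach_result;

static int dev_pm_domain_attach_list_stub(void)
{
	return stub_attach_result;
}

static int geni_se_domain_attach_sketch(void)
{
	int ret = dev_pm_domain_attach_list_stub();

	if (ret < 0)
		return ret;		/* propagate the real errno */
	if (ret == 0)
		return -EINVAL;		/* no domains attached */
	return 0;
}
```

Note that the sketch propagates a negative errno instead of flattening every
failure to -EINVAL as the patch does; either convention works, but preserving
the errno keeps probe error reporting more precise.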
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
drivers/soc/qcom/qcom-geni-se.c | 29 +++++++++++++++++++++++++++++
include/linux/soc/qcom/geni-se.h | 4 ++++
2 files changed, 33 insertions(+)
diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
index 3341bc98df09..b8e5066d4881 100644
--- a/drivers/soc/qcom/qcom-geni-se.c
+++ b/drivers/soc/qcom/qcom-geni-se.c
@@ -19,6 +19,7 @@
#include <linux/of_platform.h>
#include <linux/pinctrl/consumer.h>
#include <linux/platform_device.h>
+#include <linux/pm_domain.h>
#include <linux/pm_opp.h>
#include <linux/soc/qcom/geni-se.h>
@@ -1092,6 +1093,34 @@ int geni_se_resources_activate(struct geni_se *se)
}
EXPORT_SYMBOL_GPL(geni_se_resources_activate);
+/**
+ * geni_se_domain_attach() - Attach power domains to a GENI SE device.
+ * @se: Pointer to the geni_se structure representing the GENI SE device.
+ *
+ * This function attaches the necessary power domains ("power" and "perf")
+ * to the GENI Serial Engine device. It initializes `se->pd_list` with the
+ * attached domains.
+ *
+ * Return: 0 on success, or a negative error code on failure.
+ */
+int geni_se_domain_attach(struct geni_se *se)
+{
+ struct dev_pm_domain_attach_data pd_data = {
+ .pd_flags = PD_FLAG_DEV_LINK_ON,
+ .pd_names = (const char*[]) { "power", "perf" },
+ .num_pd_names = 2,
+ };
+ int ret;
+
+ ret = dev_pm_domain_attach_list(se->dev,
+ &pd_data, &se->pd_list);
+ if (ret <= 0)
+ return -EINVAL;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(geni_se_domain_attach);
+
/**
* geni_se_resources_init() - Initialize resources for a GENI SE device.
* @se: Pointer to the geni_se structure representing the GENI SE device.
diff --git a/include/linux/soc/qcom/geni-se.h b/include/linux/soc/qcom/geni-se.h
index 36a68149345c..5f75159c5531 100644
--- a/include/linux/soc/qcom/geni-se.h
+++ b/include/linux/soc/qcom/geni-se.h
@@ -64,6 +64,7 @@ struct geni_icc_path {
* @num_clk_levels: Number of valid clock levels in clk_perf_tbl
* @clk_perf_tbl: Table of clock frequency input to serial engine clock
* @icc_paths: Array of ICC paths for SE
+ * @pd_list: Power domain list for managing power domains
* @has_opp: Indicates if OPP is supported
*/
struct geni_se {
@@ -75,6 +76,7 @@ struct geni_se {
unsigned int num_clk_levels;
unsigned long *clk_perf_tbl;
struct geni_icc_path icc_paths[3];
+ struct dev_pm_domain_list *pd_list;
bool has_opp;
};
@@ -546,5 +548,7 @@ int geni_se_resources_activate(struct geni_se *se);
int geni_se_resources_deactivate(struct geni_se *se);
int geni_load_se_firmware(struct geni_se *se, enum geni_se_protocol_type protocol);
+
+int geni_se_domain_attach(struct geni_se *se);
#endif
#endif
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 12 Jan 2026 16:17:15 +0530",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
|
The GENI Serial Engine (SE) drivers (I2C, SPI, and SERIAL) currently
manage performance levels and operating points directly. This results
in code duplication across drivers, such as configuring a specific
level or finding and applying an OPP based on a clock frequency.
Introduce two new helper APIs, geni_se_set_perf_level() and
geni_se_set_perf_opp(), which address this issue by providing a
streamlined way for the GENI SE drivers to find and set the OPP
based on the desired performance level, thereby eliminating redundancy.
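The split between the two helpers can be sketched standalone. The OPP table,
the state variable, and the function names below are invented for the demo;
in the kernel the lookup is dev_pm_opp_find_freq_floor() and the apply step
is dev_pm_opp_set_opp() or dev_pm_opp_set_level():

```c
#include <assert.h>
#include <stddef.h>

struct opp { unsigned long freq_hz; unsigned int level; };

/* Invented frequency -> perf-level table, purely for illustration. */
static const struct opp opp_table[] = {
	{ 19200000, 1 }, { 50000000, 2 }, { 100000000, 3 },
};

static unsigned int current_level;	/* stands in for the perf-domain state */

static int set_perf_level_sketch(unsigned int level)
{
	current_level = level;		/* dev_pm_opp_set_level() analogue */
	return 0;
}

static int set_perf_opp_sketch(unsigned long clk_freq)
{
	const struct opp *found = NULL;
	size_t i;

	/* dev_pm_opp_find_freq_floor() analogue: highest OPP <= clk_freq */
	for (i = 0; i < sizeof(opp_table) / sizeof(opp_table[0]); i++)
		if (opp_table[i].freq_hz <= clk_freq)
			found = &opp_table[i];
	if (!found)
		return -1;		/* no OPP at or below the request */
	return set_perf_level_sketch(found->level);
}
```

A 60 MHz request lands on the 50 MHz OPP (level 2): the floor search rounds
down to the nearest supported rate rather than failing on an exact-match miss.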
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
drivers/soc/qcom/qcom-geni-se.c | 50 ++++++++++++++++++++++++++++++++
include/linux/soc/qcom/geni-se.h | 4 +++
2 files changed, 54 insertions(+)
diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
index b8e5066d4881..dc5f5bb52915 100644
--- a/drivers/soc/qcom/qcom-geni-se.c
+++ b/drivers/soc/qcom/qcom-geni-se.c
@@ -282,6 +282,12 @@ struct se_fw_hdr {
#define geni_setbits32(_addr, _v) writel(readl(_addr) | (_v), _addr)
#define geni_clrbits32(_addr, _v) writel(readl(_addr) & ~(_v), _addr)
+enum domain_idx {
+ DOMAIN_IDX_POWER,
+ DOMAIN_IDX_PERF,
+ DOMAIN_IDX_MAX
+};
+
/**
* geni_se_get_qup_hw_version() - Read the QUP wrapper Hardware version
* @se: Pointer to the corresponding serial engine.
@@ -1093,6 +1099,50 @@ int geni_se_resources_activate(struct geni_se *se)
}
EXPORT_SYMBOL_GPL(geni_se_resources_activate);
+/**
+ * geni_se_set_perf_level() - Set performance level for GENI SE.
+ * @se: Pointer to the struct geni_se instance.
+ * @level: The desired performance level.
+ *
+ * Sets the performance level by directly calling dev_pm_opp_set_level
+ * on the performance device associated with the SE.
+ *
+ * Return: 0 on success, or a negative error code on failure.
+ */
+int geni_se_set_perf_level(struct geni_se *se, unsigned long level)
+{
+ return dev_pm_opp_set_level(se->pd_list->pd_devs[DOMAIN_IDX_PERF], level);
+}
+EXPORT_SYMBOL_GPL(geni_se_set_perf_level);
+
+/**
+ * geni_se_set_perf_opp() - Set performance OPP for GENI SE by frequency.
+ * @se: Pointer to the struct geni_se instance.
+ * @clk_freq: The requested clock frequency.
+ *
+ * Finds the nearest operating performance point (OPP) for the given
+ * clock frequency and applies it to the SE's performance device.
+ *
+ * Return: 0 on success, or a negative error code on failure.
+ */
+int geni_se_set_perf_opp(struct geni_se *se, unsigned long clk_freq)
+{
+ struct device *perf_dev = se->pd_list->pd_devs[DOMAIN_IDX_PERF];
+ struct dev_pm_opp *opp;
+ int ret;
+
+ opp = dev_pm_opp_find_freq_floor(perf_dev, &clk_freq);
+ if (IS_ERR(opp)) {
+ dev_err(se->dev, "failed to find opp for freq %lu\n", clk_freq);
+ return PTR_ERR(opp);
+ }
+
+ ret = dev_pm_opp_set_opp(perf_dev, opp);
+ dev_pm_opp_put(opp);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(geni_se_set_perf_opp);
+
/**
* geni_se_domain_attach() - Attach power domains to a GENI SE device.
* @se: Pointer to the geni_se structure representing the GENI SE device.
diff --git a/include/linux/soc/qcom/geni-se.h b/include/linux/soc/qcom/geni-se.h
index 5f75159c5531..c5e6ab85df09 100644
--- a/include/linux/soc/qcom/geni-se.h
+++ b/include/linux/soc/qcom/geni-se.h
@@ -550,5 +550,9 @@ int geni_se_resources_deactivate(struct geni_se *se);
int geni_load_se_firmware(struct geni_se *se, enum geni_se_protocol_type protocol);
int geni_se_domain_attach(struct geni_se *se);
+
+int geni_se_set_perf_level(struct geni_se *se, unsigned long level);
+
+int geni_se_set_perf_opp(struct geni_se *se, unsigned long clk_freq);
#endif
#endif
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 12 Jan 2026 16:17:16 +0530",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
|
Add DT bindings for the QUP GENI I2C controller on SA8255p platforms.
The SA8255p platform abstracts resources such as clocks, interconnects
and GPIO pin configuration in firmware. The SCMI power and perf
protocols are used to request resource configurations.
SA8255p platform does not require the Serial Engine (SE) common properties
as the SE firmware is loaded and managed by the TrustZone (TZ) secure
environment.
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@oss.qualcomm.com>
Co-developed-by: Nikunj Kela <quic_nkela@quicinc.com>
Signed-off-by: Nikunj Kela <quic_nkela@quicinc.com>
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v2->v3:
- Added Reviewed-by tag
v1->v2:
Krzysztof:
- Added dma properties in example node
- Removed minItems from power-domains property
- Added in commit text about common property
---
.../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 +++++++++++++++++++
1 file changed, 64 insertions(+)
create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml
diff --git a/Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml b/Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml
new file mode 100644
index 000000000000..a61e40b5cbc1
--- /dev/null
+++ b/Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml
@@ -0,0 +1,64 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/i2c/qcom,sa8255p-geni-i2c.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Qualcomm SA8255p QUP GENI I2C Controller
+
+maintainers:
+ - Praveen Talari <praveen.talari@oss.qualcomm.com>
+
+properties:
+ compatible:
+ const: qcom,sa8255p-geni-i2c
+
+ reg:
+ maxItems: 1
+
+ dmas:
+ maxItems: 2
+
+ dma-names:
+ items:
+ - const: tx
+ - const: rx
+
+ interrupts:
+ maxItems: 1
+
+ power-domains:
+ maxItems: 2
+
+ power-domain-names:
+ items:
+ - const: power
+ - const: perf
+
+required:
+ - compatible
+ - reg
+ - interrupts
+ - power-domains
+
+allOf:
+ - $ref: /schemas/i2c/i2c-controller.yaml#
+
+unevaluatedProperties: false
+
+examples:
+ - |
+ #include <dt-bindings/interrupt-controller/arm-gic.h>
+ #include <dt-bindings/dma/qcom-gpi.h>
+
+ i2c@a90000 {
+ compatible = "qcom,sa8255p-geni-i2c";
+ reg = <0xa90000 0x4000>;
+ interrupts = <GIC_SPI 357 IRQ_TYPE_LEVEL_HIGH>;
+ dmas = <&gpi_dma0 0 0 QCOM_GPI_I2C>,
+ <&gpi_dma0 1 0 QCOM_GPI_I2C>;
+ dma-names = "tx", "rx";
+ power-domains = <&scmi0_pd 0>, <&scmi0_dvfs 0>;
+ power-domain-names = "power", "perf";
+ };
+...
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 12 Jan 2026 16:17:17 +0530",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
|
Refactor the resource initialization in geni_i2c_probe() by introducing
a new geni_i2c_resources_init() function that uses the common
geni_se_resources_init() helper and the clock frequency mapping, making
the probe function cleaner.
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v1->v2:
- Updated commit text.
---
drivers/i2c/busses/i2c-qcom-geni.c | 53 ++++++++++++------------------
1 file changed, 21 insertions(+), 32 deletions(-)
diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
index 58c32ffbd150..a4b13022e508 100644
--- a/drivers/i2c/busses/i2c-qcom-geni.c
+++ b/drivers/i2c/busses/i2c-qcom-geni.c
@@ -1042,6 +1042,23 @@ static int geni_i2c_init(struct geni_i2c_dev *gi2c)
return ret;
}
+static int geni_i2c_resources_init(struct geni_i2c_dev *gi2c)
+{
+ int ret;
+
+ ret = geni_se_resources_init(&gi2c->se);
+ if (ret)
+ return ret;
+
+ ret = geni_i2c_clk_map_idx(gi2c);
+ if (ret)
+ return dev_err_probe(gi2c->se.dev, ret, "Invalid clk frequency %d Hz\n",
+ gi2c->clk_freq_out);
+
+ return geni_icc_set_bw_ab(&gi2c->se, GENI_DEFAULT_BW, GENI_DEFAULT_BW,
+ Bps_to_icc(gi2c->clk_freq_out));
+}
+
static int geni_i2c_probe(struct platform_device *pdev)
{
struct geni_i2c_dev *gi2c;
@@ -1061,16 +1078,6 @@ static int geni_i2c_probe(struct platform_device *pdev)
desc = device_get_match_data(&pdev->dev);
- if (desc && desc->has_core_clk) {
- gi2c->core_clk = devm_clk_get(dev, "core");
- if (IS_ERR(gi2c->core_clk))
- return PTR_ERR(gi2c->core_clk);
- }
-
- gi2c->se.clk = devm_clk_get(dev, "se");
- if (IS_ERR(gi2c->se.clk) && !has_acpi_companion(dev))
- return PTR_ERR(gi2c->se.clk);
-
ret = device_property_read_u32(dev, "clock-frequency",
&gi2c->clk_freq_out);
if (ret) {
@@ -1085,16 +1092,15 @@ static int geni_i2c_probe(struct platform_device *pdev)
if (gi2c->irq < 0)
return gi2c->irq;
- ret = geni_i2c_clk_map_idx(gi2c);
- if (ret)
- return dev_err_probe(dev, ret, "Invalid clk frequency %d Hz\n",
- gi2c->clk_freq_out);
-
gi2c->adap.algo = &geni_i2c_algo;
init_completion(&gi2c->done);
spin_lock_init(&gi2c->lock);
platform_set_drvdata(pdev, gi2c);
+ ret = geni_i2c_resources_init(gi2c);
+ if (ret)
+ return ret;
+
/* Keep interrupts disabled initially to allow for low-power modes */
ret = devm_request_irq(dev, gi2c->irq, geni_i2c_irq, IRQF_NO_AUTOEN,
dev_name(dev), gi2c);
@@ -1107,23 +1113,6 @@ static int geni_i2c_probe(struct platform_device *pdev)
gi2c->adap.dev.of_node = dev->of_node;
strscpy(gi2c->adap.name, "Geni-I2C", sizeof(gi2c->adap.name));
- ret = geni_icc_get(&gi2c->se, desc ? desc->icc_ddr : "qup-memory");
- if (ret)
- return ret;
- /*
- * Set the bus quota for core and cpu to a reasonable value for
- * register access.
- * Set quota for DDR based on bus speed.
- */
- gi2c->se.icc_paths[GENI_TO_CORE].avg_bw = GENI_DEFAULT_BW;
- gi2c->se.icc_paths[CPU_TO_GENI].avg_bw = GENI_DEFAULT_BW;
- if (!desc || desc->icc_ddr)
- gi2c->se.icc_paths[GENI_TO_DDR].avg_bw = Bps_to_icc(gi2c->clk_freq_out);
-
- ret = geni_icc_set_bw(&gi2c->se);
- if (ret)
- return ret;
-
gi2c->suspended = 1;
pm_runtime_set_suspended(gi2c->se.dev);
pm_runtime_set_autosuspend_delay(gi2c->se.dev, I2C_AUTO_SUSPEND_DELAY);
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 12 Jan 2026 16:17:19 +0530",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
|
Move the serial engine setup into a new geni_i2c_init() function for a
cleaner probe, and use the runtime PM APIs to control resources instead
of direct clock-related APIs for better resource management.
This enables reuse of the serial engine initialization for features such
as hibernation and deep sleep, where hardware context is lost.
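The runtime-PM bracketing that geni_i2c_init() relies on can be sketched with
a stubbed usage counter. The stub names and the single `proto_is_i2c` knob are
invented for the demo; the real calls are pm_runtime_resume_and_get() and
pm_runtime_put():

```c
#include <assert.h>

#define ENXIO 6

/*
 * Userspace sketch of the resume_and_get/put bracketing used by
 * geni_i2c_init(): power up the device, run the setup body, and drop
 * the reference on both the success and the error path.
 */
static int usage_count;

static int pm_runtime_resume_and_get_stub(void)
{
	usage_count++;
	return 0;	/* the real API drops the reference itself on failure */
}

static void pm_runtime_put_stub(void)
{
	usage_count--;
}

static int geni_i2c_init_sketch(int proto_is_i2c)
{
	int ret = pm_runtime_resume_and_get_stub();

	if (ret < 0)
		return ret;

	ret = proto_is_i2c ? 0 : -ENXIO;	/* stands in for the setup body */

	pm_runtime_put_stub();	/* single exit: put on success and on error */
	return ret;
}
```

Either way the function exits, the usage count returns to its starting value,
which is what lets the device autosuspend again after probe-time setup.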
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v1->v2:
Bjorn:
- Updated commit text.
---
drivers/i2c/busses/i2c-qcom-geni.c | 154 ++++++++++++++---------------
1 file changed, 73 insertions(+), 81 deletions(-)
diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
index 3a04016db2c3..58c32ffbd150 100644
--- a/drivers/i2c/busses/i2c-qcom-geni.c
+++ b/drivers/i2c/busses/i2c-qcom-geni.c
@@ -976,10 +976,75 @@ static int setup_gpi_dma(struct geni_i2c_dev *gi2c)
return ret;
}
+static int geni_i2c_init(struct geni_i2c_dev *gi2c)
+{
+ const struct geni_i2c_desc *desc = NULL;
+ u32 proto, tx_depth;
+ bool fifo_disable;
+ int ret;
+
+ ret = pm_runtime_resume_and_get(gi2c->se.dev);
+ if (ret < 0) {
+ dev_err(gi2c->se.dev, "error turning on device :%d\n", ret);
+ return ret;
+ }
+
+ proto = geni_se_read_proto(&gi2c->se);
+ if (proto == GENI_SE_INVALID_PROTO) {
+ ret = geni_load_se_firmware(&gi2c->se, GENI_SE_I2C);
+ if (ret) {
+ dev_err_probe(gi2c->se.dev, ret, "i2c firmware load failed ret: %d\n", ret);
+ goto err;
+ }
+ } else if (proto != GENI_SE_I2C) {
+ ret = dev_err_probe(gi2c->se.dev, -ENXIO, "Invalid proto %d\n", proto);
+ goto err;
+ }
+
+ desc = device_get_match_data(gi2c->se.dev);
+ if (desc && desc->no_dma_support)
+ fifo_disable = false;
+ else
+ fifo_disable = readl_relaxed(gi2c->se.base + GENI_IF_DISABLE_RO) & FIFO_IF_DISABLE;
+
+ if (fifo_disable) {
+ /* FIFO is disabled, so we can only use GPI DMA */
+ gi2c->gpi_mode = true;
+ ret = setup_gpi_dma(gi2c);
+ if (ret)
+ goto err;
+
+ dev_dbg(gi2c->se.dev, "Using GPI DMA mode for I2C\n");
+ } else {
+ gi2c->gpi_mode = false;
+ tx_depth = geni_se_get_tx_fifo_depth(&gi2c->se);
+
+ /* I2C Master Hub Serial Elements doesn't have the HW_PARAM_0 register */
+ if (!tx_depth && desc)
+ tx_depth = desc->tx_fifo_depth;
+
+ if (!tx_depth) {
+ ret = dev_err_probe(gi2c->se.dev, -EINVAL,
+ "Invalid TX FIFO depth\n");
+ goto err;
+ }
+
+ gi2c->tx_wm = tx_depth - 1;
+ geni_se_init(&gi2c->se, gi2c->tx_wm, tx_depth);
+ geni_se_config_packing(&gi2c->se, BITS_PER_BYTE,
+ PACKING_BYTES_PW, true, true, true);
+
+ dev_dbg(gi2c->se.dev, "i2c fifo/se-dma mode. fifo depth:%d\n", tx_depth);
+ }
+
+err:
+ pm_runtime_put(gi2c->se.dev);
+ return ret;
+}
+
static int geni_i2c_probe(struct platform_device *pdev)
{
struct geni_i2c_dev *gi2c;
- u32 proto, tx_depth, fifo_disable;
int ret;
struct device *dev = &pdev->dev;
const struct geni_i2c_desc *desc = NULL;
@@ -1059,100 +1124,27 @@ static int geni_i2c_probe(struct platform_device *pdev)
if (ret)
return ret;
- ret = clk_prepare_enable(gi2c->core_clk);
- if (ret)
- return ret;
-
- ret = geni_se_resources_on(&gi2c->se);
- if (ret) {
- dev_err_probe(dev, ret, "Error turning on resources\n");
- goto err_clk;
- }
- proto = geni_se_read_proto(&gi2c->se);
- if (proto == GENI_SE_INVALID_PROTO) {
- ret = geni_load_se_firmware(&gi2c->se, GENI_SE_I2C);
- if (ret) {
- dev_err_probe(dev, ret, "i2c firmware load failed ret: %d\n", ret);
- goto err_resources;
- }
- } else if (proto != GENI_SE_I2C) {
- ret = dev_err_probe(dev, -ENXIO, "Invalid proto %d\n", proto);
- goto err_resources;
- }
-
- if (desc && desc->no_dma_support)
- fifo_disable = false;
- else
- fifo_disable = readl_relaxed(gi2c->se.base + GENI_IF_DISABLE_RO) & FIFO_IF_DISABLE;
-
- if (fifo_disable) {
- /* FIFO is disabled, so we can only use GPI DMA */
- gi2c->gpi_mode = true;
- ret = setup_gpi_dma(gi2c);
- if (ret)
- goto err_resources;
-
- dev_dbg(dev, "Using GPI DMA mode for I2C\n");
- } else {
- gi2c->gpi_mode = false;
- tx_depth = geni_se_get_tx_fifo_depth(&gi2c->se);
-
- /* I2C Master Hub Serial Elements doesn't have the HW_PARAM_0 register */
- if (!tx_depth && desc)
- tx_depth = desc->tx_fifo_depth;
-
- if (!tx_depth) {
- ret = dev_err_probe(dev, -EINVAL,
- "Invalid TX FIFO depth\n");
- goto err_resources;
- }
-
- gi2c->tx_wm = tx_depth - 1;
- geni_se_init(&gi2c->se, gi2c->tx_wm, tx_depth);
- geni_se_config_packing(&gi2c->se, BITS_PER_BYTE,
- PACKING_BYTES_PW, true, true, true);
-
- dev_dbg(dev, "i2c fifo/se-dma mode. fifo depth:%d\n", tx_depth);
- }
-
- clk_disable_unprepare(gi2c->core_clk);
- ret = geni_se_resources_off(&gi2c->se);
- if (ret) {
- dev_err_probe(dev, ret, "Error turning off resources\n");
- goto err_dma;
- }
-
- ret = geni_icc_disable(&gi2c->se);
- if (ret)
- goto err_dma;
-
gi2c->suspended = 1;
pm_runtime_set_suspended(gi2c->se.dev);
pm_runtime_set_autosuspend_delay(gi2c->se.dev, I2C_AUTO_SUSPEND_DELAY);
pm_runtime_use_autosuspend(gi2c->se.dev);
pm_runtime_enable(gi2c->se.dev);
+ ret = geni_i2c_init(gi2c);
+ if (ret < 0) {
+ pm_runtime_disable(gi2c->se.dev);
+ return ret;
+ }
+
ret = i2c_add_adapter(&gi2c->adap);
if (ret) {
dev_err_probe(dev, ret, "Error adding i2c adapter\n");
pm_runtime_disable(gi2c->se.dev);
- goto err_dma;
+ return ret;
}
dev_dbg(dev, "Geni-I2C adaptor successfully added\n");
- return ret;
-
-err_resources:
- geni_se_resources_off(&gi2c->se);
-err_clk:
- clk_disable_unprepare(gi2c->core_clk);
-
- return ret;
-
-err_dma:
- release_gpi_dma(gi2c);
-
return ret;
}
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 12 Jan 2026 16:17:18 +0530",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
|
To manage GENI serial engine resources during runtime power management,
drivers currently need to call separate functions for ICC, clock, and
SE resource operations in both the suspend and resume paths, resulting
in code duplication across drivers.
The new geni_se_resources_activate() and geni_se_resources_deactivate()
helper APIs address this issue by providing a streamlined way to
enable or disable all resources, thereby eliminating redundancy
across drivers.
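The activate path's error unwinding (enable ICC, then the core clock, then
the SE clocks, and roll back in reverse on failure) generalizes to the sketch
below, where the steps are plain booleans and `fail_at` is an invented knob
used only to force a failure:

```c
#include <assert.h>

/*
 * Standalone sketch of paired activate/deactivate helpers: three ordered
 * steps stand in for ICC, core clock and SE clocks, and a failed step
 * causes the steps that already succeeded to be undone in reverse order.
 */
#define NSTEPS 3

static int on[NSTEPS];
static int fail_at = -1;	/* invented knob: index of the step to fail */

static int step_on(int i)
{
	if (i == fail_at)
		return -1;
	on[i] = 1;
	return 0;
}

static int resources_activate_sketch(void)
{
	int i;

	for (i = 0; i < NSTEPS; i++) {
		if (step_on(i)) {
			while (--i >= 0)
				on[i] = 0;	/* reverse-order rollback */
			return -1;
		}
	}
	return 0;
}

static void resources_deactivate_sketch(void)
{
	int i;

	for (i = NSTEPS - 1; i >= 0; i--)
		on[i] = 0;	/* mirror image of the activation order */
}
```

Keeping activation and deactivation as mirror images is what lets the runtime
PM suspend/resume callbacks shrink to a single helper call each.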
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v1->v2:
Bjorn:
- Remove geni_se_resources_state() API.
- Used geni_se_resources_activate() and geni_se_resources_deactivate()
to enable/disable resources.
---
drivers/i2c/busses/i2c-qcom-geni.c | 28 +++++-----------------------
1 file changed, 5 insertions(+), 23 deletions(-)
diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
index a4b13022e508..b0a18e3d57d9 100644
--- a/drivers/i2c/busses/i2c-qcom-geni.c
+++ b/drivers/i2c/busses/i2c-qcom-geni.c
@@ -1160,18 +1160,15 @@ static int __maybe_unused geni_i2c_runtime_suspend(struct device *dev)
struct geni_i2c_dev *gi2c = dev_get_drvdata(dev);
disable_irq(gi2c->irq);
- ret = geni_se_resources_off(&gi2c->se);
+
+ ret = geni_se_resources_deactivate(&gi2c->se);
if (ret) {
enable_irq(gi2c->irq);
return ret;
-
- } else {
- gi2c->suspended = 1;
}
- clk_disable_unprepare(gi2c->core_clk);
-
- return geni_icc_disable(&gi2c->se);
+ gi2c->suspended = 1;
+ return ret;
}
static int __maybe_unused geni_i2c_runtime_resume(struct device *dev)
@@ -1179,28 +1176,13 @@ static int __maybe_unused geni_i2c_runtime_resume(struct device *dev)
int ret;
struct geni_i2c_dev *gi2c = dev_get_drvdata(dev);
- ret = geni_icc_enable(&gi2c->se);
+ ret = geni_se_resources_activate(&gi2c->se);
if (ret)
return ret;
- ret = clk_prepare_enable(gi2c->core_clk);
- if (ret)
- goto out_icc_disable;
-
- ret = geni_se_resources_on(&gi2c->se);
- if (ret)
- goto out_clk_disable;
-
enable_irq(gi2c->irq);
gi2c->suspended = 0;
- return 0;
-
-out_clk_disable:
- clk_disable_unprepare(gi2c->core_clk);
-out_icc_disable:
- geni_icc_disable(&gi2c->se);
-
return ret;
}
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 12 Jan 2026 16:17:20 +0530",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
The Qualcomm automotive SA8255p SoC relies on firmware to configure
platform resources, including clocks, interconnects and TLMM.
The driver requests resource operations over SCMI using the power
and performance protocols.
The SCMI power protocol enables or disables resources such as clocks,
interconnect paths and TLMM (GPIOs) through runtime PM framework APIs,
such as resume/suspend, to control power states (on/off).
The SCMI performance protocol manages the I2C frequency, with each
frequency represented by a performance level. The driver uses the
geni_se_set_perf_opp() API to request the desired frequency.
As part of geni_se_set_perf_opp(), the OPP for the requested frequency
is obtained using dev_pm_opp_find_freq_floor() and the performance
level is set using dev_pm_opp_set_opp().
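The floor lookup that geni_se_set_perf_opp() is described as performing can be modeled in plain C. The table values and helper name below are illustrative; only the floor semantics (pick the highest table entry at or below the requested rate, which is what dev_pm_opp_find_freq_floor() does) follow the text:

```c
/* Toy OPP table; real entries would come from the device tree. */
#include <assert.h>
#include <stddef.h>

static const unsigned long opp_table_hz[] = { 100000, 400000, 1000000 };

/* Return the floor OPP for freq, or 0 if freq is below every entry
 * (the kernel API would return an error pointer in that case). */
static unsigned long opp_find_freq_floor(unsigned long freq)
{
	unsigned long best = 0;
	size_t i;

	for (i = 0; i < sizeof(opp_table_hz) / sizeof(opp_table_hz[0]); i++)
		if (opp_table_hz[i] <= freq && opp_table_hz[i] > best)
			best = opp_table_hz[i];
	return best;
}
```

Requesting 400 kHz lands exactly on the 400000 entry; requesting 384 kHz floors down to 100000, since 400000 would exceed the request.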
Praveen Talari (12):
soc: qcom: geni-se: Refactor geni_icc_get() and make qup-memory ICC
path optional
soc: qcom: geni-se: Add geni_icc_set_bw_ab() function
soc: qcom: geni-se: Introduce helper API for resource initialization
soc: qcom: geni-se: Add resources activation/deactivation helpers
soc: qcom: geni-se: Introduce helper API for attaching power domains
soc: qcom: geni-se: Introduce helper APIs for performance control
dt-bindings: i2c: Describe SA8255p
i2c: qcom-geni: Isolate serial engine setup
i2c: qcom-geni: Move resource initialization to separate function
i2c: qcom-geni: Use resources helper APIs in runtime PM functions
i2c: qcom-geni: Store of_device_id data in driver private struct
i2c: qcom-geni: Enable I2C on SA8255p Qualcomm platforms
.../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 ++++
drivers/i2c/busses/i2c-qcom-geni.c | 300 +++++++++---------
drivers/soc/qcom/qcom-geni-se.c | 263 ++++++++++++++-
include/linux/soc/qcom/geni-se.h | 19 ++
4 files changed, 475 insertions(+), 171 deletions(-)
create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml
base-commit: f417b7ffcbef7d76b0d8860518f50dae0e7e5eda
--
2.34.1
|
To avoid repeatedly fetching and checking platform data across various
functions, store the struct of_device_id data directly in the i2c
private structure. This change enhances code maintainability and reduces
redundancy.
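As a standalone sketch of this pattern: resolve the per-compatible descriptor once at probe time and cache the pointer in the driver's private struct. The struct names mirror the driver, but the string-table lookup below merely stands in for device_get_match_data():

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct geni_i2c_desc {
	int has_core_clk;
	unsigned int tx_fifo_depth;
};

struct geni_i2c_dev {
	const struct geni_i2c_desc *dev_data;	/* cached at probe */
};

static const struct geni_i2c_desc i2c_master_hub = {
	.has_core_clk = 1,
	.tx_fifo_depth = 16,
};

/* Stand-in for the of_device_id match table. */
static const struct {
	const char *compatible;
	const struct geni_i2c_desc *data;
} match_table[] = {
	{ "qcom,geni-i2c-master-hub", &i2c_master_hub },
};

static const struct geni_i2c_desc *get_match_data(const char *compatible)
{
	size_t i;

	for (i = 0; i < sizeof(match_table) / sizeof(match_table[0]); i++)
		if (!strcmp(match_table[i].compatible, compatible))
			return match_table[i].data;
	return NULL;
}

/* Fetch the match data exactly once; every later function just
 * dereferences gi2c->dev_data instead of repeating the lookup. */
static int probe(struct geni_i2c_dev *gi2c, const char *compatible)
{
	gi2c->dev_data = get_match_data(compatible);
	return gi2c->dev_data ? 0 : -1;
}
```

The payoff is in the diff below: geni_i2c_init() and the PM callbacks no longer each call device_get_match_data() and NULL-check the result.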
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
drivers/i2c/busses/i2c-qcom-geni.c | 32 ++++++++++++++++--------------
1 file changed, 17 insertions(+), 15 deletions(-)
diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
index b0a18e3d57d9..1c9356e13b97 100644
--- a/drivers/i2c/busses/i2c-qcom-geni.c
+++ b/drivers/i2c/busses/i2c-qcom-geni.c
@@ -77,6 +77,13 @@ enum geni_i2c_err_code {
#define XFER_TIMEOUT HZ
#define RST_TIMEOUT HZ
+struct geni_i2c_desc {
+ bool has_core_clk;
+ char *icc_ddr;
+ bool no_dma_support;
+ unsigned int tx_fifo_depth;
+};
+
#define QCOM_I2C_MIN_NUM_OF_MSGS_MULTI_DESC 2
/**
@@ -121,13 +128,7 @@ struct geni_i2c_dev {
bool is_tx_multi_desc_xfer;
u32 num_msgs;
struct geni_i2c_gpi_multi_desc_xfer i2c_multi_desc_config;
-};
-
-struct geni_i2c_desc {
- bool has_core_clk;
- char *icc_ddr;
- bool no_dma_support;
- unsigned int tx_fifo_depth;
+ const struct geni_i2c_desc *dev_data;
};
struct geni_i2c_err_log {
@@ -978,7 +979,6 @@ static int setup_gpi_dma(struct geni_i2c_dev *gi2c)
static int geni_i2c_init(struct geni_i2c_dev *gi2c)
{
- const struct geni_i2c_desc *desc = NULL;
u32 proto, tx_depth;
bool fifo_disable;
int ret;
@@ -1001,8 +1001,7 @@ static int geni_i2c_init(struct geni_i2c_dev *gi2c)
goto err;
}
- desc = device_get_match_data(gi2c->se.dev);
- if (desc && desc->no_dma_support)
+ if (gi2c->dev_data->no_dma_support)
fifo_disable = false;
else
fifo_disable = readl_relaxed(gi2c->se.base + GENI_IF_DISABLE_RO) & FIFO_IF_DISABLE;
@@ -1020,8 +1019,8 @@ static int geni_i2c_init(struct geni_i2c_dev *gi2c)
tx_depth = geni_se_get_tx_fifo_depth(&gi2c->se);
/* I2C Master Hub Serial Elements doesn't have the HW_PARAM_0 register */
- if (!tx_depth && desc)
- tx_depth = desc->tx_fifo_depth;
+ if (!tx_depth && gi2c->dev_data->has_core_clk)
+ tx_depth = gi2c->dev_data->tx_fifo_depth;
if (!tx_depth) {
ret = dev_err_probe(gi2c->se.dev, -EINVAL,
@@ -1064,7 +1063,6 @@ static int geni_i2c_probe(struct platform_device *pdev)
struct geni_i2c_dev *gi2c;
int ret;
struct device *dev = &pdev->dev;
- const struct geni_i2c_desc *desc = NULL;
gi2c = devm_kzalloc(dev, sizeof(*gi2c), GFP_KERNEL);
if (!gi2c)
@@ -1076,7 +1074,7 @@ static int geni_i2c_probe(struct platform_device *pdev)
if (IS_ERR(gi2c->se.base))
return PTR_ERR(gi2c->se.base);
- desc = device_get_match_data(&pdev->dev);
+ gi2c->dev_data = device_get_match_data(&pdev->dev);
ret = device_property_read_u32(dev, "clock-frequency",
&gi2c->clk_freq_out);
@@ -1215,6 +1213,10 @@ static const struct dev_pm_ops geni_i2c_pm_ops = {
NULL)
};
+static const struct geni_i2c_desc geni_i2c = {
+ .icc_ddr = "qup-memory",
+};
+
static const struct geni_i2c_desc i2c_master_hub = {
.has_core_clk = true,
.icc_ddr = NULL,
@@ -1223,7 +1225,7 @@ static const struct geni_i2c_desc i2c_master_hub = {
};
static const struct of_device_id geni_i2c_dt_match[] = {
- { .compatible = "qcom,geni-i2c" },
+ { .compatible = "qcom,geni-i2c", .data = &geni_i2c },
{ .compatible = "qcom,geni-i2c-master-hub", .data = &i2c_master_hub },
{}
};
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 12 Jan 2026 16:17:21 +0530",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
|
The Qualcomm automotive SA8255p SoC relies on firmware to configure
platform resources, including clocks, interconnects and TLMM.
The driver requests resource operations over SCMI using the power
and performance protocols.
The SCMI power protocol enables or disables resources such as clocks,
interconnect paths and TLMM (GPIOs) through runtime PM framework APIs,
such as resume/suspend, to control power on/off.
The SCMI performance protocol manages the I2C frequency, with each
frequency represented by a performance level. The driver uses the
geni_se_set_perf_opp() API to request the desired frequency.
As part of geni_se_set_perf_opp(), the OPP for the requested frequency
is obtained using dev_pm_opp_find_freq_floor() and the performance
level is set using dev_pm_opp_set_opp().
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v1->v2:
- Initialized ret to 0 in the resume/suspend callbacks.
Bjorn:
- Used separate APIs for the resources enable/disable.
---
drivers/i2c/busses/i2c-qcom-geni.c | 53 ++++++++++++++++++++++--------
1 file changed, 40 insertions(+), 13 deletions(-)
diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
index 1c9356e13b97..72457b98f155 100644
--- a/drivers/i2c/busses/i2c-qcom-geni.c
+++ b/drivers/i2c/busses/i2c-qcom-geni.c
@@ -82,6 +82,10 @@ struct geni_i2c_desc {
char *icc_ddr;
bool no_dma_support;
unsigned int tx_fifo_depth;
+ int (*resources_init)(struct geni_se *se);
+ int (*set_rate)(struct geni_se *se, unsigned long freq);
+ int (*power_on)(struct geni_se *se);
+ int (*power_off)(struct geni_se *se);
};
#define QCOM_I2C_MIN_NUM_OF_MSGS_MULTI_DESC 2
@@ -203,8 +207,9 @@ static int geni_i2c_clk_map_idx(struct geni_i2c_dev *gi2c)
return -EINVAL;
}
-static void qcom_geni_i2c_conf(struct geni_i2c_dev *gi2c)
+static int qcom_geni_i2c_conf(struct geni_se *se, unsigned long freq)
{
+ struct geni_i2c_dev *gi2c = dev_get_drvdata(se->dev);
const struct geni_i2c_clk_fld *itr = gi2c->clk_fld;
u32 val;
@@ -217,6 +222,7 @@ static void qcom_geni_i2c_conf(struct geni_i2c_dev *gi2c)
val |= itr->t_low_cnt << LOW_COUNTER_SHFT;
val |= itr->t_cycle_cnt;
writel_relaxed(val, gi2c->se.base + SE_I2C_SCL_COUNTERS);
+ return 0;
}
static void geni_i2c_err_misc(struct geni_i2c_dev *gi2c)
@@ -908,7 +914,9 @@ static int geni_i2c_xfer(struct i2c_adapter *adap,
return ret;
}
- qcom_geni_i2c_conf(gi2c);
+ ret = gi2c->dev_data->set_rate(&gi2c->se, gi2c->clk_freq_out);
+ if (ret)
+ return ret;
if (gi2c->gpi_mode)
ret = geni_i2c_gpi_xfer(gi2c, msgs, num);
@@ -1041,8 +1049,9 @@ static int geni_i2c_init(struct geni_i2c_dev *gi2c)
return ret;
}
-static int geni_i2c_resources_init(struct geni_i2c_dev *gi2c)
+static int geni_i2c_resources_init(struct geni_se *se)
{
+ struct geni_i2c_dev *gi2c = dev_get_drvdata(se->dev);
int ret;
ret = geni_se_resources_init(&gi2c->se);
@@ -1095,7 +1104,7 @@ static int geni_i2c_probe(struct platform_device *pdev)
spin_lock_init(&gi2c->lock);
platform_set_drvdata(pdev, gi2c);
- ret = geni_i2c_resources_init(gi2c);
+ ret = gi2c->dev_data->resources_init(&gi2c->se);
if (ret)
return ret;
@@ -1154,15 +1163,17 @@ static void geni_i2c_shutdown(struct platform_device *pdev)
static int __maybe_unused geni_i2c_runtime_suspend(struct device *dev)
{
- int ret;
+ int ret = 0;
struct geni_i2c_dev *gi2c = dev_get_drvdata(dev);
disable_irq(gi2c->irq);
- ret = geni_se_resources_deactivate(&gi2c->se);
- if (ret) {
- enable_irq(gi2c->irq);
- return ret;
+ if (gi2c->dev_data->power_off) {
+ ret = gi2c->dev_data->power_off(&gi2c->se);
+ if (ret) {
+ enable_irq(gi2c->irq);
+ return ret;
+ }
}
gi2c->suspended = 1;
@@ -1171,12 +1182,14 @@ static int __maybe_unused geni_i2c_runtime_suspend(struct device *dev)
static int __maybe_unused geni_i2c_runtime_resume(struct device *dev)
{
- int ret;
+ int ret = 0;
struct geni_i2c_dev *gi2c = dev_get_drvdata(dev);
- ret = geni_se_resources_activate(&gi2c->se);
- if (ret)
- return ret;
+ if (gi2c->dev_data->power_on) {
+ ret = gi2c->dev_data->power_on(&gi2c->se);
+ if (ret)
+ return ret;
+ }
enable_irq(gi2c->irq);
gi2c->suspended = 0;
@@ -1215,6 +1228,10 @@ static const struct dev_pm_ops geni_i2c_pm_ops = {
static const struct geni_i2c_desc geni_i2c = {
.icc_ddr = "qup-memory",
+ .resources_init = geni_i2c_resources_init,
+ .set_rate = qcom_geni_i2c_conf,
+ .power_on = geni_se_resources_activate,
+ .power_off = geni_se_resources_deactivate,
};
static const struct geni_i2c_desc i2c_master_hub = {
@@ -1222,11 +1239,21 @@ static const struct geni_i2c_desc i2c_master_hub = {
.icc_ddr = NULL,
.no_dma_support = true,
.tx_fifo_depth = 16,
+ .resources_init = geni_i2c_resources_init,
+ .set_rate = qcom_geni_i2c_conf,
+ .power_on = geni_se_resources_activate,
+ .power_off = geni_se_resources_deactivate,
+};
+
+static const struct geni_i2c_desc sa8255p_geni_i2c = {
+ .resources_init = geni_se_domain_attach,
+ .set_rate = geni_se_set_perf_opp,
};
static const struct of_device_id geni_i2c_dt_match[] = {
{ .compatible = "qcom,geni-i2c", .data = &geni_i2c },
{ .compatible = "qcom,geni-i2c-master-hub", .data = &i2c_master_hub },
+ { .compatible = "qcom,sa8255p-geni-i2c", .data = &sa8255p_geni_i2c },
{}
};
MODULE_DEVICE_TABLE(of, geni_i2c_dt_match);
--
2.34.1
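The per-compatible ops table this patch introduces can be modeled as a small standalone program. The names are simplified stand-ins: the set_rate hook represents qcom_geni_i2c_conf() or geni_se_set_perf_opp(), and on firmware-managed platforms (the sa8255p descriptor) the power hooks are simply left NULL so the runtime PM callbacks skip them:

```c
#include <assert.h>
#include <stddef.h>

struct se;

/* Per-compatible descriptor: which operations this variant supports. */
struct i2c_desc {
	int (*set_rate)(struct se *se, unsigned long freq);
	int (*power_on)(struct se *se);
	int (*power_off)(struct se *se);
};

struct se {
	unsigned long rate;
	int powered;
};

static int conf_set_rate(struct se *se, unsigned long f)
{
	se->rate = f;
	return 0;
}

static int res_on(struct se *se)  { se->powered = 1; return 0; }
static int res_off(struct se *se) { se->powered = 0; return 0; }

/* Variant that owns its resources directly. */
static const struct i2c_desc plain_desc = {
	.set_rate  = conf_set_rate,
	.power_on  = res_on,
	.power_off = res_off,
};

/* Firmware-managed variant: power hooks intentionally NULL. */
static const struct i2c_desc fw_desc = {
	.set_rate = conf_set_rate,
};

/* Same shape as the patched runtime-resume path: call power_on only
 * when the descriptor provides it. */
static int runtime_resume(const struct i2c_desc *d, struct se *se)
{
	if (d->power_on)
		return d->power_on(se);
	return 0;
}
```

This keeps the PM callbacks themselves identical across variants; only the descriptor chosen from the of_device_id match decides what actually happens.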
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 12 Jan 2026 16:17:22 +0530",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
|
Hi Mukesh,
Anyone from Qualcomm willing to take a look here, please? Mukesh?
Viken?
Thanks,
Andi
|
{
"author": "Andi Shyti <andi.shyti@kernel.org>",
"date": "Wed, 14 Jan 2026 16:05:53 +0100",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
|
Minor comment.
Acked-by: Viken Dadhaniya <viken.dadhaniya@oss.qualcomm.com>
On 1/12/2026 4:17 PM, Praveen Talari wrote:
Double space.
|
{
"author": "Viken Dadhaniya <viken.dadhaniya@oss.qualcomm.com>",
"date": "Wed, 21 Jan 2026 13:17:20 +0530",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
|
Acked-by: Viken Dadhaniya <viken.dadhaniya@oss.qualcomm.com>
On 1/12/2026 4:17 PM, Praveen Talari wrote:
|
{
"author": "Viken Dadhaniya <viken.dadhaniya@oss.qualcomm.com>",
"date": "Wed, 21 Jan 2026 14:27:22 +0530",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
|
Acked-by: Viken Dadhaniya <viken.dadhaniya@oss.qualcomm.com>
On 1/12/2026 4:17 PM, Praveen Talari wrote:
|
{
"author": "Viken Dadhaniya <viken.dadhaniya@oss.qualcomm.com>",
"date": "Wed, 21 Jan 2026 14:28:46 +0530",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
|
Acked-by: Viken Dadhaniya <viken.dadhaniya@oss.qualcomm.com>
On 1/12/2026 4:17 PM, Praveen Talari wrote:
|
{
"author": "Viken Dadhaniya <viken.dadhaniya@oss.qualcomm.com>",
"date": "Wed, 21 Jan 2026 14:29:33 +0530",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
|
Acked-by: Viken Dadhaniya <viken.dadhaniya@oss.qualcomm.com>
On 1/12/2026 4:17 PM, Praveen Talari wrote:
|
{
"author": "Viken Dadhaniya <viken.dadhaniya@oss.qualcomm.com>",
"date": "Wed, 21 Jan 2026 15:51:16 +0530",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
|
On 1/12/26 11:47 AM, Praveen Talari wrote:
This calls dev_pm_opp_set_rate(se->dev, 0), dropping the performance state
vote, but the other function doesn't have a counterpart to bring it back.
Konrad
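Konrad's point can be illustrated with a toy balanced pair: if the suspend path drops the performance-level vote (the dev_pm_opp_set_rate(se->dev, 0) case), the resume path needs a counterpart that restores the previously requested rate, otherwise the device comes back voting for level 0. All names below are illustrative, not a proposed fix:

```c
#include <assert.h>

static unsigned long cur_rate;	/* the currently voted rate */
static unsigned long saved_rate;	/* remembered across suspend */

static void set_rate(unsigned long hz)
{
	cur_rate = hz;
}

static void power_off_balanced(void)
{
	saved_rate = cur_rate;	/* remember the vote ... */
	set_rate(0);		/* ... then drop it */
}

static void power_on_balanced(void)
{
	set_rate(saved_rate);	/* counterpart: bring the vote back */
}
```

Without the restore in power_on_balanced(), every suspend/resume cycle would silently leave the performance vote at 0.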
|
{
"author": "Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>",
"date": "Fri, 30 Jan 2026 13:05:21 +0100",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
|
On 1/12/26 11:47 AM, Praveen Talari wrote:
The second argument is a NOP after patch 1.. originally I think I had a
cross-subsys patch to get rid of that, neither solution is exactly pretty..
But otherwise, this looks good
Konrad
|
{
"author": "Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>",
"date": "Fri, 30 Jan 2026 13:10:15 +0100",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
On 1/12/26 11:47 AM, Praveen Talari wrote:
[...]
All clk APIs already perform a null-check internally, perhaps this call
could be merged into geni_se_clks_off()?
Konrad
|
{
"author": "Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>",
"date": "Fri, 30 Jan 2026 13:11:31 +0100",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
On 1/12/26 11:47 AM, Praveen Talari wrote:
[...]
Please sprinkle somewhere into this kerneldoc that this is specifically
for the SCMI-auto-VM setup, otherwise "the necessary power domains"
is at best confusing to an external reader
Konrad
|
{
"author": "Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>",
"date": "Fri, 30 Jan 2026 13:12:59 +0100",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
On 1/12/26 11:47 AM, Praveen Talari wrote:
[...]
This function is never used
I think with the SPI driver in mind (which seems to do a simple rateset
for both backends) we could do:
Then, we can do:
	struct device *perf_dev = se->dev;

	if (se->pd_list && se->pd_list->pd_devs[DOMAIN_IDX_PERF])
		perf_dev = se->pd_list->pd_devs[DOMAIN_IDX_PERF];
and reuse it in both cases, completely transparently to the caller
Konrad
|
{
"author": "Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>",
"date": "Fri, 30 Jan 2026 13:23:12 +0100",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
On 1/12/26 11:47 AM, Praveen Talari wrote:
Again, this should be a NOP after patch 1, that we can drop
Konrad
|
{
"author": "Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>",
"date": "Fri, 30 Jan 2026 13:29:28 +0100",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
On 1/12/26 11:47 AM, Praveen Talari wrote:
[...]
This means, on SCMI devices you don't drop the vote on the POWER
domain (or PERF, for that matter) and switch the GPIOs to a _suspend
state - is that by design?
Konrad
|
{
"author": "Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>",
"date": "Fri, 30 Jan 2026 13:34:47 +0100",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
Hi Konrad,
On 1/30/2026 6:04 PM, Konrad Dybcio wrote:
With PD_FLAG_DEV_LINK_ON enabled, every pm_runtime_get_sync() or
pm_runtime_put_sync() on the device triggers a corresponding genpd
on/off transition. These transitions are translated into SCMI
power-domain commands, allowing the firmware (GearVM) to perform the
actual enable/disable sequencing.
Thanks,
Praveen.
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Fri, 30 Jan 2026 22:14:12 +0530",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
Hi Konrad,
On 1/30/2026 5:35 PM, Konrad Dybcio wrote:
This does not apply to I²C, since I²C lacks an OPP table, so the
callback is only relevant for SPI and UART. All the refactored APIs were
added as generic interfaces shared across I²C, SPI, and UART.
Thanks,
Praveen Talari
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Fri, 30 Jan 2026 22:18:07 +0530",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
Hi Konrad
On 1/30/2026 5:53 PM, Konrad Dybcio wrote:
It will be used by the UART driver, not by I2C/SPI.
The APIs were added as generic interfaces shared across I²C/SPI, and
this one is specific to firmware control, not Linux control.
I don't think it is needed, since this is specific to firmware control
rather than Linux control.
Thanks,
Praveen Talari
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Fri, 30 Jan 2026 22:24:06 +0530",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
On 1/30/2026 5:42 PM, Konrad Dybcio wrote:
will update in next patch.
Thanks.
Praveen Talari
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Fri, 30 Jan 2026 22:25:12 +0530",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
Hi Konrad
On 1/30/2026 5:40 PM, Konrad Dybcio wrote:
I will drop the second argument once these changes are ported across
UART and SPI as well.
Thanks,
Praveen Talari
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Fri, 30 Jan 2026 22:30:04 +0530",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
On 1/30/26 5:44 PM, Praveen Talari wrote:
Does that handle the >1 pd case too? If so, then all good
Konrad
|
{
"author": "Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>",
"date": "Mon, 2 Feb 2026 15:53:53 +0100",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
Hi Konrad,
On 1/30/2026 5:59 PM, Konrad Dybcio wrote:
Will do in next patch.
Thanks,
Praveen
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 2 Feb 2026 21:49:07 +0530",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
Hi
On 1/30/2026 5:42 PM, Konrad Dybcio wrote:
Sure, will do in next patch.
Thanks,
Praveen
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 2 Feb 2026 21:49:37 +0530",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
The Qualcomm automotive SA8255p SoC relies on firmware to configure
platform resources, including clocks, interconnects and TLMM.
The driver requests resource operations over SCMI using the power
and performance protocols.
The SCMI power protocol enables or disables resources like clocks,
interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs,
such as resume/suspend, to control power states (on/off).
The SCMI performance protocol manages the I2C frequency, with each
frequency rate represented by a performance level. The driver uses the
geni_se_set_perf_opp() API to request the desired frequency rate.
As part of geni_se_set_perf_opp(), the OPP for the requested frequency
is obtained using dev_pm_opp_find_freq_floor() and the performance
level is set using dev_pm_opp_set_opp().
Praveen Talari (12):
soc: qcom: geni-se: Refactor geni_icc_get() and make qup-memory ICC
path optional
soc: qcom: geni-se: Add geni_icc_set_bw_ab() function
soc: qcom: geni-se: Introduce helper API for resource initialization
soc: qcom: geni-se: Add resources activation/deactivation helpers
soc: qcom: geni-se: Introduce helper API for attaching power domains
soc: qcom: geni-se: Introduce helper APIs for performance control
dt-bindings: i2c: Describe SA8255p
i2c: qcom-geni: Isolate serial engine setup
i2c: qcom-geni: Move resource initialization to separate function
i2c: qcom-geni: Use resources helper APIs in runtime PM functions
i2c: qcom-geni: Store of_device_id data in driver private struct
i2c: qcom-geni: Enable I2C on SA8255p Qualcomm platforms
.../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 ++++
drivers/i2c/busses/i2c-qcom-geni.c | 300 +++++++++---------
drivers/soc/qcom/qcom-geni-se.c | 263 ++++++++++++++-
include/linux/soc/qcom/geni-se.h | 19 ++
4 files changed, 475 insertions(+), 171 deletions(-)
create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml
base-commit: f417b7ffcbef7d76b0d8860518f50dae0e7e5eda
--
2.34.1
|
Hi
On 1/30/2026 5:41 PM, Konrad Dybcio wrote:
Sure, will do in next patch.
Thanks,
Praveen
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 2 Feb 2026 21:50:09 +0530",
"thread_id": "20260112104722.591521-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v4 0/3] bpf/verifier: Expand the usage scenarios of bpf_kptr_xchg
|
From: Chengkaitao <chengkaitao@kylinos.cn>
When using bpf_kptr_xchg, we triggered the following error:
31: (85) call bpf_kptr_xchg#194
function calls are not allowed while holding a lock
bpf_kptr_xchg is safe to call in lock-held contexts, so [patch 1/3]
extends its usage scope accordingly.
Currently, test cases combining bpf_kptr_xchg and bpf_rbtree_* must
follow this approach:
bpf_spin_lock(&lock);
rb_n = bpf_rbtree_root(&root);
while (rb_n && can_loop) {
rb_n = bpf_rbtree_remove(&root, rb_n);
if (!rb_n)
goto fail;
tnode = container_of(rb_n, struct tree_node, node);
node_data = bpf_kptr_xchg(&tnode->node_data, NULL);
if (!node_data)
goto fail;
data = node_data->data;
/* use data to do something */
node_data = bpf_kptr_xchg(&tnode->node_data, node_data);
if (node_data)
goto fail;
bpf_rbtree_add(&root, rb_n, less);
if (lookup_key < tnode->key)
rb_n = bpf_rbtree_left(&root, rb_n);
else
rb_n = bpf_rbtree_right(&root, rb_n);
}
bpf_spin_unlock(&lock);
The above illustrates a lock-remove-read-add-unlock workflow, which
performs poorly. To address this, [patch 2/3] introduces support for a
streamlined lock-read-unlock operation.
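At its core, bpf_kptr_xchg is an atomic pointer exchange with ownership transfer: the caller stores a new pointer (or NULL) into the slot and receives the previous occupant. A userspace sketch of those semantics with C11 atomics (illustrative only, not kernel code; the global slot stands in for a __kptr field in a map value):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

struct node_data { int data; };

/* Hypothetical slot playing the role of a __kptr field. */
static _Atomic(struct node_data *) slot;

/* Mimics bpf_kptr_xchg(): store the new value, return the old owner. */
static struct node_data *kptr_xchg(struct node_data *new)
{
	return atomic_exchange(&slot, new);
}
```

Exchanging NULL in takes ownership of whatever was stored; exchanging the pointer back returns ownership, mirroring the two bpf_kptr_xchg() calls in the loop above.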
Changes in v4:
- Fix the dead logic issue in the test case
Changes in v3:
- Fix compilation errors
Changes in v2:
- Allow using bpf_kptr_xchg even if the NON_OWN_REF flag is set
- Add test case
Link to V3:
https://lore.kernel.org/all/20260202055818.78231-1-pilgrimtao@gmail.com/
Link to V2:
https://lore.kernel.org/all/20260201031607.32940-1-pilgrimtao@gmail.com/
Link to V1:
https://lore.kernel.org/all/20260122081426.78472-1-pilgrimtao@gmail.com/
Chengkaitao (3):
bpf/verifier: allow calling bpf_kptr_xchg while holding a lock
bpf/verifier: allow using bpf_kptr_xchg even if the NON_OWN_REF flag
is set
selftests/bpf: Add supplementary tests for bpf_kptr_xchg
kernel/bpf/verifier.c | 7 +-
.../testing/selftests/bpf/prog_tests/rbtree.c | 6 +
tools/testing/selftests/bpf/progs/bpf_misc.h | 4 +
.../selftests/bpf/progs/rbtree_search_kptr.c | 167 ++++++++++++++++++
4 files changed, 182 insertions(+), 2 deletions(-)
create mode 100644 tools/testing/selftests/bpf/progs/rbtree_search_kptr.c
--
2.50.1 (Apple Git-155)
|
From: Chengkaitao <chengkaitao@kylinos.cn>
For the following scenario:
struct tree_node {
struct bpf_rb_node node;
struct request __kptr *req;
u64 key;
};
struct bpf_rb_root tree_root __contains(tree_node, node);
struct bpf_spin_lock tree_lock;
If we need to traverse all nodes in the rbtree, retrieve the __kptr
pointer from each node, and read kernel data from the referenced
object, using bpf_kptr_xchg appears unavoidable.
This patch relaxes the BPF verifier check so that bpf_kptr_xchg may be
called while holding a lock.
Signed-off-by: Chengkaitao <chengkaitao@kylinos.cn>
---
kernel/bpf/verifier.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 3135643d5695..05a6a6606b6c 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -20387,7 +20387,8 @@ static int do_check_insn(struct bpf_verifier_env *env, bool *do_print_state)
if (env->cur_state->active_locks) {
if ((insn->src_reg == BPF_REG_0 &&
- insn->imm != BPF_FUNC_spin_unlock) ||
+ insn->imm != BPF_FUNC_spin_unlock &&
+ insn->imm != BPF_FUNC_kptr_xchg) ||
(insn->src_reg == BPF_PSEUDO_KFUNC_CALL &&
(insn->off != 0 || !kfunc_spin_allowed(insn->imm)))) {
verbose(env,
--
2.50.1 (Apple Git-155)
|
{
"author": "Chengkaitao <pilgrimtao@gmail.com>",
"date": "Mon, 2 Feb 2026 17:00:49 +0800",
"thread_id": "20260202090051.87802-1-pilgrimtao@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH v4 0/3] bpf/verifier: Expand the usage scenarios of bpf_kptr_xchg
|
From: Chengkaitao <chengkaitao@kylinos.cn>
When using bpf_kptr_xchg, we triggered the following error:
31: (85) call bpf_kptr_xchg#194
function calls are not allowed while holding a lock
bpf_kptr_xchg is safe to call in lock-held contexts, so [patch 1/3]
extends its usage scope accordingly.
Currently, test cases combining bpf_kptr_xchg and bpf_rbtree_* must
follow this approach:
bpf_spin_lock(&lock);
rb_n = bpf_rbtree_root(&root);
while (rb_n && can_loop) {
rb_n = bpf_rbtree_remove(&root, rb_n);
if (!rb_n)
goto fail;
tnode = container_of(rb_n, struct tree_node, node);
node_data = bpf_kptr_xchg(&tnode->node_data, NULL);
if (!node_data)
goto fail;
data = node_data->data;
/* use data to do something */
node_data = bpf_kptr_xchg(&tnode->node_data, node_data);
if (node_data)
goto fail;
bpf_rbtree_add(&root, rb_n, less);
if (lookup_key < tnode->key)
rb_n = bpf_rbtree_left(&root, rb_n);
else
rb_n = bpf_rbtree_right(&root, rb_n);
}
bpf_spin_unlock(&lock);
The above illustrates a lock-remove-read-add-unlock workflow, which
performs poorly. To address this, [patch 2/3] introduces support for a
streamlined lock-read-unlock operation.
Changes in v4:
- Fix the dead logic issue in the test case
Changes in v3:
- Fix compilation errors
Changes in v2:
- Allow using bpf_kptr_xchg even if the NON_OWN_REF flag is set
- Add test case
Link to V3:
https://lore.kernel.org/all/20260202055818.78231-1-pilgrimtao@gmail.com/
Link to V2:
https://lore.kernel.org/all/20260201031607.32940-1-pilgrimtao@gmail.com/
Link to V1:
https://lore.kernel.org/all/20260122081426.78472-1-pilgrimtao@gmail.com/
Chengkaitao (3):
bpf/verifier: allow calling bpf_kptr_xchg while holding a lock
bpf/verifier: allow using bpf_kptr_xchg even if the NON_OWN_REF flag
is set
selftests/bpf: Add supplementary tests for bpf_kptr_xchg
kernel/bpf/verifier.c | 7 +-
.../testing/selftests/bpf/prog_tests/rbtree.c | 6 +
tools/testing/selftests/bpf/progs/bpf_misc.h | 4 +
.../selftests/bpf/progs/rbtree_search_kptr.c | 167 ++++++++++++++++++
4 files changed, 182 insertions(+), 2 deletions(-)
create mode 100644 tools/testing/selftests/bpf/progs/rbtree_search_kptr.c
--
2.50.1 (Apple Git-155)
|
From: Chengkaitao <chengkaitao@kylinos.cn>
When traversing an rbtree using bpf_rbtree_left/right, if bpf_kptr_xchg
is used to access the __kptr pointer contained in a node, it currently
requires first removing the node with bpf_rbtree_remove and clearing the
NON_OWN_REF flag, then re-adding the node to the original rbtree with
bpf_rbtree_add after usage. This process significantly degrades rbtree
traversal performance. The patch enables accessing __kptr pointers with
the NON_OWN_REF flag set while holding the lock, eliminating the need
for this remove-read-readd sequence.
Signed-off-by: Chengkaitao <chengkaitao@kylinos.cn>
Signed-off-by: Feng Yang <yangfeng@kylinos.cn>
---
kernel/bpf/verifier.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 05a6a6606b6c..bb3ff4bbb3a2 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -9260,7 +9260,8 @@ static const struct bpf_reg_types timer_types = { .types = { PTR_TO_MAP_VALUE }
static const struct bpf_reg_types kptr_xchg_dest_types = {
.types = {
PTR_TO_MAP_VALUE,
- PTR_TO_BTF_ID | MEM_ALLOC
+ PTR_TO_BTF_ID | MEM_ALLOC,
+ PTR_TO_BTF_ID | MEM_ALLOC | NON_OWN_REF
}
};
static const struct bpf_reg_types dynptr_types = {
@@ -9420,6 +9421,7 @@ static int check_reg_type(struct bpf_verifier_env *env, u32 regno,
}
case PTR_TO_BTF_ID | MEM_ALLOC:
case PTR_TO_BTF_ID | MEM_PERCPU | MEM_ALLOC:
+ case PTR_TO_BTF_ID | MEM_ALLOC | NON_OWN_REF:
if (meta->func_id != BPF_FUNC_spin_lock && meta->func_id != BPF_FUNC_spin_unlock &&
meta->func_id != BPF_FUNC_kptr_xchg) {
verifier_bug(env, "unimplemented handling of MEM_ALLOC");
--
2.50.1 (Apple Git-155)
|
{
"author": "Chengkaitao <pilgrimtao@gmail.com>",
"date": "Mon, 2 Feb 2026 17:00:50 +0800",
"thread_id": "20260202090051.87802-1-pilgrimtao@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH v4 0/3] bpf/verifier: Expand the usage scenarios of bpf_kptr_xchg
|
From: Chengkaitao <chengkaitao@kylinos.cn>
When using bpf_kptr_xchg, we triggered the following error:
31: (85) call bpf_kptr_xchg#194
function calls are not allowed while holding a lock
bpf_kptr_xchg is safe to call in lock-held contexts, so [patch 1/3]
extends its usage scope accordingly.
Currently, test cases combining bpf_kptr_xchg and bpf_rbtree_* must
follow this approach:
bpf_spin_lock(&lock);
rb_n = bpf_rbtree_root(&root);
while (rb_n && can_loop) {
rb_n = bpf_rbtree_remove(&root, rb_n);
if (!rb_n)
goto fail;
tnode = container_of(rb_n, struct tree_node, node);
node_data = bpf_kptr_xchg(&tnode->node_data, NULL);
if (!node_data)
goto fail;
data = node_data->data;
/* use data to do something */
node_data = bpf_kptr_xchg(&tnode->node_data, node_data);
if (node_data)
goto fail;
bpf_rbtree_add(&root, rb_n, less);
if (lookup_key < tnode->key)
rb_n = bpf_rbtree_left(&root, rb_n);
else
rb_n = bpf_rbtree_right(&root, rb_n);
}
bpf_spin_unlock(&lock);
The above illustrates a lock-remove-read-add-unlock workflow, which
performs poorly. To address this, [patch 2/3] introduces support for a
streamlined lock-read-unlock operation.
Changes in v4:
- Fix the dead logic issue in the test case
Changes in v3:
- Fix compilation errors
Changes in v2:
- Allow using bpf_kptr_xchg even if the NON_OWN_REF flag is set
- Add test case
Link to V3:
https://lore.kernel.org/all/20260202055818.78231-1-pilgrimtao@gmail.com/
Link to V2:
https://lore.kernel.org/all/20260201031607.32940-1-pilgrimtao@gmail.com/
Link to V1:
https://lore.kernel.org/all/20260122081426.78472-1-pilgrimtao@gmail.com/
Chengkaitao (3):
bpf/verifier: allow calling bpf_kptr_xchg while holding a lock
bpf/verifier: allow using bpf_kptr_xchg even if the NON_OWN_REF flag
is set
selftests/bpf: Add supplementary tests for bpf_kptr_xchg
kernel/bpf/verifier.c | 7 +-
.../testing/selftests/bpf/prog_tests/rbtree.c | 6 +
tools/testing/selftests/bpf/progs/bpf_misc.h | 4 +
.../selftests/bpf/progs/rbtree_search_kptr.c | 167 ++++++++++++++++++
4 files changed, 182 insertions(+), 2 deletions(-)
create mode 100644 tools/testing/selftests/bpf/progs/rbtree_search_kptr.c
--
2.50.1 (Apple Git-155)
|
From: Chengkaitao <chengkaitao@kylinos.cn>
The tests exercise the two preceding changes:
1. bpf_kptr_xchg can be used while holding a lock.
2. When the rb_node contains a __kptr pointer, no remove-read-add
operation is needed.
This patch implements the following workflow:
1. Construct a rbtree with 16 elements.
2. Traverse the rbtree, locate the kptr pointer in the target node,
and read the content pointed to by the pointer.
3. Remove all nodes from the rbtree.
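The search in step 2 is an ordinary binary-search-tree descent: go left when the lookup key is smaller than the node's key, right otherwise, exactly what the bpf_rbtree_left()/bpf_rbtree_right() calls express. A plain-C version of that descent (hypothetical node struct, for illustration only):

```c
#include <assert.h>
#include <stddef.h>

struct tnode {
	unsigned long key;
	int data;
	struct tnode *left, *right;
};

/* Descend like the selftest: left if key < node key, else right. */
static struct tnode *bst_search(struct tnode *n, unsigned long key)
{
	while (n) {
		if (n->key == key)
			return n;
		n = (key < n->key) ? n->left : n->right;
	}
	return NULL;
}
```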
Signed-off-by: Chengkaitao <chengkaitao@kylinos.cn>
Signed-off-by: Feng Yang <yangfeng@kylinos.cn>
---
.../testing/selftests/bpf/prog_tests/rbtree.c | 6 +
tools/testing/selftests/bpf/progs/bpf_misc.h | 4 +
.../selftests/bpf/progs/rbtree_search_kptr.c | 167 ++++++++++++++++++
3 files changed, 177 insertions(+)
create mode 100644 tools/testing/selftests/bpf/progs/rbtree_search_kptr.c
diff --git a/tools/testing/selftests/bpf/prog_tests/rbtree.c b/tools/testing/selftests/bpf/prog_tests/rbtree.c
index d8f3d7a45fe9..a854fb38e418 100644
--- a/tools/testing/selftests/bpf/prog_tests/rbtree.c
+++ b/tools/testing/selftests/bpf/prog_tests/rbtree.c
@@ -9,6 +9,7 @@
#include "rbtree_btf_fail__wrong_node_type.skel.h"
#include "rbtree_btf_fail__add_wrong_type.skel.h"
#include "rbtree_search.skel.h"
+#include "rbtree_search_kptr.skel.h"
static void test_rbtree_add_nodes(void)
{
@@ -193,3 +194,8 @@ void test_rbtree_search(void)
{
RUN_TESTS(rbtree_search);
}
+
+void test_rbtree_search_kptr(void)
+{
+ RUN_TESTS(rbtree_search_kptr);
+}
diff --git a/tools/testing/selftests/bpf/progs/bpf_misc.h b/tools/testing/selftests/bpf/progs/bpf_misc.h
index c9bfbe1bafc1..0904fe14ad1d 100644
--- a/tools/testing/selftests/bpf/progs/bpf_misc.h
+++ b/tools/testing/selftests/bpf/progs/bpf_misc.h
@@ -188,6 +188,10 @@
#define POINTER_VALUE 0xbadcafe
#define TEST_DATA_LEN 64
+#ifndef __aligned
+#define __aligned(x) __attribute__((aligned(x)))
+#endif
+
#ifndef __used
#define __used __attribute__((used))
#endif
diff --git a/tools/testing/selftests/bpf/progs/rbtree_search_kptr.c b/tools/testing/selftests/bpf/progs/rbtree_search_kptr.c
new file mode 100644
index 000000000000..069fc64b0167
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/rbtree_search_kptr.c
@@ -0,0 +1,167 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 KylinSoft Corporation. */
+
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+#include "bpf_experimental.h"
+
+#define NR_NODES 16
+
+struct node_data {
+ int data;
+};
+
+struct tree_node {
+ struct bpf_rb_node node;
+ u64 key;
+ struct node_data __kptr * node_data;
+};
+
+#define private(name) SEC(".data." #name) __hidden __aligned(8)
+
+private(A) struct bpf_rb_root root __contains(tree_node, node);
+private(A) struct bpf_spin_lock lock;
+
+static bool less(struct bpf_rb_node *a, const struct bpf_rb_node *b)
+{
+ struct tree_node *node_a, *node_b;
+
+ node_a = container_of(a, struct tree_node, node);
+ node_b = container_of(b, struct tree_node, node);
+
+ return node_a->key < node_b->key;
+}
+
+SEC("syscall")
+__retval(0)
+long rbtree_search_kptr(void *ctx)
+{
+ struct tree_node *tnode;
+ struct bpf_rb_node *rb_n;
+ struct node_data __kptr * node_data;
+ int lookup_key = NR_NODES / 2;
+ int lookup_data = NR_NODES / 2;
+ int i, data, ret = 0;
+
+ for (i = 0; i < NR_NODES && can_loop; i++) {
+ tnode = bpf_obj_new(typeof(*tnode));
+ if (!tnode)
+ return __LINE__;
+
+ node_data = bpf_obj_new(typeof(*node_data));
+ if (!node_data) {
+ bpf_obj_drop(tnode);
+ return __LINE__;
+ }
+
+ tnode->key = i;
+ node_data->data = i;
+
+ node_data = bpf_kptr_xchg(&tnode->node_data, node_data);
+ if (node_data)
+ bpf_obj_drop(node_data);
+
+ bpf_spin_lock(&lock);
+ bpf_rbtree_add(&root, &tnode->node, less);
+ bpf_spin_unlock(&lock);
+ }
+
+ bpf_spin_lock(&lock);
+ rb_n = bpf_rbtree_root(&root);
+ while (rb_n && can_loop) {
+ tnode = container_of(rb_n, struct tree_node, node);
+ node_data = bpf_kptr_xchg(&tnode->node_data, NULL);
+ if (!node_data) {
+ ret = __LINE__;
+ goto fail;
+ }
+
+ data = node_data->data;
+ node_data = bpf_kptr_xchg(&tnode->node_data, node_data);
+ if (node_data) {
+ bpf_spin_unlock(&lock);
+ bpf_obj_drop(node_data);
+ return __LINE__;
+ }
+
+ if (lookup_key == tnode->key) {
+ if (data == lookup_data)
+ break;
+
+ ret = __LINE__;
+ goto fail;
+ }
+
+ if (lookup_key < tnode->key)
+ rb_n = bpf_rbtree_left(&root, rb_n);
+ else
+ rb_n = bpf_rbtree_right(&root, rb_n);
+ }
+ bpf_spin_unlock(&lock);
+
+ while (can_loop) {
+ bpf_spin_lock(&lock);
+ rb_n = bpf_rbtree_first(&root);
+ if (!rb_n) {
+ bpf_spin_unlock(&lock);
+ return 0;
+ }
+
+ rb_n = bpf_rbtree_remove(&root, rb_n);
+ if (!rb_n) {
+ ret = __LINE__;
+ goto fail;
+ }
+ bpf_spin_unlock(&lock);
+
+ tnode = container_of(rb_n, struct tree_node, node);
+
+ node_data = bpf_kptr_xchg(&tnode->node_data, NULL);
+ if (node_data)
+ bpf_obj_drop(node_data);
+
+ bpf_obj_drop(tnode);
+ }
+
+ return 0;
+fail:
+ bpf_spin_unlock(&lock);
+ return ret;
+}
+
+
+SEC("syscall")
+__failure __msg("R1 type=scalar expected=map_value, ptr_, ptr_")
+long non_own_ref_kptr_xchg_no_lock(void *ctx)
+{
+ struct tree_node *tnode;
+ struct bpf_rb_node *rb_n;
+ struct node_data __kptr * node_data;
+ int data;
+
+ bpf_spin_lock(&lock);
+ rb_n = bpf_rbtree_first(&root);
+ if (!rb_n) {
+ bpf_spin_unlock(&lock);
+ return __LINE__;
+ }
+ bpf_spin_unlock(&lock);
+
+ tnode = container_of(rb_n, struct tree_node, node);
+ node_data = bpf_kptr_xchg(&tnode->node_data, NULL);
+ if (!node_data)
+ return __LINE__;
+
+ data = node_data->data;
+ if (data < 0)
+ return __LINE__;
+
+ node_data = bpf_kptr_xchg(&tnode->node_data, node_data);
+ if (node_data)
+ return __LINE__;
+
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
--
2.50.1 (Apple Git-155)
|
{
"author": "Chengkaitao <pilgrimtao@gmail.com>",
"date": "Mon, 2 Feb 2026 17:00:51 +0800",
"thread_id": "20260202090051.87802-1-pilgrimtao@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH v4 0/3] bpf/verifier: Expand the usage scenarios of bpf_kptr_xchg
|
From: Chengkaitao <chengkaitao@kylinos.cn>
When using bpf_kptr_xchg, we triggered the following error:
31: (85) call bpf_kptr_xchg#194
function calls are not allowed while holding a lock
bpf_kptr_xchg is safe to call in lock-held contexts, so [patch 1/3]
extends its usage scope accordingly.
Currently, test cases combining bpf_kptr_xchg and bpf_rbtree_* must
follow this approach:
bpf_spin_lock(&lock);
rb_n = bpf_rbtree_root(&root);
while (rb_n && can_loop) {
rb_n = bpf_rbtree_remove(&root, rb_n);
if (!rb_n)
goto fail;
tnode = container_of(rb_n, struct tree_node, node);
node_data = bpf_kptr_xchg(&tnode->node_data, NULL);
if (!node_data)
goto fail;
data = node_data->data;
/* use data to do something */
node_data = bpf_kptr_xchg(&tnode->node_data, node_data);
if (node_data)
goto fail;
bpf_rbtree_add(&root, rb_n, less);
if (lookup_key < tnode->key)
rb_n = bpf_rbtree_left(&root, rb_n);
else
rb_n = bpf_rbtree_right(&root, rb_n);
}
bpf_spin_unlock(&lock);
The above illustrates a lock-remove-read-add-unlock workflow, which
performs poorly. To address this, [patch 2/3] introduces support for a
streamlined lock-read-unlock operation.
Changes in v4:
- Fix the dead logic issue in the test case
Changes in v3:
- Fix compilation errors
Changes in v2:
- Allow using bpf_kptr_xchg even if the NON_OWN_REF flag is set
- Add test case
Link to V3:
https://lore.kernel.org/all/20260202055818.78231-1-pilgrimtao@gmail.com/
Link to V2:
https://lore.kernel.org/all/20260201031607.32940-1-pilgrimtao@gmail.com/
Link to V1:
https://lore.kernel.org/all/20260122081426.78472-1-pilgrimtao@gmail.com/
Chengkaitao (3):
bpf/verifier: allow calling bpf_kptr_xchg while holding a lock
bpf/verifier: allow using bpf_kptr_xchg even if the NON_OWN_REF flag
is set
selftests/bpf: Add supplementary tests for bpf_kptr_xchg
kernel/bpf/verifier.c | 7 +-
.../testing/selftests/bpf/prog_tests/rbtree.c | 6 +
tools/testing/selftests/bpf/progs/bpf_misc.h | 4 +
.../selftests/bpf/progs/rbtree_search_kptr.c | 167 ++++++++++++++++++
4 files changed, 182 insertions(+), 2 deletions(-)
create mode 100644 tools/testing/selftests/bpf/progs/rbtree_search_kptr.c
--
2.50.1 (Apple Git-155)
|
On Mon, Feb 2, 2026 at 1:01 AM Chengkaitao <pilgrimtao@gmail.com> wrote:
You ignored earlier feedback. This is not ok.
pw-bot: cr
|
{
"author": "Alexei Starovoitov <alexei.starovoitov@gmail.com>",
"date": "Mon, 2 Feb 2026 09:56:50 -0800",
"thread_id": "20260202090051.87802-1-pilgrimtao@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH v3 0/3] Convert 64-bit x86/mm/pat to ptdescs
|
x86/mm/pat should be using ptdescs. One line has already been
converted to pagetable_free(), while the allocation sites still use
get_free_pages(). This causes issues when allocating ptdescs
separately from struct page.
These patches convert the allocation/free sites to use ptdescs. In
the short term, this helps enable Matthew's work to allocate frozen
pagetables[1]. And in the long term, this will help us cleanly split
ptdesc allocations from struct page.
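The conversion pattern in these patches is mechanical: get_zeroed_page() becomes pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0), and the table address is recovered via ptdesc_address(). A userspace sketch of that descriptor/address split (hypothetical types and *_sim helpers, not the kernel's real struct ptdesc):

```c
#include <assert.h>
#include <stdlib.h>

#define PAGE_SIZE 4096

/* Hypothetical stand-in for struct ptdesc: a descriptor kept separate
 * from the page-table memory it describes. */
struct ptdesc { void *addr; };

static struct ptdesc *pagetable_alloc_sim(void)
{
	struct ptdesc *pt = malloc(sizeof(*pt));

	if (!pt)
		return NULL;
	pt->addr = calloc(1, PAGE_SIZE);	/* zeroed, like __GFP_ZERO */
	if (!pt->addr) {
		free(pt);
		return NULL;
	}
	return pt;
}

static void *ptdesc_address_sim(struct ptdesc *pt)
{
	return pt->addr;
}

static void pagetable_free_sim(struct ptdesc *pt)
{
	free(pt->addr);
	free(pt);
}
```

Keeping the descriptor distinct from the page is exactly what lets ptdescs later be allocated separately from struct page.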
The pgd_list should also be using ptdescs (for 32-bit in this file). This
can be done in a separate patchset since there are other users of pgd_list
that still need to be converted.
[1] https://lore.kernel.org/linux-mm/20251113140448.1814860-1-willy@infradead.org/
[2] https://lore.kernel.org/linux-mm/20251020001652.2116669-1-willy@infradead.org/
------
I've also tested this on a tree that separately allocates ptdescs. That
didn't find any lingering alloc/free issues.
Based on current mm-new.
v3:
- Move comment regarding 32-bit conversions into the cover letter
- Correct the handling for the pagetable_alloc() error path
Vishal Moola (Oracle) (3):
x86/mm/pat: Convert pte code to use ptdescs
x86/mm/pat: Convert pmd code to use ptdescs
x86/mm/pat: Convert split_large_page() to use ptdescs
arch/x86/mm/pat/set_memory.c | 56 +++++++++++++++++++++---------------
1 file changed, 33 insertions(+), 23 deletions(-)
--
2.52.0
|
In order to separately allocate ptdescs from pages, we need all allocation
and free sites to use the appropriate functions. Convert these pte
allocation/free sites to use ptdescs.
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
arch/x86/mm/pat/set_memory.c | 15 +++++++++------
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 6c6eb486f7a6..f9f9d4ca8e71 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -1408,7 +1408,7 @@ static bool try_to_free_pte_page(pte_t *pte)
if (!pte_none(pte[i]))
return false;
- free_page((unsigned long)pte);
+ pagetable_free(virt_to_ptdesc((void *)pte));
return true;
}
@@ -1537,12 +1537,15 @@ static void unmap_pud_range(p4d_t *p4d, unsigned long start, unsigned long end)
*/
}
-static int alloc_pte_page(pmd_t *pmd)
+static int alloc_pte_ptdesc(pmd_t *pmd)
{
- pte_t *pte = (pte_t *)get_zeroed_page(GFP_KERNEL);
- if (!pte)
+ pte_t *pte;
+ struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0);
+
+ if (!ptdesc)
return -1;
+ pte = (pte_t *) ptdesc_address(ptdesc);
set_pmd(pmd, __pmd(__pa(pte) | _KERNPG_TABLE));
return 0;
}
@@ -1600,7 +1603,7 @@ static long populate_pmd(struct cpa_data *cpa,
*/
pmd = pmd_offset(pud, start);
if (pmd_none(*pmd))
- if (alloc_pte_page(pmd))
+ if (alloc_pte_ptdesc(pmd))
return -1;
populate_pte(cpa, start, pre_end, cur_pages, pmd, pgprot);
@@ -1641,7 +1644,7 @@ static long populate_pmd(struct cpa_data *cpa,
if (start < end) {
pmd = pmd_offset(pud, start);
if (pmd_none(*pmd))
- if (alloc_pte_page(pmd))
+ if (alloc_pte_ptdesc(pmd))
return -1;
populate_pte(cpa, start, end, num_pages - cur_pages,
--
2.52.0
|
{
"author": "\"Vishal Moola (Oracle)\" <vishal.moola@gmail.com>",
"date": "Mon, 2 Feb 2026 09:20:03 -0800",
"thread_id": "20260202172005.683870-4-vishal.moola@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH v3 0/3] Convert 64-bit x86/mm/pat to ptdescs
|
x86/mm/pat should be using ptdescs. One line has already been
converted to pagetable_free(), while the allocation sites still use
get_free_pages(). This causes issues when allocating ptdescs
separately from struct page.
These patches convert the allocation/free sites to use ptdescs. In
the short term, this helps enable Matthew's work to allocate frozen
pagetables[1]. And in the long term, this will help us cleanly split
ptdesc allocations from struct page.
The pgd_list should also be using ptdescs (for 32-bit in this file). This
can be done in a separate patchset since there are other users of pgd_list
that still need to be converted.
[1] https://lore.kernel.org/linux-mm/20251113140448.1814860-1-willy@infradead.org/
[2] https://lore.kernel.org/linux-mm/20251020001652.2116669-1-willy@infradead.org/
------
I've also tested this on a tree that separately allocates ptdescs. That
didn't find any lingering alloc/free issues.
Based on current mm-new.
v3:
- Move comment regarding 32-bit conversions into the cover letter
- Correct the handling for the pagetable_alloc() error path
Vishal Moola (Oracle) (3):
x86/mm/pat: Convert pte code to use ptdescs
x86/mm/pat: Convert pmd code to use ptdescs
x86/mm/pat: Convert split_large_page() to use ptdescs
arch/x86/mm/pat/set_memory.c | 56 +++++++++++++++++++++---------------
1 file changed, 33 insertions(+), 23 deletions(-)
--
2.52.0
|
In order to separately allocate ptdescs from pages, we need all allocation
and free sites to use the appropriate functions.
split_large_page() allocates a page to be used as a page table. This
should be allocating a ptdesc, so convert it.
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
arch/x86/mm/pat/set_memory.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 9f531c87531b..52226679d079 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -1119,9 +1119,10 @@ static void split_set_pte(struct cpa_data *cpa, pte_t *pte, unsigned long pfn,
static int
__split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
- struct page *base)
+ struct ptdesc *ptdesc)
{
unsigned long lpaddr, lpinc, ref_pfn, pfn, pfninc = 1;
+ struct page *base = ptdesc_page(ptdesc);
pte_t *pbase = (pte_t *)page_address(base);
unsigned int i, level;
pgprot_t ref_prot;
@@ -1226,18 +1227,18 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
static int split_large_page(struct cpa_data *cpa, pte_t *kpte,
unsigned long address)
{
- struct page *base;
+ struct ptdesc *ptdesc;
if (!debug_pagealloc_enabled())
spin_unlock(&cpa_lock);
- base = alloc_pages(GFP_KERNEL, 0);
+ ptdesc = pagetable_alloc(GFP_KERNEL, 0);
if (!debug_pagealloc_enabled())
spin_lock(&cpa_lock);
- if (!base)
+ if (!ptdesc)
return -ENOMEM;
- if (__split_large_page(cpa, kpte, address, base))
- __free_page(base);
+ if (__split_large_page(cpa, kpte, address, ptdesc))
+ pagetable_free(ptdesc);
return 0;
}
--
2.52.0
|
{
"author": "\"Vishal Moola (Oracle)\" <vishal.moola@gmail.com>",
"date": "Mon, 2 Feb 2026 09:20:05 -0800",
"thread_id": "20260202172005.683870-4-vishal.moola@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH v3 0/3] Convert 64-bit x86/mm/pat to ptdescs
|
x86/mm/pat should be using ptdescs. One line has already been
converted to pagetable_free(), while the allocation sites still use
get_free_pages(). This causes issues when allocating ptdescs
separately from struct page.
These patches convert the allocation/free sites to use ptdescs. In
the short term, this helps enable Matthew's work to allocate frozen
pagetables[1]. And in the long term, this will help us cleanly split
ptdesc allocations from struct page.
The pgd_list should also be using ptdescs (for 32-bit in this file). This
can be done in a separate patchset since there are other users of pgd_list
that still need to be converted.
[1] https://lore.kernel.org/linux-mm/20251113140448.1814860-1-willy@infradead.org/
[2] https://lore.kernel.org/linux-mm/20251020001652.2116669-1-willy@infradead.org/
------
I've also tested this on a tree that separately allocates ptdescs. That
didn't find any lingering alloc/free issues.
Based on current mm-new.
v3:
- Move comment regarding 32-bit conversions into the cover letter
- Correct the handling for the pagetable_alloc() error path
Vishal Moola (Oracle) (3):
x86/mm/pat: Convert pte code to use ptdescs
x86/mm/pat: Convert pmd code to use ptdescs
x86/mm/pat: Convert split_large_page() to use ptdescs
arch/x86/mm/pat/set_memory.c | 56 +++++++++++++++++++++---------------
1 file changed, 33 insertions(+), 23 deletions(-)
--
2.52.0
|
In order to separately allocate ptdescs from pages, we need all allocation
and free sites to use the appropriate functions. Convert these pmd
allocation/free sites to use ptdescs.
populate_pgd() also allocates pagetables that may later be freed by
try_to_free_pmd_page(), so allocate ptdescs there as well.
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
arch/x86/mm/pat/set_memory.c | 28 +++++++++++++++++-----------
1 file changed, 17 insertions(+), 11 deletions(-)
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index f9f9d4ca8e71..9f531c87531b 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -1420,7 +1420,7 @@ static bool try_to_free_pmd_page(pmd_t *pmd)
if (!pmd_none(pmd[i]))
return false;
- free_page((unsigned long)pmd);
+ pagetable_free(virt_to_ptdesc((void *)pmd));
return true;
}
@@ -1550,12 +1550,15 @@ static int alloc_pte_ptdesc(pmd_t *pmd)
return 0;
}
-static int alloc_pmd_page(pud_t *pud)
+static int alloc_pmd_ptdesc(pud_t *pud)
{
- pmd_t *pmd = (pmd_t *)get_zeroed_page(GFP_KERNEL);
- if (!pmd)
+ pmd_t *pmd;
+ struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0);
+
+ if (!ptdesc)
return -1;
+ pmd = (pmd_t *) ptdesc_address(ptdesc);
set_pud(pud, __pud(__pa(pmd) | _KERNPG_TABLE));
return 0;
}
@@ -1625,7 +1628,7 @@ static long populate_pmd(struct cpa_data *cpa,
* We cannot use a 1G page so allocate a PMD page if needed.
*/
if (pud_none(*pud))
- if (alloc_pmd_page(pud))
+ if (alloc_pmd_ptdesc(pud))
return -1;
pmd = pmd_offset(pud, start);
@@ -1681,7 +1684,7 @@ static int populate_pud(struct cpa_data *cpa, unsigned long start, p4d_t *p4d,
* Need a PMD page?
*/
if (pud_none(*pud))
- if (alloc_pmd_page(pud))
+ if (alloc_pmd_ptdesc(pud))
return -1;
cur_pages = populate_pmd(cpa, start, pre_end, cur_pages,
@@ -1718,7 +1721,7 @@ static int populate_pud(struct cpa_data *cpa, unsigned long start, p4d_t *p4d,
pud = pud_offset(p4d, start);
if (pud_none(*pud))
- if (alloc_pmd_page(pud))
+ if (alloc_pmd_ptdesc(pud))
return -1;
tmp = populate_pmd(cpa, start, end, cpa->numpages - cur_pages,
@@ -1742,14 +1745,16 @@ static int populate_pgd(struct cpa_data *cpa, unsigned long addr)
p4d_t *p4d;
pgd_t *pgd_entry;
long ret;
+ struct ptdesc *ptdesc;
pgd_entry = cpa->pgd + pgd_index(addr);
if (pgd_none(*pgd_entry)) {
- p4d = (p4d_t *)get_zeroed_page(GFP_KERNEL);
- if (!p4d)
+ ptdesc = pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0);
+ if (!ptdesc)
return -1;
+ p4d = (p4d_t *) ptdesc_address(ptdesc);
set_pgd(pgd_entry, __pgd(__pa(p4d) | _KERNPG_TABLE));
}
@@ -1758,10 +1763,11 @@ static int populate_pgd(struct cpa_data *cpa, unsigned long addr)
*/
p4d = p4d_offset(pgd_entry, addr);
if (p4d_none(*p4d)) {
- pud = (pud_t *)get_zeroed_page(GFP_KERNEL);
- if (!pud)
+ ptdesc = pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0);
+ if (!ptdesc)
return -1;
+ pud = (pud_t *) ptdesc_address(ptdesc);
set_p4d(p4d, __p4d(__pa(pud) | _KERNPG_TABLE));
}
--
2.52.0
|
{
"author": "\"Vishal Moola (Oracle)\" <vishal.moola@gmail.com>",
"date": "Mon, 2 Feb 2026 09:20:04 -0800",
"thread_id": "20260202172005.683870-4-vishal.moola@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH net] bonding: Fix header_ops type confusion
|
In bond_setup_by_slave(), the slave’s header_ops are unconditionally
copied into the bonding device. As a result, the bonding device may invoke
the slave-specific header operations on itself, causing
netdev_priv(bond_dev) (a struct bonding) to be incorrectly interpreted
as the slave's private-data type.
This type-confusion bug can lead to out-of-bounds writes into the skb,
resulting in memory corruption.
Add two members to struct bonding, bond_header_ops and
header_slave_dev, to avoid the type confusion while keeping track
of the slave's header_ops.
Reported-by: Kota Toda <kota.toda@gmo-cybersecurity.com>
Fixes: 1284cd3a2b740 ("bonding: two small fixes for IPoIB support")
Signed-off-by: Kota Toda <kota.toda@gmo-cybersecurity.com>
Co-developed-by: Yuki Koike <yuki.koike@gmo-cybersecurity.com>
Signed-off-by: Yuki Koike <yuki.koike@gmo-cybersecurity.com>
Reviewed-by: Paolo Abeni <pabeni@redhat.com>
---
drivers/net/bonding/bond_main.c | 61 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
include/net/bonding.h | 5 +++++
2 files changed, 65 insertions(+), 1 deletion(-)
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 8ea183da8d53..690f3e0971d0 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -1619,14 +1619,65 @@ static void bond_compute_features(struct bonding *bond)
netdev_change_features(bond_dev);
}
+static int bond_hard_header(struct sk_buff *skb, struct net_device *dev,
+ unsigned short type, const void *daddr,
+ const void *saddr, unsigned int len)
+{
+ struct bonding *bond = netdev_priv(dev);
+ struct net_device *slave_dev;
+
+ slave_dev = bond->header_slave_dev;
+
+ return dev_hard_header(skb, slave_dev, type, daddr, saddr, len);
+}
+
+static void bond_header_cache_update(struct hh_cache *hh,
+ const struct net_device *dev,
+ const unsigned char *haddr)
+{
+ const struct bonding *bond = netdev_priv(dev);
+ struct net_device *slave_dev;
+
+ slave_dev = bond->header_slave_dev;
+
+ if (!slave_dev->header_ops || !slave_dev->header_ops->cache_update)
+ return;
+
+ slave_dev->header_ops->cache_update(hh, slave_dev, haddr);
+}
+
static void bond_setup_by_slave(struct net_device *bond_dev,
struct net_device *slave_dev)
{
+ struct bonding *bond = netdev_priv(bond_dev);
bool was_up = !!(bond_dev->flags & IFF_UP);
dev_close(bond_dev);
- bond_dev->header_ops = slave_dev->header_ops;
+ /* Some header_ops callbacks receive dev as an argument
+ * while others do not. When dev is not passed, we cannot
+ * find the slave device through struct bonding (the private
+ * data of bond_dev), so we need a writable header_ops
+ * instance instead of a pointer to const header_ops and
+ * assign the slave's functions directly.
+ * For the callbacks that do receive dev, we install wrapper
+ * functions that pass slave_dev to the wrapped functions.
+ */
+ bond->bond_header_ops.create = bond_hard_header;
+ bond->bond_header_ops.cache_update = bond_header_cache_update;
+ if (slave_dev->header_ops) {
+ bond->bond_header_ops.parse = slave_dev->header_ops->parse;
+ bond->bond_header_ops.cache = slave_dev->header_ops->cache;
+ bond->bond_header_ops.validate = slave_dev->header_ops->validate;
+ bond->bond_header_ops.parse_protocol = slave_dev->header_ops->parse_protocol;
+ } else {
+ bond->bond_header_ops.parse = NULL;
+ bond->bond_header_ops.cache = NULL;
+ bond->bond_header_ops.validate = NULL;
+ bond->bond_header_ops.parse_protocol = NULL;
+ }
+
+ bond->header_slave_dev = slave_dev;
+ bond_dev->header_ops = &bond->bond_header_ops;
bond_dev->type = slave_dev->type;
bond_dev->hard_header_len = slave_dev->hard_header_len;
@@ -2676,6 +2727,14 @@ static int bond_release_and_destroy(struct net_device *bond_dev,
struct bonding *bond = netdev_priv(bond_dev);
int ret;
+ /* If slave_dev is the slave our header_ops were taken from,
+ * clear the related fields to avoid leaving dangling pointers.
+ */
+ if (bond->header_slave_dev == slave_dev) {
+ bond->header_slave_dev = NULL;
+ bond_dev->header_ops = NULL;
+ }
+
ret = __bond_release_one(bond_dev, slave_dev, false, true);
if (ret == 0 && !bond_has_slaves(bond) &&
bond_dev->reg_state != NETREG_UNREGISTERING) {
diff --git a/include/net/bonding.h b/include/net/bonding.h
index 95f67b308c19..cf8206187ce9 100644
--- a/include/net/bonding.h
+++ b/include/net/bonding.h
@@ -215,6 +215,11 @@ struct bond_ipsec {
*/
struct bonding {
struct net_device *dev; /* first - useful for panic debug */
+ struct net_device *header_slave_dev; /* slave net_device for header_ops */
+ /* Kept as a writable instance (not a pointer to const)
+ * because the bond's header_ops must change with its slaves.
+ */
+ struct header_ops bond_header_ops;
struct slave __rcu *curr_active_slave;
struct slave __rcu *current_arp_slave;
struct slave __rcu *primary_slave;
|
On Sun, May 25, 2025 at 10:08 PM 戸田晃太 <kota.toda@gmo-cybersecurity.com> wrote:
I do not see any barrier?
All these updates probably need WRITE_ONCE(), and corresponding
READ_ONCE() on reader sides, at a very minimum ...
RCU would even be better later.
|
{
"author": "Eric Dumazet <edumazet@google.com>",
"date": "Mon, 26 May 2025 01:23:18 -0700",
"thread_id": "CANn89iLVEJKoBtYNbAdLgxsPr03Fkgi9CJmTh1a0y0L5fV-HNA@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH net] bonding: Fix header_ops type confusion
|
|
Thank you for your review.
On Mon, May 26, 2025 at 17:23, Eric Dumazet <edumazet@google.com> wrote:
I believe that locking is not necessary in this patch. The update of
`header_ops` only happens when a slave is newly enslaved to a bond.
Under such circumstances, the members of `header_ops` are not called
in parallel with the update. Therefore, there is no possibility of a
race condition occurring.
|
{
"author": "Kota Toda <kota.toda@gmo-cybersecurity.com>",
"date": "Wed, 28 May 2025 23:35:59 +0900",
"thread_id": "CANn89iLVEJKoBtYNbAdLgxsPr03Fkgi9CJmTh1a0y0L5fV-HNA@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH net] bonding: Fix header_ops type confusion
|
|
On Wed, May 28, 2025 at 7:36 AM 戸田晃太 <kota.toda@gmo-cybersecurity.com> wrote:
bond_dev can certainly be live, and packets can flow.
I have seen enough syzbot reports hinting at this precise issue.
|
{
"author": "Eric Dumazet <edumazet@google.com>",
"date": "Wed, 28 May 2025 08:10:08 -0700",
"thread_id": "CANn89iLVEJKoBtYNbAdLgxsPr03Fkgi9CJmTh1a0y0L5fV-HNA@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH net] bonding: Fix header_ops type confusion
|
|
On Thu, May 29, 2025 at 0:10, Eric Dumazet <edumazet@google.com> wrote:
Hi Eric, thank you for reviewing the patch.
At the beginning of `bond_setup_by_slave`, `dev_close(bond_dev)` is called,
meaning bond_dev is down and no packets can flow while
`bond_header_ops` is being updated.
The syzbot report (which you mentioned in the conversation on security@)
about `dev->header_ops` becoming NULL should be resolved by this patch.
I couldn't find any other related syzbot reports.
|
{
"author": "Kota Toda <kota.toda@gmo-cybersecurity.com>",
"date": "Fri, 6 Jun 2025 16:16:31 +0900",
"thread_id": "CANn89iLVEJKoBtYNbAdLgxsPr03Fkgi9CJmTh1a0y0L5fV-HNA@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH net] bonding: Fix header_ops type confusion
|
|
Hello, Eric and other maintainers,
I'm deeply sorry for leaving this patch suggestion unattended for so long.
I became extremely busy, which took a toll on my health and forced me
to take sick leave for nearly half a year (my colleague Kota had been
waiting for me to return).
Fortunately, I have recovered and returned to work, so I would like to
move this matter forward as well.
Recalling the issue Eric raised, I understand it was a concern about
potential race conditions arising from the `bond_header_ops` and
`header_slave_dev` added to the `struct bonding`. For example, one
could imagine a situation where `header_slave_dev` is rewritten to a
different type, and at that exact moment a function from the old
`bond_header_ops` gets called, or vice versa.
However, I am actually skeptical that this can happen. The reason is
that `bond_setup_by_slave` is only called when there are no slaves at
all:
```
bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
struct netlink_ext_ack *extack)
...
if (!bond_has_slaves(bond)) {
...
if (slave_dev->type != ARPHRD_ETHER)
bond_setup_by_slave(bond_dev, slave_dev);
```
In other words, in order to trigger a race condition, one would need
to remove the slave once and make the slave list empty first. However,
as shown below, in `bond_release_and_destroy`, when the slave list
becomes empty, it appears that the bond interface itself is removed.
This makes it seem impossible to "quickly remove a slave and
re-register it":
```
static int bond_slave_netdev_event(unsigned long event,
struct net_device *slave_dev)
...
switch (event) {
case NETDEV_UNREGISTER:
if (bond_dev->type != ARPHRD_ETHER)
bond_release_and_destroy(bond_dev, slave_dev);
...
}
...
/* First release a slave and then destroy the bond if no more slaves are left.
* Must be under rtnl_lock when this function is called.
*/
static int bond_release_and_destroy(struct net_device *bond_dev,
struct net_device *slave_dev)
{
struct bonding *bond = netdev_priv(bond_dev);
int ret;
ret = __bond_release_one(bond_dev, slave_dev, false, true);
if (ret == 0 && !bond_has_slaves(bond) &&
bond_dev->reg_state != NETREG_UNREGISTERING) {
bond_dev->priv_flags |= IFF_DISABLE_NETPOLL;
netdev_info(bond_dev, "Destroying bond\n");
bond_remove_proc_entry(bond);
unregister_netdevice(bond_dev);
}
return ret;
}
```
Moreover, as noted in the comments, these functions are executed under
the netlink-side lock. Therefore, my conclusion is that a race
condition cannot actually occur. I also think that the fact that, even
before our patch, these code paths had almost no explicit locking
anywhere serves as circumstantial evidence for this view. As Kota
said, as far as I saw, the past syzkaller-bot's report is seemingly
only NULL pointer dereference due to the root cause we reported, and
this patch should fix them.
That said, I agree that the countermeasures Eric suggests are worth
applying if they do not cause problems in terms of execution speed or
code size. However, I am concerned that addressing this with READ_ONCE
or RCU would imply a fairly large amount of rewriting.
`header_ops` is designed to allow various types of devices to be
handled in an object-oriented way, and as such it is used throughout
many parts of the Linux kernel. Using READ_ONCE or RCU every time
header_ops is accessed simply because we are worried about a race
condition in bond’s header_ops seems to imply changes like the
following, for example:
```
diff --git a/net/core/neighbour.c b/net/core/neighbour.c
index 92dc1f1788de..d9aad38746ad 100644
--- a/net/core/neighbour.c
+++ b/net/core/neighbour.c
@@ -1538,7 +1538,7 @@ static void neigh_hh_init(struct neighbour *n)
* hh_cache entry.
*/
if (!hh->hh_len)
- dev->header_ops->cache(n, hh, prot);
+ READ_ONCE(dev->header_ops->cache)(n, hh, prot);
write_unlock_bh(&n->lock);
}
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3131,35 +3131,41 @@ static inline int dev_hard_header(struct sk_buff *skb, struct net_device *dev,
const void *daddr, const void *saddr,
unsigned int len)
{
- if (!dev->header_ops || !dev->header_ops->create)
+ int (*create)(struct sk_buff *skb, struct net_device *dev,
+ unsigned short type, const void *daddr,
+ const void *saddr, unsigned int len);
+ if (!dev->header_ops || !(create = READ_ONCE(dev->header_ops->create)))
return 0;
- return dev->header_ops->create(skb, dev, type, daddr, saddr, len);
+ return create(skb, dev, type, daddr, saddr, len);
}
static inline int dev_parse_header(const struct sk_buff *skb,
unsigned char *haddr)
{
+ int (*parse)(const struct sk_buff *skb, unsigned char *haddr);
const struct net_device *dev = skb->dev;
- if (!dev->header_ops || !dev->header_ops->parse)
+ if (!dev->header_ops || !(parse = READ_ONCE(dev->header_ops->parse)))
return 0;
- return dev->header_ops->parse(skb, haddr);
+ return parse(skb, haddr);
}
... (and so on)
```
It looks like we would end up rewriting on the order of a dozen or so
places with this kind of pattern, but from the perspective of the
maintainers (or in terms of Linux kernel culture), would this be
considered an acceptable change?
If this differs from what you intended, please correct me.
Best regards,
Yuki Koike
On Thu, May 29, 2025 at 0:10, Eric Dumazet <edumazet@google.com> wrote:
|
{
"author": "Yuki Koike <yuki.koike@gmo-cybersecurity.com>",
"date": "Mon, 22 Dec 2025 17:20:02 +0900",
"thread_id": "CANn89iLVEJKoBtYNbAdLgxsPr03Fkgi9CJmTh1a0y0L5fV-HNA@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH net] bonding: Fix header_ops type confusion
|
In bond_setup_by_slave(), the slave’s header_ops are unconditionally
copied into the bonding device. As a result, the bonding device may invoke
the slave-specific header operations on itself, causing
netdev_priv(bond_dev) (a struct bonding) to be incorrectly interpreted
as the slave's private-data type.
This type-confusion bug can lead to out-of-bounds writes into the skb,
resulting in memory corruption.
Add two members to struct bonding, bond_header_ops and
header_slave_dev, to avoid the type confusion while keeping track of
the slave's header_ops.
Fixes: 1284cd3a2b740 ("bonding: two small fixes for IPoIB support")
Reported-by: Kota Toda <kota.toda@gmo-cybersecurity.com>
Signed-off-by: Kota Toda <kota.toda@gmo-cybersecurity.com>
Co-developed-by: Yuki Koike <yuki.koike@gmo-cybersecurity.com>
Signed-off-by: Yuki Koike <yuki.koike@gmo-cybersecurity.com>
Reviewed-by: Paolo Abeni <pabeni@redhat.com>
---
drivers/net/bonding/bond_main.c | 61 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
include/net/bonding.h | 5 +++++
2 files changed, 65 insertions(+), 1 deletion(-)
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 8ea183da8d53..690f3e0971d0 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -1619,14 +1619,65 @@ static void bond_compute_features(struct bonding *bond)
netdev_change_features(bond_dev);
}
+static int bond_hard_header(struct sk_buff *skb, struct net_device *dev,
+ unsigned short type, const void *daddr,
+ const void *saddr, unsigned int len)
+{
+ struct bonding *bond = netdev_priv(dev);
+ struct net_device *slave_dev;
+
+ slave_dev = bond->header_slave_dev;
+
+ return dev_hard_header(skb, slave_dev, type, daddr, saddr, len);
+}
+
+static void bond_header_cache_update(struct hh_cache *hh, const struct net_device *dev,
+ const unsigned char *haddr)
+{
+ const struct bonding *bond = netdev_priv(dev);
+ struct net_device *slave_dev;
+
+ slave_dev = bond->header_slave_dev;
+
+ if (!slave_dev->header_ops || !slave_dev->header_ops->cache_update)
+ return;
+
+ slave_dev->header_ops->cache_update(hh, slave_dev, haddr);
+}
+
static void bond_setup_by_slave(struct net_device *bond_dev,
struct net_device *slave_dev)
{
+ struct bonding *bond = netdev_priv(bond_dev);
bool was_up = !!(bond_dev->flags & IFF_UP);
dev_close(bond_dev);
- bond_dev->header_ops = slave_dev->header_ops;
+ /* Some functions are given dev as an argument
+ * while others not. When dev is not given, we cannot
+ * find out what is the slave device through struct bonding
+ * (the private data of bond_dev). Therefore, we need a raw
+ * header_ops variable instead of its pointer to const header_ops
+ * and assign slave's functions directly.
+ * For the other case, we set the wrapper functions that pass
+ * slave_dev to the wrapped functions.
+ */
+ bond->bond_header_ops.create = bond_hard_header;
+ bond->bond_header_ops.cache_update = bond_header_cache_update;
+ if (slave_dev->header_ops) {
+ bond->bond_header_ops.parse = slave_dev->header_ops->parse;
+ bond->bond_header_ops.cache = slave_dev->header_ops->cache;
+ bond->bond_header_ops.validate = slave_dev->header_ops->validate;
+ bond->bond_header_ops.parse_protocol = slave_dev->header_ops->parse_protocol;
+ } else {
+ bond->bond_header_ops.parse = NULL;
+ bond->bond_header_ops.cache = NULL;
+ bond->bond_header_ops.validate = NULL;
+ bond->bond_header_ops.parse_protocol = NULL;
+ }
+
+ bond->header_slave_dev = slave_dev;
+ bond_dev->header_ops = &bond->bond_header_ops;
bond_dev->type = slave_dev->type;
bond_dev->hard_header_len = slave_dev->hard_header_len;
@@ -2676,6 +2727,14 @@ static int bond_release_and_destroy(struct net_device *bond_dev,
struct bonding *bond = netdev_priv(bond_dev);
int ret;
+ /* If slave_dev is the earliest registered one, we must clear
+ * the variables related to header_ops to avoid dangling pointer.
+ */
+ if (bond->header_slave_dev == slave_dev) {
+ bond->header_slave_dev = NULL;
+ bond_dev->header_ops = NULL;
+ }
+
ret = __bond_release_one(bond_dev, slave_dev, false, true);
if (ret == 0 && !bond_has_slaves(bond) &&
bond_dev->reg_state != NETREG_UNREGISTERING) {
diff --git a/include/net/bonding.h b/include/net/bonding.h
index 95f67b308c19..cf8206187ce9 100644
--- a/include/net/bonding.h
+++ b/include/net/bonding.h
@@ -215,6 +215,11 @@ struct bond_ipsec {
*/
struct bonding {
struct net_device *dev; /* first - useful for panic debug */
+ struct net_device *header_slave_dev; /* slave net_device for header_ops */
+ /* maintained as a non-const variable
+ * because bond's header_ops should change depending on slaves.
+ */
+ struct header_ops bond_header_ops;
struct slave __rcu *curr_active_slave;
struct slave __rcu *current_arp_slave;
struct slave __rcu *primary_slave;
|
Hello, Eric and other maintainers,
I hope you’re doing well. I’m following up on our email, sent during
the holiday season, in case it got buried.
When you have a moment, could you please let us know if you had a
chance to review it?
Thank you in advance, and I look forward to your response.
Best regards,
Kota Toda
On Mon, Dec 22, 2025 at 17:20 Yuki Koike <yuki.koike@gmo-cybersecurity.com> wrote:
|
{
"author": "Kota Toda <kota.toda@gmo-cybersecurity.com>",
"date": "Thu, 15 Jan 2026 19:33:36 +0900",
"thread_id": "CANn89iLVEJKoBtYNbAdLgxsPr03Fkgi9CJmTh1a0y0L5fV-HNA@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH net] bonding: Fix header_ops type confusion
|
|
On Thu, Jan 15, 2026 at 11:33 AM Kota Toda <kota.toda@gmo-cybersecurity.com> wrote:
I think it would be nice to provide an actual stack trace of the bug,
on a recent kernel tree.
We had recent patches dealing with dev->hard_header_len changes.
|
{
"author": "Eric Dumazet <edumazet@google.com>",
"date": "Thu, 15 Jan 2026 12:06:56 +0100",
"thread_id": "CANn89iLVEJKoBtYNbAdLgxsPr03Fkgi9CJmTh1a0y0L5fV-HNA@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH net] bonding: Fix header_ops type confusion
|
|
Thanks for your quick response.
The following information is based on Linux kernel version 6.12.65,
the latest release in the 6.12 tree.
The kernel config is identical to that of the kernelCTF instance
(available at: https://storage.googleapis.com/kernelctf-build/releases/lts-6.12.65/.config)
This type confusion occurs in several locations. One example is
`ipgre_header` (`header_ops->create`), where the private data of the
network device is incorrectly cast to `struct ip_tunnel *`.
```
static int ipgre_header(struct sk_buff *skb, struct net_device *dev,
unsigned short type,
const void *daddr, const void *saddr, unsigned int len)
{
struct ip_tunnel *t = netdev_priv(dev);
struct iphdr *iph;
struct gre_base_hdr *greh;
...
```
When a bond interface is given to this function,
it should not reference the private data as `struct ip_tunnel *`,
because the bond interface uses the private data as `struct bonding *`.
(quickly confirmed by seeing drivers/net/bonding/bond_netlink.c:909)
```
struct rtnl_link_ops bond_link_ops __read_mostly = {
.kind = "bond",
.priv_size = sizeof(struct bonding),
...
```
The stack trace below is the backtrace of all stack frames during a
call to `ipgre_header`.
```
ipgre_header at net/ipv4/ip_gre.c:890
dev_hard_header at ./include/linux/netdevice.h:3156
packet_snd at net/packet/af_packet.c:3082
packet_sendmsg at net/packet/af_packet.c:3162
sock_sendmsg_nosec at net/socket.c:729
__sock_sendmsg at net/socket.c:744
__sys_sendto at net/socket.c:2213
__do_sys_sendto at net/socket.c:2225
__se_sys_sendto at net/socket.c:2221
__x64_sys_sendto at net/socket.c:2221
do_syscall_x64 at arch/x86/entry/common.c:47
do_syscall_64 at arch/x86/entry/common.c:78
entry_SYSCALL_64 at arch/x86/entry/entry_64.S:121
```
This causes memory corruption during subsequent operations.
The following stack trace shows a General Protection Fault triggered
when sending a packet
to a bonding interface that has an IPv4 GRE interface as a slave.
```
[ 1.712329] Oops: general protection fault, probably for
non-canonical address 0xdead0000cafebabe: 0000 [#1] SMP NOPTI
[ 1.712972] CPU: 0 UID: 1000 PID: 205 Comm: exp Not tainted 6.12.65 #1
[ 1.713344] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996),
BIOS Arch Linux 1.17.0-2-2 04/01/2014
[ 1.713890] RIP: 0010:skb_release_data+0x8a/0x1c0
[ 1.714162] Code: c0 00 00 00 49 03 86 c8 00 00 00 0f b6 10 f6 c2
01 74 48 48 8b 70 28 48 85 f6 74 3f 41 0f b6 5d 00 83 e3 10 40 f6 c6
01 75 24 <48> 8b 06 ba 01 00 00 00 4c 89 f7 48 8b 00 ff d0 0f 1f 00 41
8b6
[ 1.715276] RSP: 0018:ffffc900007cfcc0 EFLAGS: 00010246
[ 1.715583] RAX: ffff888106fe12c0 RBX: 0000000000000010 RCX: 0000000000000000
[ 1.716036] RDX: 0000000000000017 RSI: dead0000cafebabe RDI: ffff8881059c4a00
[ 1.716504] RBP: ffffc900007cfe10 R08: 0000000000000010 R09: 0000000000000000
[ 1.716955] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000002
[ 1.717429] R13: ffff888106fe12c0 R14: ffff8881059c4a00 R15: ffff888106e57000
[ 1.717866] FS: 0000000038e54380(0000) GS:ffff88813bc00000(0000)
knlGS:0000000000000000
[ 1.718350] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1.718703] CR2: 00000000004bf480 CR3: 00000001009ec001 CR4: 0000000000772ef0
[ 1.719109] PKRU: 55555554
[ 1.719297] Call Trace:
[ 1.719461] <TASK>
[ 1.719611] sk_skb_reason_drop+0x58/0x120
[ 1.719891] packet_sendmsg+0xbcb/0x18f0
[ 1.720166] ? pcpu_alloc_area+0x186/0x260
[ 1.720421] __sys_sendto+0x1e2/0x1f0
[ 1.720691] __x64_sys_sendto+0x24/0x30
[ 1.720948] do_syscall_64+0x58/0x120
[ 1.721174] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 1.721509] RIP: 0033:0x42860d
[ 1.721713] Code: c3 ff ff ff ff 64 89 02 eb b9 0f 1f 00 f3 0f 1e
fa 80 3d 5d 4a 09 00 00 41 89 ca 74 20 45 31 c9 45 31 c0 b8 2c 00 00
00 0f 05 <48> 3d 00 f0 ff ff 77 6b c3 66 2e 0f 1f 84 00 00 00 00 00 55
489
[ 1.722837] RSP: 002b:00007fff597e95e8 EFLAGS: 00000246 ORIG_RAX:
000000000000002c
[ 1.723315] RAX: ffffffffffffffda RBX: 00000000000003e8 RCX: 000000000042860d
[ 1.723721] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000310
[ 1.724103] RBP: 00007fff597e9880 R08: 0000000000000000 R09: 0000000000000000
[ 1.724565] R10: 0000000000000000 R11: 0000000000000246 R12: 00007fff597e99f8
[ 1.725010] R13: 00007fff597e9a08 R14: 00000000004b7828 R15: 0000000000000001
[ 1.725441] </TASK>
[ 1.725594] Modules linked in:
[ 1.725790] ---[ end trace 0000000000000000 ]---
[ 1.726057] RIP: 0010:skb_release_data+0x8a/0x1c0
[ 1.726339] Code: c0 00 00 00 49 03 86 c8 00 00 00 0f b6 10 f6 c2
01 74 48 48 8b 70 28 48 85 f6 74 3f 41 0f b6 5d 00 83 e3 10 40 f6 c6
01 75 24 <48> 8b 06 ba 01 00 00 00 4c 89 f7 48 8b 00 ff d0 0f 1f 00 41
8b6
[ 1.727285] RSP: 0018:ffffc900007cfcc0 EFLAGS: 00010246
[ 1.727623] RAX: ffff888106fe12c0 RBX: 0000000000000010 RCX: 0000000000000000
[ 1.728052] RDX: 0000000000000017 RSI: dead0000cafebabe RDI: ffff8881059c4a00
[ 1.728467] RBP: ffffc900007cfe10 R08: 0000000000000010 R09: 0000000000000000
[ 1.728908] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000002
[ 1.729323] R13: ffff888106fe12c0 R14: ffff8881059c4a00 R15: ffff888106e57000
[ 1.729744] FS: 0000000038e54380(0000) GS:ffff88813bc00000(0000)
knlGS:0000000000000000
[ 1.730236] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1.730597] CR2: 00000000004bf480 CR3: 00000001009ec001 CR4: 0000000000772ef0
[ 1.730988] PKRU: 55555554
```
On Thu, Jan 15, 2026 at 20:07 Eric Dumazet <edumazet@google.com> wrote:
|
{
"author": "Kota Toda <kota.toda@gmo-cybersecurity.com>",
"date": "Mon, 19 Jan 2026 14:36:04 +0900",
"thread_id": "CANn89iLVEJKoBtYNbAdLgxsPr03Fkgi9CJmTh1a0y0L5fV-HNA@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH net] bonding: Fix header_ops type confusion
|
|
On Mon, Jan 19, 2026 at 6:36 AM Kota Toda <kota.toda@gmo-cybersecurity.com> wrote:
OK thanks.
I will repeat my original feedback: I do not see any barriers in the
patch you sent.
Assuming bond_setup_by_slave() can be called multiple times during one
master lifetime, I do not think your patch is enough.
Also, please clarify what happens with stacks of two or more bonding devices?
|
{
"author": "Eric Dumazet <edumazet@google.com>",
"date": "Mon, 19 Jan 2026 10:30:09 +0100",
"thread_id": "CANn89iLVEJKoBtYNbAdLgxsPr03Fkgi9CJmTh1a0y0L5fV-HNA@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH net] bonding: Fix header_ops type confusion
|
|
Here is the patch with the barriers added, based on v6.12.67.
However, as Yuki said, we are wondering whether this would be
considered an acceptable change from the perspective of the
maintainers (or in terms of Linux kernel culture), because the patch
adds `READ_ONCE` to several locations outside of the bonding subsystem.
Please let me know if you have any concerns regarding this point.
To clarify, currently the `header_ops` of the bottom-most
interface are used regardless of the number of bonding layers.
This patch changes it so that `&bond->bond_header_ops` is used
as the bond device's `header_ops`, regardless of the stack depth.
```
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index f17a170d1..5ecc64e38 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -1616,14 +1616,70 @@ static void bond_compute_features(struct bonding *bond)
netdev_change_features(bond_dev);
}
+static int bond_hard_header(struct sk_buff *skb, struct net_device *dev,
+ unsigned short type, const void *daddr,
+ const void *saddr, unsigned int len)
+{
+ struct bonding *bond = netdev_priv(dev);
+ struct net_device *slave_dev;
+
+ slave_dev = bond->header_slave_dev;
+
+ return dev_hard_header(skb, slave_dev, type, daddr, saddr, len);
+}
+
+static void bond_header_cache_update(struct hh_cache *hh, const struct net_device *dev,
+ const unsigned char *haddr)
+{
+ const struct bonding *bond = netdev_priv(dev);
+ void (*cache_update)(struct hh_cache *hh,
+ const struct net_device *dev,
+ const unsigned char *haddr);
+ struct net_device *slave_dev;
+
+ slave_dev = bond->header_slave_dev;
+
+ if (!slave_dev->header_ops || !(cache_update = READ_ONCE(slave_dev->header_ops->cache_update)))
+ return;
+
+ cache_update(hh, slave_dev, haddr);
+}
+
static void bond_setup_by_slave(struct net_device *bond_dev,
struct net_device *slave_dev)
{
+ struct bonding *bond = netdev_priv(bond_dev);
bool was_up = !!(bond_dev->flags & IFF_UP);
dev_close(bond_dev);
- bond_dev->header_ops = slave_dev->header_ops;
+ /* Some functions are given dev as an argument
+ * while others not. When dev is not given, we cannot
+ * find out what is the slave device through struct bonding
+ * (the private data of bond_dev). Therefore, we need a raw
+ * header_ops variable instead of its pointer to const header_ops
+ * and assign slave's functions directly.
+ * For the other case, we set the wrapper functions that pass
+ * slave_dev to the wrapped functions.
+ */
+ bond->bond_header_ops.create = bond_hard_header;
+ bond->bond_header_ops.cache_update = bond_header_cache_update;
+ if (slave_dev->header_ops) {
+ WRITE_ONCE(bond->bond_header_ops.parse, slave_dev->header_ops->parse);
+ WRITE_ONCE(bond->bond_header_ops.cache, slave_dev->header_ops->cache);
+ WRITE_ONCE(bond->bond_header_ops.validate, slave_dev->header_ops->validate);
+ WRITE_ONCE(bond->bond_header_ops.parse_protocol, slave_dev->header_ops->parse_protocol);
+ } else {
+ WRITE_ONCE(bond->bond_header_ops.parse, NULL);
+ WRITE_ONCE(bond->bond_header_ops.cache, NULL);
+ WRITE_ONCE(bond->bond_header_ops.validate, NULL);
+ WRITE_ONCE(bond->bond_header_ops.parse_protocol, NULL);
+ }
+
+ bond->header_slave_dev = slave_dev;
+ bond_dev->header_ops = &bond->bond_header_ops;
bond_dev->type = slave_dev->type;
bond_dev->hard_header_len = slave_dev->hard_header_len;
@@ -2682,6 +2738,14 @@ static int bond_release_and_destroy(struct net_device *bond_dev,
struct bonding *bond = netdev_priv(bond_dev);
int ret;
+ /* If slave_dev is the earliest registered one, we must clear
+ * the variables related to header_ops to avoid dangling pointer.
+ */
+ if (bond->header_slave_dev == slave_dev) {
+ bond->header_slave_dev = NULL;
+ bond_dev->header_ops = NULL;
+ }
+
ret = __bond_release_one(bond_dev, slave_dev, false, true);
if (ret == 0 && !bond_has_slaves(bond) &&
bond_dev->reg_state != NETREG_UNREGISTERING) {
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 77a99c8ab..0d2c1c852 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3150,35 +3150,41 @@ static inline int dev_hard_header(struct sk_buff *skb, struct net_device *dev,
const void *daddr, const void *saddr,
unsigned int len)
{
- if (!dev->header_ops || !dev->header_ops->create)
+ int (*create)(struct sk_buff *skb, struct net_device *dev,
+ unsigned short type, const void *daddr,
+ const void *saddr, unsigned int len);
+ if (!dev->header_ops || !(create = READ_ONCE(dev->header_ops->create)))
return 0;
- return dev->header_ops->create(skb, dev, type, daddr, saddr, len);
+ return create(skb, dev, type, daddr, saddr, len);
}
static inline int dev_parse_header(const struct sk_buff *skb,
unsigned char *haddr)
{
+ int (*parse)(const struct sk_buff *skb, unsigned char *haddr);
const struct net_device *dev = skb->dev;
- if (!dev->header_ops || !dev->header_ops->parse)
+ if (!dev->header_ops || !(parse = READ_ONCE(dev->header_ops->parse)))
return 0;
- return dev->header_ops->parse(skb, haddr);
+ return parse(skb, haddr);
}
static inline __be16 dev_parse_header_protocol(const struct sk_buff *skb)
{
+ __be16 (*parse_protocol)(const struct sk_buff *skb);
const struct net_device *dev = skb->dev;
- if (!dev->header_ops || !dev->header_ops->parse_protocol)
+ if (!dev->header_ops || !(parse_protocol = READ_ONCE(dev->header_ops->parse_protocol)))
return 0;
- return dev->header_ops->parse_protocol(skb);
+ return parse_protocol(skb);
}
/* ll_header must have at least hard_header_len allocated */
static inline bool dev_validate_header(const struct net_device *dev,
char *ll_header, int len)
{
+ bool (*validate)(const char *ll_header, unsigned int len);
if (likely(len >= dev->hard_header_len))
return true;
if (len < dev->min_header_len)
@@ -3189,15 +3195,15 @@ static inline bool dev_validate_header(const struct net_device *dev,
return true;
}
- if (dev->header_ops && dev->header_ops->validate)
- return dev->header_ops->validate(ll_header, len);
+ if (dev->header_ops && (validate = READ_ONCE(dev->header_ops->validate)))
+ return validate(ll_header, len);
return false;
}
static inline bool dev_has_header(const struct net_device *dev)
{
- return dev->header_ops && dev->header_ops->create;
+ return dev->header_ops && READ_ONCE(dev->header_ops->create);
}
/*
diff --git a/include/net/bonding.h b/include/net/bonding.h
index 95f67b308..c37800e17 100644
--- a/include/net/bonding.h
+++ b/include/net/bonding.h
@@ -215,6 +215,11 @@ struct bond_ipsec {
*/
struct bonding {
struct net_device *dev; /* first - useful for panic debug */
+ struct net_device *header_slave_dev; /* slave net_device for header_ops */
+ /* maintained as a non-const variable
+ * because bond's header_ops should change depending on slaves.
+ */
+ struct header_ops bond_header_ops;
struct slave __rcu *curr_active_slave;
struct slave __rcu *current_arp_slave;
struct slave __rcu *primary_slave;
diff --git a/include/net/cfg802154.h b/include/net/cfg802154.h
index 76d2cd2e2..dec638763 100644
--- a/include/net/cfg802154.h
+++ b/include/net/cfg802154.h
@@ -522,7 +522,7 @@ wpan_dev_hard_header(struct sk_buff *skb, struct net_device *dev,
{
struct wpan_dev *wpan_dev = dev->ieee802154_ptr;
- return wpan_dev->header_ops->create(skb, dev, daddr, saddr, len);
+ return READ_ONCE(wpan_dev->header_ops->create)(skb, dev, daddr, saddr, len);
}
#endif
diff --git a/net/core/neighbour.c b/net/core/neighbour.c
index 96786016d..ff948e35e 100644
--- a/net/core/neighbour.c
+++ b/net/core/neighbour.c
@@ -1270,7 +1270,7 @@ static void neigh_update_hhs(struct neighbour *neigh)
= NULL;
if (neigh->dev->header_ops)
- update = neigh->dev->header_ops->cache_update;
+ update = READ_ONCE(neigh->dev->header_ops->cache_update);
if (update) {
hh = &neigh->hh;
@@ -1540,7 +1540,7 @@ static void neigh_hh_init(struct neighbour *n)
* hh_cache entry.
*/
if (!hh->hh_len)
- dev->header_ops->cache(n, hh, prot);
+ READ_ONCE(dev->header_ops->cache)(n, hh, prot);
write_unlock_bh(&n->lock);
}
@@ -1556,7 +1556,7 @@ int neigh_resolve_output(struct neighbour *neigh, struct sk_buff *skb)
struct net_device *dev = neigh->dev;
unsigned int seq;
- if (dev->header_ops->cache && !READ_ONCE(neigh->hh.hh_len))
+ if (READ_ONCE(dev->header_ops->cache) && !READ_ONCE(neigh->hh.hh_len))
neigh_hh_init(neigh);
do {
diff --git a/net/ipv4/arp.c b/net/ipv4/arp.c
index 7822b2144..421bea6eb 100644
--- a/net/ipv4/arp.c
+++ b/net/ipv4/arp.c
@@ -278,7 +278,7 @@ static int arp_constructor(struct neighbour *neigh)
memcpy(neigh->ha, dev->broadcast, dev->addr_len);
}
- if (dev->header_ops->cache)
+ if (READ_ONCE(dev->header_ops->cache))
neigh->ops = &arp_hh_ops;
else
neigh->ops = &arp_generic_ops;
diff --git a/net/ipv6/ndisc.c b/net/ipv6/ndisc.c
index d961e6c2d..d81f509ec 100644
--- a/net/ipv6/ndisc.c
+++ b/net/ipv6/ndisc.c
@@ -361,7 +361,7 @@ static int ndisc_constructor(struct neighbour *neigh)
neigh->nud_state = NUD_NOARP;
memcpy(neigh->ha, dev->broadcast, dev->addr_len);
}
- if (dev->header_ops->cache)
+ if (READ_ONCE(dev->header_ops->cache))
neigh->ops = &ndisc_hh_ops;
else
neigh->ops = &ndisc_generic_ops;
```
On Mon, Jan 19, 2026 at 18:30 Eric Dumazet <edumazet@google.com> wrote:
|
{
"author": "Kota Toda <kota.toda@gmo-cybersecurity.com>",
"date": "Wed, 28 Jan 2026 19:46:44 +0900",
"thread_id": "CANn89iLVEJKoBtYNbAdLgxsPr03Fkgi9CJmTh1a0y0L5fV-HNA@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH net] bonding: Fix header_ops type confusion
|
In bond_setup_by_slave(), the slave’s header_ops are unconditionally
copied into the bonding device. As a result, the bonding device may invoke
the slave-specific header operations on itself, causing
netdev_priv(bond_dev) (a struct bonding) to be incorrectly interpreted
as the slave's private-data type.
This type-confusion bug can lead to out-of-bounds writes into the skb,
resulting in memory corruption.
This patch adds two members to struct bonding, bond_header_ops and
header_slave_dev, to avoid type-confusion while keeping track of the
slave's header_ops.
Fixes: 1284cd3a2b740 ("bonding: two small fixes for IPoIB support")
Signed-off-by: Kota Toda <kota.toda@gmo-cybersecurity.com>
Co-developed-by: Yuki Koike <yuki.koike@gmo-cybersecurity.com>
Signed-off-by: Yuki Koike <yuki.koike@gmo-cybersecurity.com>
Reviewed-by: Paolo Abeni <pabeni@redhat.com>
Reported-by: Kota Toda <kota.toda@gmo-cybersecurity.com>
---
drivers/net/bonding/bond_main.c | 61 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
include/net/bonding.h | 5 +++++
2 files changed, 65 insertions(+), 1 deletion(-)
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 8ea183da8d53..690f3e0971d0 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -1619,14 +1619,65 @@ static void bond_compute_features(struct bonding *bond)
netdev_change_features(bond_dev);
}
+static int bond_hard_header(struct sk_buff *skb, struct net_device *dev,
+ unsigned short type, const void *daddr,
+ const void *saddr, unsigned int len)
+{
+ struct bonding *bond = netdev_priv(dev);
+ struct net_device *slave_dev;
+
+ slave_dev = bond->header_slave_dev;
+
+ return dev_hard_header(skb, slave_dev, type, daddr, saddr, len);
+}
+
+static void bond_header_cache_update(struct hh_cache *hh, const struct net_device *dev,
+ const unsigned char *haddr)
+{
+ const struct bonding *bond = netdev_priv(dev);
+ struct net_device *slave_dev;
+
+ slave_dev = bond->header_slave_dev;
+
+ if (!slave_dev->header_ops || !slave_dev->header_ops->cache_update)
+ return;
+
+ slave_dev->header_ops->cache_update(hh, slave_dev, haddr);
+}
+
static void bond_setup_by_slave(struct net_device *bond_dev,
struct net_device *slave_dev)
{
+ struct bonding *bond = netdev_priv(bond_dev);
bool was_up = !!(bond_dev->flags & IFF_UP);
dev_close(bond_dev);
- bond_dev->header_ops = slave_dev->header_ops;
+ /* Some functions are given dev as an argument
+ * while others not. When dev is not given, we cannot
+ * find out what is the slave device through struct bonding
+ * (the private data of bond_dev). Therefore, we need a raw
+ * header_ops variable instead of its pointer to const header_ops
+ * and assign slave's functions directly.
+ * For the other case, we set the wrapper functions that pass
+ * slave_dev to the wrapped functions.
+ */
+ bond->bond_header_ops.create = bond_hard_header;
+ bond->bond_header_ops.cache_update = bond_header_cache_update;
+ if (slave_dev->header_ops) {
+ bond->bond_header_ops.parse = slave_dev->header_ops->parse;
+ bond->bond_header_ops.cache = slave_dev->header_ops->cache;
+ bond->bond_header_ops.validate = slave_dev->header_ops->validate;
+ bond->bond_header_ops.parse_protocol = slave_dev->header_ops->parse_protocol;
+ } else {
+ bond->bond_header_ops.parse = NULL;
+ bond->bond_header_ops.cache = NULL;
+ bond->bond_header_ops.validate = NULL;
+ bond->bond_header_ops.parse_protocol = NULL;
+ }
+
+ bond->header_slave_dev = slave_dev;
+ bond_dev->header_ops = &bond->bond_header_ops;
bond_dev->type = slave_dev->type;
bond_dev->hard_header_len = slave_dev->hard_header_len;
@@ -2676,6 +2727,14 @@ static int bond_release_and_destroy(struct net_device *bond_dev,
struct bonding *bond = netdev_priv(bond_dev);
int ret;
+ /* If slave_dev is the earliest registered one, we must clear
+ * the variables related to header_ops to avoid dangling pointer.
+ */
+ if (bond->header_slave_dev == slave_dev) {
+ bond->header_slave_dev = NULL;
+ bond_dev->header_ops = NULL;
+ }
+
ret = __bond_release_one(bond_dev, slave_dev, false, true);
if (ret == 0 && !bond_has_slaves(bond) &&
bond_dev->reg_state != NETREG_UNREGISTERING) {
diff --git a/include/net/bonding.h b/include/net/bonding.h
index 95f67b308c19..cf8206187ce9 100644
--- a/include/net/bonding.h
+++ b/include/net/bonding.h
@@ -215,6 +215,11 @@ struct bond_ipsec {
*/
struct bonding {
struct net_device *dev; /* first - useful for panic debug */
+ struct net_device *header_slave_dev; /* slave net_device for header_ops */
+ /* maintained as a non-const variable
+ * because bond's header_ops should change depending on slaves.
+ */
+ struct header_ops bond_header_ops;
struct slave __rcu *curr_active_slave;
struct slave __rcu *current_arp_slave;
struct slave __rcu *primary_slave;
|
On Wed, Jan 28, 2026 at 11:46 AM 戸田晃太 <kota.toda@gmo-cybersecurity.com> wrote:
Could you try to cook a patch series, perhaps?
The READ_ONCE()/WRITE_ONCE() on dev->header_ops->cache could be done separately.
Thanks.
|
{
"author": "Eric Dumazet <edumazet@google.com>",
"date": "Mon, 2 Feb 2026 18:11:26 +0100",
"thread_id": "CANn89iLVEJKoBtYNbAdLgxsPr03Fkgi9CJmTh1a0y0L5fV-HNA@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH] staging: sm750fb: rename Bpp to bpp
|
Rename the Bpp parameter to bpp to avoid CamelCase, as reported by
checkpatch.pl.
Signed-off-by: yehudis9982 <y0533159982@gmail.com>
---
drivers/staging/sm750fb/sm750_accel.c | 22 +++++++++++-----------
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/drivers/staging/sm750fb/sm750_accel.c b/drivers/staging/sm750fb/sm750_accel.c
index 046b9282b..866b12c2a 100644
--- a/drivers/staging/sm750fb/sm750_accel.c
+++ b/drivers/staging/sm750fb/sm750_accel.c
@@ -85,7 +85,7 @@ void sm750_hw_set2dformat(struct lynx_accel *accel, int fmt)
}
int sm750_hw_fillrect(struct lynx_accel *accel,
- u32 base, u32 pitch, u32 Bpp,
+ u32 base, u32 pitch, u32 bpp,
u32 x, u32 y, u32 width, u32 height,
u32 color, u32 rop)
{
@@ -102,14 +102,14 @@ int sm750_hw_fillrect(struct lynx_accel *accel,
write_dpr(accel, DE_WINDOW_DESTINATION_BASE, base); /* dpr40 */
write_dpr(accel, DE_PITCH,
- ((pitch / Bpp << DE_PITCH_DESTINATION_SHIFT) &
+ ((pitch / bpp << DE_PITCH_DESTINATION_SHIFT) &
DE_PITCH_DESTINATION_MASK) |
- (pitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */
+ (pitch / bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */
write_dpr(accel, DE_WINDOW_WIDTH,
- ((pitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) &
+ ((pitch / bpp << DE_WINDOW_WIDTH_DST_SHIFT) &
DE_WINDOW_WIDTH_DST_MASK) |
- (pitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */
+ (pitch / bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */
write_dpr(accel, DE_FOREGROUND, color); /* DPR14 */
@@ -138,7 +138,7 @@ int sm750_hw_fillrect(struct lynx_accel *accel,
* @sy: Starting y coordinate of source surface
* @dBase: Address of destination: offset in frame buffer
* @dPitch: Pitch value of destination surface in BYTE
- * @Bpp: Color depth of destination surface
+ * @bpp: Color depth of destination surface
* @dx: Starting x coordinate of destination surface
* @dy: Starting y coordinate of destination surface
* @width: width of rectangle in pixel value
@@ -149,7 +149,7 @@ int sm750_hw_copyarea(struct lynx_accel *accel,
unsigned int sBase, unsigned int sPitch,
unsigned int sx, unsigned int sy,
unsigned int dBase, unsigned int dPitch,
- unsigned int Bpp, unsigned int dx, unsigned int dy,
+ unsigned int bpp, unsigned int dx, unsigned int dy,
unsigned int width, unsigned int height,
unsigned int rop2)
{
@@ -249,9 +249,9 @@ int sm750_hw_copyarea(struct lynx_accel *accel,
* pixel values. Need Byte to pixel conversion.
*/
write_dpr(accel, DE_PITCH,
- ((dPitch / Bpp << DE_PITCH_DESTINATION_SHIFT) &
+ ((dPitch / bpp << DE_PITCH_DESTINATION_SHIFT) &
DE_PITCH_DESTINATION_MASK) |
- (sPitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */
+ (sPitch / bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */
/*
* Screen Window width in Pixels.
@@ -259,9 +259,9 @@ int sm750_hw_copyarea(struct lynx_accel *accel,
* for a given point.
*/
write_dpr(accel, DE_WINDOW_WIDTH,
- ((dPitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) &
+ ((dPitch / bpp << DE_WINDOW_WIDTH_DST_SHIFT) &
DE_WINDOW_WIDTH_DST_MASK) |
- (sPitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */
+ (sPitch / bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */
if (accel->de_wait() != 0)
return -1;
--
2.43.0
|
On Mon, Feb 02, 2026 at 04:54:13PM +0200, yehudis9982 wrote:
What does "bpp" stand for? Perhaps spell it out further?
thanks,
greg k-h
|
{
"author": "Greg KH <gregkh@linuxfoundation.org>",
"date": "Mon, 2 Feb 2026 16:01:17 +0100",
"thread_id": "20260202165719.133879-1-y0533159982@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH] staging: sm750fb: rename Bpp to bpp
|
Rename the Bpp parameter to bpp to avoid CamelCase, as reported by
checkpatch.pl.
Signed-off-by: yehudis9982 <y0533159982@gmail.com>
---
drivers/staging/sm750fb/sm750_accel.c | 22 +++++++++++-----------
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/drivers/staging/sm750fb/sm750_accel.c b/drivers/staging/sm750fb/sm750_accel.c
index 046b9282b..866b12c2a 100644
--- a/drivers/staging/sm750fb/sm750_accel.c
+++ b/drivers/staging/sm750fb/sm750_accel.c
@@ -85,7 +85,7 @@ void sm750_hw_set2dformat(struct lynx_accel *accel, int fmt)
}
int sm750_hw_fillrect(struct lynx_accel *accel,
- u32 base, u32 pitch, u32 Bpp,
+ u32 base, u32 pitch, u32 bpp,
u32 x, u32 y, u32 width, u32 height,
u32 color, u32 rop)
{
@@ -102,14 +102,14 @@ int sm750_hw_fillrect(struct lynx_accel *accel,
write_dpr(accel, DE_WINDOW_DESTINATION_BASE, base); /* dpr40 */
write_dpr(accel, DE_PITCH,
- ((pitch / Bpp << DE_PITCH_DESTINATION_SHIFT) &
+ ((pitch / bpp << DE_PITCH_DESTINATION_SHIFT) &
DE_PITCH_DESTINATION_MASK) |
- (pitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */
+ (pitch / bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */
write_dpr(accel, DE_WINDOW_WIDTH,
- ((pitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) &
+ ((pitch / bpp << DE_WINDOW_WIDTH_DST_SHIFT) &
DE_WINDOW_WIDTH_DST_MASK) |
- (pitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */
+ (pitch / bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */
write_dpr(accel, DE_FOREGROUND, color); /* DPR14 */
@@ -138,7 +138,7 @@ int sm750_hw_fillrect(struct lynx_accel *accel,
* @sy: Starting y coordinate of source surface
* @dBase: Address of destination: offset in frame buffer
* @dPitch: Pitch value of destination surface in BYTE
- * @Bpp: Color depth of destination surface
+ * @bpp: Color depth of destination surface
* @dx: Starting x coordinate of destination surface
* @dy: Starting y coordinate of destination surface
* @width: width of rectangle in pixel value
@@ -149,7 +149,7 @@ int sm750_hw_copyarea(struct lynx_accel *accel,
unsigned int sBase, unsigned int sPitch,
unsigned int sx, unsigned int sy,
unsigned int dBase, unsigned int dPitch,
- unsigned int Bpp, unsigned int dx, unsigned int dy,
+ unsigned int bpp, unsigned int dx, unsigned int dy,
unsigned int width, unsigned int height,
unsigned int rop2)
{
@@ -249,9 +249,9 @@ int sm750_hw_copyarea(struct lynx_accel *accel,
* pixel values. Need Byte to pixel conversion.
*/
write_dpr(accel, DE_PITCH,
- ((dPitch / Bpp << DE_PITCH_DESTINATION_SHIFT) &
+ ((dPitch / bpp << DE_PITCH_DESTINATION_SHIFT) &
DE_PITCH_DESTINATION_MASK) |
- (sPitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */
+ (sPitch / bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */
/*
* Screen Window width in Pixels.
@@ -259,9 +259,9 @@ int sm750_hw_copyarea(struct lynx_accel *accel,
* for a given point.
*/
write_dpr(accel, DE_WINDOW_WIDTH,
- ((dPitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) &
+ ((dPitch / bpp << DE_WINDOW_WIDTH_DST_SHIFT) &
DE_WINDOW_WIDTH_DST_MASK) |
- (sPitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */
+ (sPitch / bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */
if (accel->de_wait() != 0)
return -1;
--
2.43.0
|
Rename the Bpp parameter to bytes_per_pixel for clarity and to avoid CamelCase, as reported by checkpatch.pl.
Signed-off-by: yehudis9982 <y0533159982@gmail.com>
---
drivers/staging/sm750fb/sm750_accel.c | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/drivers/staging/sm750fb/sm750_accel.c b/drivers/staging/sm750fb/sm750_accel.c
index 046b9282b..3fe9429e1 100644
--- a/drivers/staging/sm750fb/sm750_accel.c
+++ b/drivers/staging/sm750fb/sm750_accel.c
@@ -48,7 +48,7 @@ void sm750_hw_de_init(struct lynx_accel *accel)
DE_STRETCH_FORMAT_ADDRESSING_MASK |
DE_STRETCH_FORMAT_SOURCE_HEIGHT_MASK;
- /* DE_STRETCH bpp format need be initialized in setMode routine */
+ /* DE_STRETCH bytes_per_pixel format need be initialized in setMode routine */
write_dpr(accel, DE_STRETCH_FORMAT,
(read_dpr(accel, DE_STRETCH_FORMAT) & ~clr) | reg);
@@ -76,7 +76,7 @@ void sm750_hw_set2dformat(struct lynx_accel *accel, int fmt)
{
u32 reg;
- /* fmt=0,1,2 for 8,16,32,bpp on sm718/750/502 */
+ /* fmt=0,1,2 for 8,16,32,bytes_per_pixel on sm718/750/502 */
reg = read_dpr(accel, DE_STRETCH_FORMAT);
reg &= ~DE_STRETCH_FORMAT_PIXEL_FORMAT_MASK;
reg |= ((fmt << DE_STRETCH_FORMAT_PIXEL_FORMAT_SHIFT) &
@@ -85,7 +85,7 @@ void sm750_hw_set2dformat(struct lynx_accel *accel, int fmt)
}
int sm750_hw_fillrect(struct lynx_accel *accel,
- u32 base, u32 pitch, u32 Bpp,
+ u32 base, u32 pitch, u32 bytes_per_pixel,
u32 x, u32 y, u32 width, u32 height,
u32 color, u32 rop)
{
@@ -102,14 +102,14 @@ int sm750_hw_fillrect(struct lynx_accel *accel,
write_dpr(accel, DE_WINDOW_DESTINATION_BASE, base); /* dpr40 */
write_dpr(accel, DE_PITCH,
- ((pitch / Bpp << DE_PITCH_DESTINATION_SHIFT) &
+ ((pitch / bytes_per_pixel << DE_PITCH_DESTINATION_SHIFT) &
DE_PITCH_DESTINATION_MASK) |
- (pitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */
+ (pitch / bytes_per_pixel & DE_PITCH_SOURCE_MASK)); /* dpr10 */
write_dpr(accel, DE_WINDOW_WIDTH,
- ((pitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) &
+ ((pitch / bytes_per_pixel << DE_WINDOW_WIDTH_DST_SHIFT) &
DE_WINDOW_WIDTH_DST_MASK) |
- (pitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */
+ (pitch / bytes_per_pixel & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */
write_dpr(accel, DE_FOREGROUND, color); /* DPR14 */
@@ -138,7 +138,7 @@ int sm750_hw_fillrect(struct lynx_accel *accel,
* @sy: Starting y coordinate of source surface
* @dBase: Address of destination: offset in frame buffer
* @dPitch: Pitch value of destination surface in BYTE
- * @Bpp: Color depth of destination surface
+ * @bytes_per_pixel: Bytes per pixel (color depth / 8) of destination surface
* @dx: Starting x coordinate of destination surface
* @dy: Starting y coordinate of destination surface
* @width: width of rectangle in pixel value
@@ -149,7 +149,7 @@ int sm750_hw_copyarea(struct lynx_accel *accel,
unsigned int sBase, unsigned int sPitch,
unsigned int sx, unsigned int sy,
unsigned int dBase, unsigned int dPitch,
- unsigned int Bpp, unsigned int dx, unsigned int dy,
+ unsigned int bytes_per_pixel, unsigned int dx, unsigned int dy,
unsigned int width, unsigned int height,
unsigned int rop2)
{
@@ -249,9 +249,9 @@ int sm750_hw_copyarea(struct lynx_accel *accel,
* pixel values. Need Byte to pixel conversion.
*/
write_dpr(accel, DE_PITCH,
- ((dPitch / Bpp << DE_PITCH_DESTINATION_SHIFT) &
+ ((dPitch / bytes_per_pixel << DE_PITCH_DESTINATION_SHIFT) &
DE_PITCH_DESTINATION_MASK) |
- (sPitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */
+ (sPitch / bytes_per_pixel & DE_PITCH_SOURCE_MASK)); /* dpr10 */
/*
* Screen Window width in Pixels.
@@ -259,9 +259,9 @@ int sm750_hw_copyarea(struct lynx_accel *accel,
* for a given point.
*/
write_dpr(accel, DE_WINDOW_WIDTH,
- ((dPitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) &
+ ((dPitch / bytes_per_pixel << DE_WINDOW_WIDTH_DST_SHIFT) &
DE_WINDOW_WIDTH_DST_MASK) |
- (sPitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */
+ (sPitch / bytes_per_pixel & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */
if (accel->de_wait() != 0)
return -1;
--
2.43.0
|
{
"author": "yehudis9982 <y0533159982@gmail.com>",
"date": "Mon, 2 Feb 2026 18:46:45 +0200",
"thread_id": "20260202165719.133879-1-y0533159982@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH] staging: sm750fb: rename Bpp to bpp
|
Rename the Bpp parameter to bpp to avoid CamelCase, as reported by
checkpatch.pl.
Signed-off-by: yehudis9982 <y0533159982@gmail.com>
---
drivers/staging/sm750fb/sm750_accel.c | 22 +++++++++++-----------
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/drivers/staging/sm750fb/sm750_accel.c b/drivers/staging/sm750fb/sm750_accel.c
index 046b9282b..866b12c2a 100644
--- a/drivers/staging/sm750fb/sm750_accel.c
+++ b/drivers/staging/sm750fb/sm750_accel.c
@@ -85,7 +85,7 @@ void sm750_hw_set2dformat(struct lynx_accel *accel, int fmt)
}
int sm750_hw_fillrect(struct lynx_accel *accel,
- u32 base, u32 pitch, u32 Bpp,
+ u32 base, u32 pitch, u32 bpp,
u32 x, u32 y, u32 width, u32 height,
u32 color, u32 rop)
{
@@ -102,14 +102,14 @@ int sm750_hw_fillrect(struct lynx_accel *accel,
write_dpr(accel, DE_WINDOW_DESTINATION_BASE, base); /* dpr40 */
write_dpr(accel, DE_PITCH,
- ((pitch / Bpp << DE_PITCH_DESTINATION_SHIFT) &
+ ((pitch / bpp << DE_PITCH_DESTINATION_SHIFT) &
DE_PITCH_DESTINATION_MASK) |
- (pitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */
+ (pitch / bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */
write_dpr(accel, DE_WINDOW_WIDTH,
- ((pitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) &
+ ((pitch / bpp << DE_WINDOW_WIDTH_DST_SHIFT) &
DE_WINDOW_WIDTH_DST_MASK) |
- (pitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */
+ (pitch / bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */
write_dpr(accel, DE_FOREGROUND, color); /* DPR14 */
@@ -138,7 +138,7 @@ int sm750_hw_fillrect(struct lynx_accel *accel,
* @sy: Starting y coordinate of source surface
* @dBase: Address of destination: offset in frame buffer
* @dPitch: Pitch value of destination surface in BYTE
- * @Bpp: Color depth of destination surface
+ * @bpp: Color depth of destination surface
* @dx: Starting x coordinate of destination surface
* @dy: Starting y coordinate of destination surface
* @width: width of rectangle in pixel value
@@ -149,7 +149,7 @@ int sm750_hw_copyarea(struct lynx_accel *accel,
unsigned int sBase, unsigned int sPitch,
unsigned int sx, unsigned int sy,
unsigned int dBase, unsigned int dPitch,
- unsigned int Bpp, unsigned int dx, unsigned int dy,
+ unsigned int bpp, unsigned int dx, unsigned int dy,
unsigned int width, unsigned int height,
unsigned int rop2)
{
@@ -249,9 +249,9 @@ int sm750_hw_copyarea(struct lynx_accel *accel,
* pixel values. Need Byte to pixel conversion.
*/
write_dpr(accel, DE_PITCH,
- ((dPitch / Bpp << DE_PITCH_DESTINATION_SHIFT) &
+ ((dPitch / bpp << DE_PITCH_DESTINATION_SHIFT) &
DE_PITCH_DESTINATION_MASK) |
- (sPitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */
+ (sPitch / bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */
/*
* Screen Window width in Pixels.
@@ -259,9 +259,9 @@ int sm750_hw_copyarea(struct lynx_accel *accel,
* for a given point.
*/
write_dpr(accel, DE_WINDOW_WIDTH,
- ((dPitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) &
+ ((dPitch / bpp << DE_WINDOW_WIDTH_DST_SHIFT) &
DE_WINDOW_WIDTH_DST_MASK) |
- (sPitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */
+ (sPitch / bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */
if (accel->de_wait() != 0)
return -1;
--
2.43.0
|
Rename the Bpp parameter to bytes_per_pixel for clarity and to avoid CamelCase, as reported by checkpatch.pl.
Signed-off-by: yehudis9982 <y0533159982@gmail.com>
---
drivers/staging/sm750fb/sm750_accel.c | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/drivers/staging/sm750fb/sm750_accel.c b/drivers/staging/sm750fb/sm750_accel.c
index 046b9282b..3fe9429e1 100644
--- a/drivers/staging/sm750fb/sm750_accel.c
+++ b/drivers/staging/sm750fb/sm750_accel.c
@@ -48,7 +48,7 @@ void sm750_hw_de_init(struct lynx_accel *accel)
DE_STRETCH_FORMAT_ADDRESSING_MASK |
DE_STRETCH_FORMAT_SOURCE_HEIGHT_MASK;
- /* DE_STRETCH bpp format need be initialized in setMode routine */
+ /* DE_STRETCH bytes_per_pixel format need be initialized in setMode routine */
write_dpr(accel, DE_STRETCH_FORMAT,
(read_dpr(accel, DE_STRETCH_FORMAT) & ~clr) | reg);
@@ -76,7 +76,7 @@ void sm750_hw_set2dformat(struct lynx_accel *accel, int fmt)
{
u32 reg;
- /* fmt=0,1,2 for 8,16,32,bpp on sm718/750/502 */
+ /* fmt=0,1,2 for 8,16,32,bytes_per_pixel on sm718/750/502 */
reg = read_dpr(accel, DE_STRETCH_FORMAT);
reg &= ~DE_STRETCH_FORMAT_PIXEL_FORMAT_MASK;
reg |= ((fmt << DE_STRETCH_FORMAT_PIXEL_FORMAT_SHIFT) &
@@ -85,7 +85,7 @@ void sm750_hw_set2dformat(struct lynx_accel *accel, int fmt)
}
int sm750_hw_fillrect(struct lynx_accel *accel,
- u32 base, u32 pitch, u32 Bpp,
+ u32 base, u32 pitch, u32 bytes_per_pixel,
u32 x, u32 y, u32 width, u32 height,
u32 color, u32 rop)
{
@@ -102,14 +102,14 @@ int sm750_hw_fillrect(struct lynx_accel *accel,
write_dpr(accel, DE_WINDOW_DESTINATION_BASE, base); /* dpr40 */
write_dpr(accel, DE_PITCH,
- ((pitch / Bpp << DE_PITCH_DESTINATION_SHIFT) &
+ ((pitch / bytes_per_pixel << DE_PITCH_DESTINATION_SHIFT) &
DE_PITCH_DESTINATION_MASK) |
- (pitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */
+ (pitch / bytes_per_pixel & DE_PITCH_SOURCE_MASK)); /* dpr10 */
write_dpr(accel, DE_WINDOW_WIDTH,
- ((pitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) &
+ ((pitch / bytes_per_pixel << DE_WINDOW_WIDTH_DST_SHIFT) &
DE_WINDOW_WIDTH_DST_MASK) |
- (pitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */
+ (pitch / bytes_per_pixel & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */
write_dpr(accel, DE_FOREGROUND, color); /* DPR14 */
@@ -138,7 +138,7 @@ int sm750_hw_fillrect(struct lynx_accel *accel,
* @sy: Starting y coordinate of source surface
* @dBase: Address of destination: offset in frame buffer
* @dPitch: Pitch value of destination surface in BYTE
- * @Bpp: Color depth of destination surface
+ * @bytes_per_pixel: Bytes per pixel (color depth / 8) of destination surface
* @dx: Starting x coordinate of destination surface
* @dy: Starting y coordinate of destination surface
* @width: width of rectangle in pixel value
@@ -149,7 +149,7 @@ int sm750_hw_copyarea(struct lynx_accel *accel,
unsigned int sBase, unsigned int sPitch,
unsigned int sx, unsigned int sy,
unsigned int dBase, unsigned int dPitch,
- unsigned int Bpp, unsigned int dx, unsigned int dy,
+ unsigned int bytes_per_pixel, unsigned int dx, unsigned int dy,
unsigned int width, unsigned int height,
unsigned int rop2)
{
@@ -249,9 +249,9 @@ int sm750_hw_copyarea(struct lynx_accel *accel,
* pixel values. Need Byte to pixel conversion.
*/
write_dpr(accel, DE_PITCH,
- ((dPitch / Bpp << DE_PITCH_DESTINATION_SHIFT) &
+ ((dPitch / bytes_per_pixel << DE_PITCH_DESTINATION_SHIFT) &
DE_PITCH_DESTINATION_MASK) |
- (sPitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */
+ (sPitch / bytes_per_pixel & DE_PITCH_SOURCE_MASK)); /* dpr10 */
/*
* Screen Window width in Pixels.
@@ -259,9 +259,9 @@ int sm750_hw_copyarea(struct lynx_accel *accel,
* for a given point.
*/
write_dpr(accel, DE_WINDOW_WIDTH,
- ((dPitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) &
+ ((dPitch / bytes_per_pixel << DE_WINDOW_WIDTH_DST_SHIFT) &
DE_WINDOW_WIDTH_DST_MASK) |
- (sPitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */
+ (sPitch / bytes_per_pixel & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */
if (accel->de_wait() != 0)
return -1;
--
2.43.0
|
{
"author": "yehudis9982 <y0533159982@gmail.com>",
"date": "Mon, 2 Feb 2026 18:57:18 +0200",
"thread_id": "20260202165719.133879-1-y0533159982@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH] staging: sm750fb: rename Bpp to bpp
|
Rename the Bpp parameter to bpp to avoid CamelCase, as reported by
checkpatch.pl.
Signed-off-by: yehudis9982 <y0533159982@gmail.com>
---
drivers/staging/sm750fb/sm750_accel.c | 22 +++++++++++-----------
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/drivers/staging/sm750fb/sm750_accel.c b/drivers/staging/sm750fb/sm750_accel.c
index 046b9282b..866b12c2a 100644
--- a/drivers/staging/sm750fb/sm750_accel.c
+++ b/drivers/staging/sm750fb/sm750_accel.c
@@ -85,7 +85,7 @@ void sm750_hw_set2dformat(struct lynx_accel *accel, int fmt)
}
int sm750_hw_fillrect(struct lynx_accel *accel,
- u32 base, u32 pitch, u32 Bpp,
+ u32 base, u32 pitch, u32 bpp,
u32 x, u32 y, u32 width, u32 height,
u32 color, u32 rop)
{
@@ -102,14 +102,14 @@ int sm750_hw_fillrect(struct lynx_accel *accel,
write_dpr(accel, DE_WINDOW_DESTINATION_BASE, base); /* dpr40 */
write_dpr(accel, DE_PITCH,
- ((pitch / Bpp << DE_PITCH_DESTINATION_SHIFT) &
+ ((pitch / bpp << DE_PITCH_DESTINATION_SHIFT) &
DE_PITCH_DESTINATION_MASK) |
- (pitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */
+ (pitch / bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */
write_dpr(accel, DE_WINDOW_WIDTH,
- ((pitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) &
+ ((pitch / bpp << DE_WINDOW_WIDTH_DST_SHIFT) &
DE_WINDOW_WIDTH_DST_MASK) |
- (pitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */
+ (pitch / bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */
write_dpr(accel, DE_FOREGROUND, color); /* DPR14 */
@@ -138,7 +138,7 @@ int sm750_hw_fillrect(struct lynx_accel *accel,
* @sy: Starting y coordinate of source surface
* @dBase: Address of destination: offset in frame buffer
* @dPitch: Pitch value of destination surface in BYTE
- * @Bpp: Color depth of destination surface
+ * @bpp: Color depth of destination surface
* @dx: Starting x coordinate of destination surface
* @dy: Starting y coordinate of destination surface
* @width: width of rectangle in pixel value
@@ -149,7 +149,7 @@ int sm750_hw_copyarea(struct lynx_accel *accel,
unsigned int sBase, unsigned int sPitch,
unsigned int sx, unsigned int sy,
unsigned int dBase, unsigned int dPitch,
- unsigned int Bpp, unsigned int dx, unsigned int dy,
+ unsigned int bpp, unsigned int dx, unsigned int dy,
unsigned int width, unsigned int height,
unsigned int rop2)
{
@@ -249,9 +249,9 @@ int sm750_hw_copyarea(struct lynx_accel *accel,
* pixel values. Need Byte to pixel conversion.
*/
write_dpr(accel, DE_PITCH,
- ((dPitch / Bpp << DE_PITCH_DESTINATION_SHIFT) &
+ ((dPitch / bpp << DE_PITCH_DESTINATION_SHIFT) &
DE_PITCH_DESTINATION_MASK) |
- (sPitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */
+ (sPitch / bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */
/*
* Screen Window width in Pixels.
@@ -259,9 +259,9 @@ int sm750_hw_copyarea(struct lynx_accel *accel,
* for a given point.
*/
write_dpr(accel, DE_WINDOW_WIDTH,
- ((dPitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) &
+ ((dPitch / bpp << DE_WINDOW_WIDTH_DST_SHIFT) &
DE_WINDOW_WIDTH_DST_MASK) |
- (sPitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */
+ (sPitch / bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */
if (accel->de_wait() != 0)
return -1;
--
2.43.0
|
Rename the Bpp parameter to bytes_per_pixel for clarity and to avoid CamelCase, as reported by checkpatch.pl.
Signed-off-by: yehudis9982 <y0533159982@gmail.com>
---
drivers/staging/sm750fb/sm750_accel.c | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/drivers/staging/sm750fb/sm750_accel.c b/drivers/staging/sm750fb/sm750_accel.c
index 046b9282b..3fe9429e1 100644
--- a/drivers/staging/sm750fb/sm750_accel.c
+++ b/drivers/staging/sm750fb/sm750_accel.c
@@ -48,7 +48,7 @@ void sm750_hw_de_init(struct lynx_accel *accel)
DE_STRETCH_FORMAT_ADDRESSING_MASK |
DE_STRETCH_FORMAT_SOURCE_HEIGHT_MASK;
- /* DE_STRETCH bpp format need be initialized in setMode routine */
+ /* DE_STRETCH bytes_per_pixel format need be initialized in setMode routine */
write_dpr(accel, DE_STRETCH_FORMAT,
(read_dpr(accel, DE_STRETCH_FORMAT) & ~clr) | reg);
@@ -76,7 +76,7 @@ void sm750_hw_set2dformat(struct lynx_accel *accel, int fmt)
{
u32 reg;
- /* fmt=0,1,2 for 8,16,32,bpp on sm718/750/502 */
+ /* fmt=0,1,2 for 8,16,32,bytes_per_pixel on sm718/750/502 */
reg = read_dpr(accel, DE_STRETCH_FORMAT);
reg &= ~DE_STRETCH_FORMAT_PIXEL_FORMAT_MASK;
reg |= ((fmt << DE_STRETCH_FORMAT_PIXEL_FORMAT_SHIFT) &
@@ -85,7 +85,7 @@ void sm750_hw_set2dformat(struct lynx_accel *accel, int fmt)
}
int sm750_hw_fillrect(struct lynx_accel *accel,
- u32 base, u32 pitch, u32 Bpp,
+ u32 base, u32 pitch, u32 bytes_per_pixel,
u32 x, u32 y, u32 width, u32 height,
u32 color, u32 rop)
{
@@ -102,14 +102,14 @@ int sm750_hw_fillrect(struct lynx_accel *accel,
write_dpr(accel, DE_WINDOW_DESTINATION_BASE, base); /* dpr40 */
write_dpr(accel, DE_PITCH,
- ((pitch / Bpp << DE_PITCH_DESTINATION_SHIFT) &
+ ((pitch / bytes_per_pixel << DE_PITCH_DESTINATION_SHIFT) &
DE_PITCH_DESTINATION_MASK) |
- (pitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */
+ (pitch / bytes_per_pixel & DE_PITCH_SOURCE_MASK)); /* dpr10 */
write_dpr(accel, DE_WINDOW_WIDTH,
- ((pitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) &
+ ((pitch / bytes_per_pixel << DE_WINDOW_WIDTH_DST_SHIFT) &
DE_WINDOW_WIDTH_DST_MASK) |
- (pitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */
+ (pitch / bytes_per_pixel & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */
write_dpr(accel, DE_FOREGROUND, color); /* DPR14 */
@@ -138,7 +138,7 @@ int sm750_hw_fillrect(struct lynx_accel *accel,
* @sy: Starting y coordinate of source surface
* @dBase: Address of destination: offset in frame buffer
* @dPitch: Pitch value of destination surface in BYTE
- * @Bpp: Color depth of destination surface
+ * @bytes_per_pixel: Bytes per pixel (color depth / 8) of destination surface
* @dx: Starting x coordinate of destination surface
* @dy: Starting y coordinate of destination surface
* @width: width of rectangle in pixel value
@@ -149,7 +149,7 @@ int sm750_hw_copyarea(struct lynx_accel *accel,
unsigned int sBase, unsigned int sPitch,
unsigned int sx, unsigned int sy,
unsigned int dBase, unsigned int dPitch,
- unsigned int Bpp, unsigned int dx, unsigned int dy,
+ unsigned int bytes_per_pixel, unsigned int dx, unsigned int dy,
unsigned int width, unsigned int height,
unsigned int rop2)
{
@@ -249,9 +249,9 @@ int sm750_hw_copyarea(struct lynx_accel *accel,
* pixel values. Need Byte to pixel conversion.
*/
write_dpr(accel, DE_PITCH,
- ((dPitch / Bpp << DE_PITCH_DESTINATION_SHIFT) &
+ ((dPitch / bytes_per_pixel << DE_PITCH_DESTINATION_SHIFT) &
DE_PITCH_DESTINATION_MASK) |
- (sPitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */
+ (sPitch / bytes_per_pixel & DE_PITCH_SOURCE_MASK)); /* dpr10 */
/*
* Screen Window width in Pixels.
@@ -259,9 +259,9 @@ int sm750_hw_copyarea(struct lynx_accel *accel,
* for a given point.
*/
write_dpr(accel, DE_WINDOW_WIDTH,
- ((dPitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) &
+ ((dPitch / bytes_per_pixel << DE_WINDOW_WIDTH_DST_SHIFT) &
DE_WINDOW_WIDTH_DST_MASK) |
- (sPitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */
+ (sPitch / bytes_per_pixel & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */
if (accel->de_wait() != 0)
return -1;
--
2.43.0
|
{
"author": "yehudis9982 <y0533159982@gmail.com>",
"date": "Mon, 2 Feb 2026 19:12:43 +0200",
"thread_id": "20260202165719.133879-1-y0533159982@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH V1] nvme-pci: Fix NULL pointer dereference in nvme_pci_prp_iter_next
|
Fix a NULL pointer dereference that occurs in nvme_pci_prp_iter_next()
when SWIOTLB bounce buffering becomes active during runtime.
The issue occurs when SWIOTLB activation changes the device's DMA
mapping requirements at runtime, creating a mismatch between
iod->dma_vecs allocation and access logic.
The problem manifests when:
1. Device initially operates with dma_skip_sync=true
(coherent DMA assumed)
2. First SWIOTLB mapping occurs due to DMA address limitations,
memory encryption, or IOMMU bounce buffering requirements
3. SWIOTLB calls dma_reset_need_sync(), permanently setting
dma_skip_sync=false
4. Subsequent I/Os now have dma_need_unmap()=true, requiring
iod->dma_vecs
The issue arises from the timing of allocation versus access:
- nvme_pci_setup_data_prp() allocates iod->dma_vecs only when both
(!dma_use_iova() && dma_need_unmap()) conditions are met
- nvme_pci_prp_iter_next() assumes iod->dma_vecs is valid whenever
the same conditions are true, without NULL checking
- This creates a race where the device's DMA requirements change
dynamically after the initial allocation decision, leading to NULL
pointer access
Unable to handle kernel NULL pointer dereference at
virtual address 0000000000000000
pc : nvme_pci_prp_iter_next+0xe4/0x128 [nvme]
Call trace:
nvme_pci_prp_iter_next+0xe4/0x128 [nvme]
nvme_prep_rq+0x5f4/0xa6c [nvme]
nvme_queue_rqs+0xa8/0x18c [nvme]
blk_mq_dispatch_queue_requests.constprop.0+0x108/0x120
blk_mq_flush_plug_list+0x8c/0x174
__blk_flush_plug+0xe4/0x140
blk_finish_plug+0x38/0x4c
read_pages+0x184/0x288
page_cache_ra_order+0x1e0/0x3a4
filemap_fault+0x518/0xa90
__do_fault+0x3c/0x22c
__handle_mm_fault+0x10ec/0x19b8
handle_mm_fault+0xb4/0x294
Fix this by:
1. Initialize iod->dma_vecs to NULL in nvme_prep_rq()
2. Add NULL pointer check before accessing iod->dma_vecs in
nvme_pci_prp_iter_next()
3. Set iod->dma_vecs to NULL after freeing for defensive programming
Fixes: b8b7570a7ec8 ("nvme-pci: fix dma unmapping when using PRPs and not using the IOVA mapping")
Co-developed-by: Nitin Rawat <nitin.rawat@oss.qualcomm.com>
Signed-off-by: Nitin Rawat <nitin.rawat@oss.qualcomm.com>
Signed-off-by: Pradeep P V K <pradeep.pragallapati@oss.qualcomm.com>
---
drivers/nvme/host/pci.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 2a52cf46d960..e235654e7ee0 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -720,6 +720,7 @@ static void nvme_free_prps(struct request *req, unsigned int attrs)
dma_unmap_phys(nvmeq->dev->dev, iod->dma_vecs[i].addr,
iod->dma_vecs[i].len, rq_dma_dir(req), attrs);
mempool_free(iod->dma_vecs, nvmeq->dev->dmavec_mempool);
+ iod->dma_vecs = NULL;
}
static void nvme_free_sgls(struct request *req, struct nvme_sgl_desc *sge,
@@ -825,7 +826,7 @@ static bool nvme_pci_prp_iter_next(struct request *req, struct device *dma_dev,
return true;
if (!blk_rq_dma_map_iter_next(req, dma_dev, iter))
return false;
- if (!dma_use_iova(&iod->dma_state) && dma_need_unmap(dma_dev)) {
+ if (iod->dma_vecs && !dma_use_iova(&iod->dma_state) && dma_need_unmap(dma_dev)) {
iod->dma_vecs[iod->nr_dma_vecs].addr = iter->addr;
iod->dma_vecs[iod->nr_dma_vecs].len = iter->len;
iod->nr_dma_vecs++;
@@ -1218,6 +1219,7 @@ static blk_status_t nvme_prep_rq(struct request *req)
iod->nr_descriptors = 0;
iod->total_len = 0;
iod->meta_total_len = 0;
+ iod->dma_vecs = NULL;
ret = nvme_setup_cmd(req->q->queuedata, req);
if (ret)
--
2.34.1
|
On Mon, Feb 02, 2026 at 06:27:38PM +0530, Pradeep P V K wrote:
I think this patch just papers over the bug. If dma_need_unmap
can't be trusted before the dma_map_* call, we've not saved
the unmap information and the unmap won't work properly.
So we'll need to extend the core code to tell if a mapping
will set dma_skip_sync=false before doing the mapping.
|
{
"author": "Christoph Hellwig <hch@lst.de>",
"date": "Mon, 2 Feb 2026 15:35:48 +0100",
"thread_id": "aYDcVDVFTWrBwzw_@kbusch-mbp.mbox.gz"
}
|
lkml
|
[PATCH V1] nvme-pci: Fix NULL pointer dereference in nvme_pci_prp_iter_next
|
|
On 2026-02-02 2:35 pm, Christoph Hellwig wrote:
The dma_need_unmap() kerneldoc says:
"This function must be called after all mappings that might
need to be unmapped have been performed."
Trying to infer anything from it beforehand is definitely a bug in the
caller.
I don't see that being possible - at best we could reasonably infer that
a fully-coherent system with no sync ops, no SWIOTLB and no DMA_DEBUG
shouldn't ever set it to true, but as for the other way round, by the
time you've run through all the SWIOTLB logic to guess whether a
particular mapping would be bounced or not, you've basically performed
the mapping anyway. Thus at best, such an API to potentially do a whole
dry-run mapping before every actual mapping would seem like a pretty
pointless anti-optimisation.
Thanks,
Robin.
|
{
"author": "Robin Murphy <robin.murphy@arm.com>",
"date": "Mon, 2 Feb 2026 15:16:50 +0000",
"thread_id": "aYDcVDVFTWrBwzw_@kbusch-mbp.mbox.gz"
}
|
lkml
|
[PATCH V1] nvme-pci: Fix NULL pointer dereference in nvme_pci_prp_iter_next
|
|
On Mon, Feb 02, 2026 at 03:35:48PM +0100, Christoph Hellwig wrote:
Agree
There are two paths that lead to SWIOTLB in dma_direct_map_phys().
The first is is_swiotlb_force_bounce(dev), which dma_need_unmap() can
easily evaluate. The second is more problematic, as it depends on
dma_addr and size, neither of which is available in dma_need_unmap():
102 if (unlikely(!dma_capable(dev, dma_addr, size, true)) ||
103 dma_kmalloc_needs_bounce(dev, size, dir)) {
104 if (is_swiotlb_active(dev))
What about the following change?
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 37163eb49f9f..1510b93a8791 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -461,6 +461,8 @@ bool dma_need_unmap(struct device *dev)
{
if (!dma_map_direct(dev, get_dma_ops(dev)))
return true;
+ if (is_swiotlb_force_bounce(dev) || is_swiotlb_active(dev))
+ return true;
if (!dev->dma_skip_sync)
return true;
return IS_ENABLED(CONFIG_DMA_API_DEBUG);
|
{
"author": "Leon Romanovsky <leon@kernel.org>",
"date": "Mon, 2 Feb 2026 17:22:52 +0200",
"thread_id": "aYDcVDVFTWrBwzw_@kbusch-mbp.mbox.gz"
}
|
lkml
|
[PATCH V1] nvme-pci: Fix NULL pointer dereference in nvme_pci_prp_iter_next
|
|
On 2026-02-02 3:22 pm, Leon Romanovsky wrote:
This will always be true if a default SWIOTLB buffer exists at all, and
thus pretty much defeat the point of whatever optimisation the caller is
trying to make.
Thanks,
Robin.
|
{
"author": "Robin Murphy <robin.murphy@arm.com>",
"date": "Mon, 2 Feb 2026 15:26:25 +0000",
"thread_id": "aYDcVDVFTWrBwzw_@kbusch-mbp.mbox.gz"
}
|
lkml
|
[PATCH V1] nvme-pci: Fix NULL pointer dereference in nvme_pci_prp_iter_next
|
|
On Mon, Feb 02, 2026 at 03:16:50PM +0000, Robin Murphy wrote:
At least for HMM, dma_need_unmap() works as expected. HMM doesn't work
with SWIOTLB.
Thanks
|
{
"author": "Leon Romanovsky <leon@kernel.org>",
"date": "Mon, 2 Feb 2026 17:58:04 +0200",
"thread_id": "aYDcVDVFTWrBwzw_@kbusch-mbp.mbox.gz"
}
|
lkml
|
[PATCH V1] nvme-pci: Fix NULL pointer dereference in nvme_pci_prp_iter_next
|
|
On Mon, Feb 02, 2026 at 03:16:50PM +0000, Robin Murphy wrote:
Well that doesn't really make sense. No matter how many mappings the
driver has done, there will always be more.
|
{
"author": "Keith Busch <kbusch@kernel.org>",
"date": "Mon, 2 Feb 2026 10:13:24 -0700",
"thread_id": "aYDcVDVFTWrBwzw_@kbusch-mbp.mbox.gz"
}
|
lkml
|
[PATCH V1] nvme-pci: Fix NULL pointer dereference in nvme_pci_prp_iter_next
|
Fix a NULL pointer dereference that occurs in nvme_pci_prp_iter_next()
when SWIOTLB bounce buffering becomes active during runtime.
The issue occurs when SWIOTLB activation changes the device's DMA
mapping requirements at runtime, creating a mismatch between
iod->dma_vecs allocation and access logic.
The problem manifests when:
1. Device initially operates with dma_skip_sync=true
(coherent DMA assumed)
2. First SWIOTLB mapping occurs due to DMA address limitations,
memory encryption, or IOMMU bounce buffering requirements
3. SWIOTLB calls dma_reset_need_sync(), permanently setting
dma_skip_sync=false
4. Subsequent I/Os now have dma_need_unmap()=true, requiring
iod->dma_vecs
The issue arises from the timing of allocation versus access:
- nvme_pci_setup_data_prp() allocates iod->dma_vecs only when both
(!dma_use_iova() && dma_need_unmap()) conditions are met
- nvme_pci_prp_iter_next() assumes iod->dma_vecs is valid whenever
the same conditions are true, without NULL checking
- This creates a race where the device's DMA requirements change
dynamically after the initial allocation decision, leading to NULL
pointer access
Unable to handle kernel NULL pointer dereference at
virtual address 0000000000000000
pc : nvme_pci_prp_iter_next+0xe4/0x128 [nvme]
Call trace:
nvme_pci_prp_iter_next+0xe4/0x128 [nvme]
nvme_prep_rq+0x5f4/0xa6c [nvme]
nvme_queue_rqs+0xa8/0x18c [nvme]
blk_mq_dispatch_queue_requests.constprop.0+0x108/0x120
blk_mq_flush_plug_list+0x8c/0x174
__blk_flush_plug+0xe4/0x140
blk_finish_plug+0x38/0x4c
read_pages+0x184/0x288
page_cache_ra_order+0x1e0/0x3a4
filemap_fault+0x518/0xa90
__do_fault+0x3c/0x22c
__handle_mm_fault+0x10ec/0x19b8
handle_mm_fault+0xb4/0x294
Fix this by:
1. Initialize iod->dma_vecs to NULL in nvme_prep_rq()
2. Add NULL pointer check before accessing iod->dma_vecs in
nvme_pci_prp_iter_next()
3. Set iod->dma_vecs to NULL after freeing for defensive programming
Fixes: b8b7570a7ec8 ("nvme-pci: fix dma unmapping when using PRPs and not using the IOVA mapping")
Co-developed-by: Nitin Rawat <nitin.rawat@oss.qualcomm.com>
Signed-off-by: Nitin Rawat <nitin.rawat@oss.qualcomm.com>
Signed-off-by: Pradeep P V K <pradeep.pragallapati@oss.qualcomm.com>
---
drivers/nvme/host/pci.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 2a52cf46d960..e235654e7ee0 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -720,6 +720,7 @@ static void nvme_free_prps(struct request *req, unsigned int attrs)
dma_unmap_phys(nvmeq->dev->dev, iod->dma_vecs[i].addr,
iod->dma_vecs[i].len, rq_dma_dir(req), attrs);
mempool_free(iod->dma_vecs, nvmeq->dev->dmavec_mempool);
+ iod->dma_vecs = NULL;
}
static void nvme_free_sgls(struct request *req, struct nvme_sgl_desc *sge,
@@ -825,7 +826,7 @@ static bool nvme_pci_prp_iter_next(struct request *req, struct device *dma_dev,
return true;
if (!blk_rq_dma_map_iter_next(req, dma_dev, iter))
return false;
- if (!dma_use_iova(&iod->dma_state) && dma_need_unmap(dma_dev)) {
+ if (iod->dma_vecs && !dma_use_iova(&iod->dma_state) && dma_need_unmap(dma_dev)) {
iod->dma_vecs[iod->nr_dma_vecs].addr = iter->addr;
iod->dma_vecs[iod->nr_dma_vecs].len = iter->len;
iod->nr_dma_vecs++;
@@ -1218,6 +1219,7 @@ static blk_status_t nvme_prep_rq(struct request *req)
iod->nr_descriptors = 0;
iod->total_len = 0;
iod->meta_total_len = 0;
+ iod->dma_vecs = NULL;
ret = nvme_setup_cmd(req->q->queuedata, req);
if (ret)
--
2.34.1
|
On Mon, Feb 02, 2026 at 06:27:38PM +0530, Pradeep P V K wrote:
So the return of dma_need_unmap() may change after any call to
dma_map_*? Does it only go from false -> true, and never back to false?
Since we didn't allocate the dma_vecs here, doesn't that mean the
completion side is leaking the mapping?
|
{
"author": "Keith Busch <kbusch@kernel.org>",
"date": "Mon, 2 Feb 2026 10:18:12 -0700",
"thread_id": "aYDcVDVFTWrBwzw_@kbusch-mbp.mbox.gz"
}
|
lkml
|
[PATCH V1] nvme-pci: Fix NULL pointer dereference in nvme_pci_prp_iter_next
|
Fix a NULL pointer dereference that occurs in nvme_pci_prp_iter_next()
when SWIOTLB bounce buffering becomes active during runtime.
The issue occurs when SWIOTLB activation changes the device's DMA
mapping requirements at runtime, creating a mismatch between
iod->dma_vecs allocation and access logic.
The problem manifests when:
1. Device initially operates with dma_skip_sync=true
(coherent DMA assumed)
2. First SWIOTLB mapping occurs due to DMA address limitations,
memory encryption, or IOMMU bounce buffering requirements
3. SWIOTLB calls dma_reset_need_sync(), permanently setting
dma_skip_sync=false
4. Subsequent I/Os now have dma_need_unmap()=true, requiring
iod->dma_vecs
The issue arises from the timing of allocation versus access:
- nvme_pci_setup_data_prp() allocates iod->dma_vecs only when both
(!dma_use_iova() && dma_need_unmap()) conditions are met
- nvme_pci_prp_iter_next() assumes iod->dma_vecs is valid whenever
the same conditions are true, without NULL checking
- This creates a race where the device's DMA requirements change
dynamically after the initial allocation decision, leading to NULL
pointer access
Unable to handle kernel NULL pointer dereference at
virtual address 0000000000000000
pc : nvme_pci_prp_iter_next+0xe4/0x128 [nvme]
Call trace:
nvme_pci_prp_iter_next+0xe4/0x128 [nvme]
nvme_prep_rq+0x5f4/0xa6c [nvme]
nvme_queue_rqs+0xa8/0x18c [nvme]
blk_mq_dispatch_queue_requests.constprop.0+0x108/0x120
blk_mq_flush_plug_list+0x8c/0x174
__blk_flush_plug+0xe4/0x140
blk_finish_plug+0x38/0x4c
read_pages+0x184/0x288
page_cache_ra_order+0x1e0/0x3a4
filemap_fault+0x518/0xa90
__do_fault+0x3c/0x22c
__handle_mm_fault+0x10ec/0x19b8
handle_mm_fault+0xb4/0x294
Fix this by:
1. Initialize iod->dma_vecs to NULL in nvme_prep_rq()
2. Add NULL pointer check before accessing iod->dma_vecs in
nvme_pci_prp_iter_next()
3. Set iod->dma_vecs to NULL after freeing for defensive programming
Fixes: b8b7570a7ec8 ("nvme-pci: fix dma unmapping when using PRPs and not using the IOVA mapping")
Co-developed-by: Nitin Rawat <nitin.rawat@oss.qualcomm.com>
Signed-off-by: Nitin Rawat <nitin.rawat@oss.qualcomm.com>
Signed-off-by: Pradeep P V K <pradeep.pragallapati@oss.qualcomm.com>
---
drivers/nvme/host/pci.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 2a52cf46d960..e235654e7ee0 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -720,6 +720,7 @@ static void nvme_free_prps(struct request *req, unsigned int attrs)
dma_unmap_phys(nvmeq->dev->dev, iod->dma_vecs[i].addr,
iod->dma_vecs[i].len, rq_dma_dir(req), attrs);
mempool_free(iod->dma_vecs, nvmeq->dev->dmavec_mempool);
+ iod->dma_vecs = NULL;
}
static void nvme_free_sgls(struct request *req, struct nvme_sgl_desc *sge,
@@ -825,7 +826,7 @@ static bool nvme_pci_prp_iter_next(struct request *req, struct device *dma_dev,
return true;
if (!blk_rq_dma_map_iter_next(req, dma_dev, iter))
return false;
- if (!dma_use_iova(&iod->dma_state) && dma_need_unmap(dma_dev)) {
+ if (iod->dma_vecs && !dma_use_iova(&iod->dma_state) && dma_need_unmap(dma_dev)) {
iod->dma_vecs[iod->nr_dma_vecs].addr = iter->addr;
iod->dma_vecs[iod->nr_dma_vecs].len = iter->len;
iod->nr_dma_vecs++;
@@ -1218,6 +1219,7 @@ static blk_status_t nvme_prep_rq(struct request *req)
iod->nr_descriptors = 0;
iod->total_len = 0;
iod->meta_total_len = 0;
+ iod->dma_vecs = NULL;
ret = nvme_setup_cmd(req->q->queuedata, req);
if (ret)
--
2.34.1
|
On Mon, Feb 02, 2026 at 10:13:24AM -0700, Keith Busch wrote:
Yeah. It's more like if this returns true, all future calls, plus
the previous one (which might have caused this). For that something
like the patch below should work in nvme. Totally untested as I'm
about to head away from the desk and prepare dinner.
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 2a52cf46d960..f944b747e1bd 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -816,6 +816,22 @@ static void nvme_unmap_data(struct request *req)
nvme_free_descriptors(req);
}
+static bool nvme_pci_alloc_dma_vecs(struct request *req,
+ struct blk_dma_iter *iter)
+{
+ struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+ struct nvme_queue *nvmeq = req->mq_hctx->driver_data;
+
+ iod->dma_vecs = mempool_alloc(nvmeq->dev->dmavec_mempool,
+ GFP_ATOMIC);
+ if (!iod->dma_vecs)
+ return false;
+ iod->dma_vecs[0].addr = iter->addr;
+ iod->dma_vecs[0].len = iter->len;
+ iod->nr_dma_vecs = 1;
+ return true;
+}
+
static bool nvme_pci_prp_iter_next(struct request *req, struct device *dma_dev,
struct blk_dma_iter *iter)
{
@@ -826,6 +842,8 @@ static bool nvme_pci_prp_iter_next(struct request *req, struct device *dma_dev,
if (!blk_rq_dma_map_iter_next(req, dma_dev, iter))
return false;
if (!dma_use_iova(&iod->dma_state) && dma_need_unmap(dma_dev)) {
+ if (!iod->nr_dma_vecs && !nvme_pci_alloc_dma_vecs(req, iter))
+ return false;
iod->dma_vecs[iod->nr_dma_vecs].addr = iter->addr;
iod->dma_vecs[iod->nr_dma_vecs].len = iter->len;
iod->nr_dma_vecs++;
@@ -844,13 +862,8 @@ static blk_status_t nvme_pci_setup_data_prp(struct request *req,
__le64 *prp_list;
if (!dma_use_iova(&iod->dma_state) && dma_need_unmap(nvmeq->dev->dev)) {
- iod->dma_vecs = mempool_alloc(nvmeq->dev->dmavec_mempool,
- GFP_ATOMIC);
- if (!iod->dma_vecs)
+ if (!nvme_pci_alloc_dma_vecs(req, iter))
return BLK_STS_RESOURCE;
- iod->dma_vecs[0].addr = iter->addr;
- iod->dma_vecs[0].len = iter->len;
- iod->nr_dma_vecs = 1;
}
/*
|
{
"author": "Christoph Hellwig <hch@lst.de>",
"date": "Mon, 2 Feb 2026 18:36:24 +0100",
"thread_id": "aYDcVDVFTWrBwzw_@kbusch-mbp.mbox.gz"
}
|
lkml
|
[PATCH V1] nvme-pci: Fix NULL pointer dereference in nvme_pci_prp_iter_next
|
Fix a NULL pointer dereference that occurs in nvme_pci_prp_iter_next()
when SWIOTLB bounce buffering becomes active during runtime.
The issue occurs when SWIOTLB activation changes the device's DMA
mapping requirements at runtime, creating a mismatch between
iod->dma_vecs allocation and access logic.
The problem manifests when:
1. Device initially operates with dma_skip_sync=true
(coherent DMA assumed)
2. First SWIOTLB mapping occurs due to DMA address limitations,
memory encryption, or IOMMU bounce buffering requirements
3. SWIOTLB calls dma_reset_need_sync(), permanently setting
dma_skip_sync=false
4. Subsequent I/Os now have dma_need_unmap()=true, requiring
iod->dma_vecs
The issue arises from the timing of allocation versus access:
- nvme_pci_setup_data_prp() allocates iod->dma_vecs only when both
(!dma_use_iova() && dma_need_unmap()) conditions are met
- nvme_pci_prp_iter_next() assumes iod->dma_vecs is valid whenever
the same conditions are true, without NULL checking
- This creates a race where the device's DMA requirements change
dynamically after the initial allocation decision, leading to NULL
pointer access
Unable to handle kernel NULL pointer dereference at
virtual address 0000000000000000
pc : nvme_pci_prp_iter_next+0xe4/0x128 [nvme]
Call trace:
nvme_pci_prp_iter_next+0xe4/0x128 [nvme]
nvme_prep_rq+0x5f4/0xa6c [nvme]
nvme_queue_rqs+0xa8/0x18c [nvme]
blk_mq_dispatch_queue_requests.constprop.0+0x108/0x120
blk_mq_flush_plug_list+0x8c/0x174
__blk_flush_plug+0xe4/0x140
blk_finish_plug+0x38/0x4c
read_pages+0x184/0x288
page_cache_ra_order+0x1e0/0x3a4
filemap_fault+0x518/0xa90
__do_fault+0x3c/0x22c
__handle_mm_fault+0x10ec/0x19b8
handle_mm_fault+0xb4/0x294
Fix this by:
1. Initialize iod->dma_vecs to NULL in nvme_prep_rq()
2. Add NULL pointer check before accessing iod->dma_vecs in
nvme_pci_prp_iter_next()
3. Set iod->dma_vecs to NULL after freeing for defensive programming
Fixes: b8b7570a7ec8 ("nvme-pci: fix dma unmapping when using PRPs and not using the IOVA mapping")
Co-developed-by: Nitin Rawat <nitin.rawat@oss.qualcomm.com>
Signed-off-by: Nitin Rawat <nitin.rawat@oss.qualcomm.com>
Signed-off-by: Pradeep P V K <pradeep.pragallapati@oss.qualcomm.com>
---
drivers/nvme/host/pci.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 2a52cf46d960..e235654e7ee0 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -720,6 +720,7 @@ static void nvme_free_prps(struct request *req, unsigned int attrs)
dma_unmap_phys(nvmeq->dev->dev, iod->dma_vecs[i].addr,
iod->dma_vecs[i].len, rq_dma_dir(req), attrs);
mempool_free(iod->dma_vecs, nvmeq->dev->dmavec_mempool);
+ iod->dma_vecs = NULL;
}
static void nvme_free_sgls(struct request *req, struct nvme_sgl_desc *sge,
@@ -825,7 +826,7 @@ static bool nvme_pci_prp_iter_next(struct request *req, struct device *dma_dev,
return true;
if (!blk_rq_dma_map_iter_next(req, dma_dev, iter))
return false;
- if (!dma_use_iova(&iod->dma_state) && dma_need_unmap(dma_dev)) {
+ if (iod->dma_vecs && !dma_use_iova(&iod->dma_state) && dma_need_unmap(dma_dev)) {
iod->dma_vecs[iod->nr_dma_vecs].addr = iter->addr;
iod->dma_vecs[iod->nr_dma_vecs].len = iter->len;
iod->nr_dma_vecs++;
@@ -1218,6 +1219,7 @@ static blk_status_t nvme_prep_rq(struct request *req)
iod->nr_descriptors = 0;
iod->total_len = 0;
iod->meta_total_len = 0;
+ iod->dma_vecs = NULL;
ret = nvme_setup_cmd(req->q->queuedata, req);
if (ret)
--
2.34.1
|
On 2026-02-02 5:13 pm, Keith Busch wrote:
But equally, the fact that none of the mappings made so far happened to
need bouncing still doesn't mean that future ones won't. This is not
guaranteed to be a static property of the device, but nor is it really a
property of the *device* at all; it's a property of a set of one or more
DMA mappings with the same lifetime, there's just no suitable generic
notion of that temporal context in the DMA API to carry around and pass
as an explicit argument, so it's left implicit in the usage model.
Whatever higher-level thing it's doing, the driver must have some
context, so within "operation A" it makes some DMA mappings, checks
dma_need_unmap() and sees it's false, so can conclude that "operation A"
does not need to preserve DMA unmap state. However it may then start
"operation B", do some more mappings, check dma_need_unmap() and see
it's now returned true, so "operation B" *does* need to keep the DMA
data and explicitly unmap it when it finishes.
This is essentially the point I made at the time about it not
necessarily being as useful a thing as it seems, since if an "operation"
involves multiple mappings, it must still store the full state of those
mappings for at least long enough to finish them all and then call
dma_need_unmap(), to only then see if it might be OK to throw that state
away again.
Thanks,
Robin.
|
{
"author": "Robin Murphy <robin.murphy@arm.com>",
"date": "Mon, 2 Feb 2026 17:39:14 +0000",
"thread_id": "aYDcVDVFTWrBwzw_@kbusch-mbp.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
The Qualcomm automotive SA8255p SoC relies on firmware to configure
platform resources, including clocks, interconnects and TLMM.
The driver requests resource operations over SCMI using the power
and performance protocols.
The SCMI power protocol enables or disables resources like clocks,
interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs,
such as resume/suspend, to control power states (on/off).
The SCMI performance protocol manages the I2C frequency, with each
frequency rate represented by a performance level. The driver uses
the geni_se_set_perf_opp() API to request the desired frequency rate.
As part of geni_se_set_perf_opp(), the OPP for the requested frequency
is obtained using dev_pm_opp_find_freq_floor() and the performance
level is set using dev_pm_opp_set_opp().
Praveen Talari (12):
soc: qcom: geni-se: Refactor geni_icc_get() and make qup-memory ICC
path optional
soc: qcom: geni-se: Add geni_icc_set_bw_ab() function
soc: qcom: geni-se: Introduce helper API for resource initialization
soc: qcom: geni-se: Add resources activation/deactivation helpers
soc: qcom: geni-se: Introduce helper API for attaching power domains
soc: qcom: geni-se: Introduce helper APIs for performance control
dt-bindings: i2c: Describe SA8255p
i2c: qcom-geni: Isolate serial engine setup
i2c: qcom-geni: Move resource initialization to separate function
i2c: qcom-geni: Use resources helper APIs in runtime PM functions
i2c: qcom-geni: Store of_device_id data in driver private struct
i2c: qcom-geni: Enable I2C on SA8255p Qualcomm platforms
.../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 ++++
drivers/i2c/busses/i2c-qcom-geni.c | 300 +++++++++---------
drivers/soc/qcom/qcom-geni-se.c | 263 ++++++++++++++-
include/linux/soc/qcom/geni-se.h | 19 ++
4 files changed, 475 insertions(+), 171 deletions(-)
create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml
base-commit: f417b7ffcbef7d76b0d8860518f50dae0e7e5eda
--
2.34.1
|
The "qup-memory" interconnect path is optional and may not be defined
in all device trees. Unroll the loop-based ICC path initialization to
allow specific error handling for each path type.
The "qup-core" and "qup-config" paths remain mandatory and will fail
probe if missing, while "qup-memory" is now handled as optional and
skipped when not present in the device tree.
Co-developed-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
Signed-off-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v1->v2:
Bjorn:
- Updated commit text.
- Used a local variable for better readability.
---
drivers/soc/qcom/qcom-geni-se.c | 36 +++++++++++++++++----------------
1 file changed, 19 insertions(+), 17 deletions(-)
diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
index cd1779b6a91a..b6167b968ef6 100644
--- a/drivers/soc/qcom/qcom-geni-se.c
+++ b/drivers/soc/qcom/qcom-geni-se.c
@@ -899,30 +899,32 @@ EXPORT_SYMBOL_GPL(geni_se_rx_dma_unprep);
int geni_icc_get(struct geni_se *se, const char *icc_ddr)
{
- int i, err;
- const char *icc_names[] = {"qup-core", "qup-config", icc_ddr};
+ struct geni_icc_path *icc_paths = se->icc_paths;
if (has_acpi_companion(se->dev))
return 0;
- for (i = 0; i < ARRAY_SIZE(se->icc_paths); i++) {
- if (!icc_names[i])
- continue;
-
- se->icc_paths[i].path = devm_of_icc_get(se->dev, icc_names[i]);
- if (IS_ERR(se->icc_paths[i].path))
- goto err;
+ icc_paths[GENI_TO_CORE].path = devm_of_icc_get(se->dev, "qup-core");
+ if (IS_ERR(icc_paths[GENI_TO_CORE].path))
+ return dev_err_probe(se->dev, PTR_ERR(icc_paths[GENI_TO_CORE].path),
+ "Failed to get 'qup-core' ICC path\n");
+
+ icc_paths[CPU_TO_GENI].path = devm_of_icc_get(se->dev, "qup-config");
+ if (IS_ERR(icc_paths[CPU_TO_GENI].path))
+ return dev_err_probe(se->dev, PTR_ERR(icc_paths[CPU_TO_GENI].path),
+ "Failed to get 'qup-config' ICC path\n");
+
+ /* The DDR path is optional, depending on protocol and hw capabilities */
+ icc_paths[GENI_TO_DDR].path = devm_of_icc_get(se->dev, "qup-memory");
+ if (IS_ERR(icc_paths[GENI_TO_DDR].path)) {
+ if (PTR_ERR(icc_paths[GENI_TO_DDR].path) == -ENODATA)
+ icc_paths[GENI_TO_DDR].path = NULL;
+ else
+ return dev_err_probe(se->dev, PTR_ERR(icc_paths[GENI_TO_DDR].path),
+ "Failed to get 'qup-memory' ICC path\n");
}
return 0;
-
-err:
- err = PTR_ERR(se->icc_paths[i].path);
- if (err != -EPROBE_DEFER)
- dev_err_ratelimited(se->dev, "Failed to get ICC path '%s': %d\n",
- icc_names[i], err);
- return err;
-
}
EXPORT_SYMBOL_GPL(geni_icc_get);
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 12 Jan 2026 16:17:11 +0530",
"thread_id": "61ef66ac-3919-48e3-a78e-eef54001ae6f@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
The Qualcomm automotive SA8255p SoC relies on firmware to configure
platform resources, including clocks, interconnects and TLMM.
The driver requests resource operations over SCMI using the power
and performance protocols.
The SCMI power protocol enables or disables resources like clocks,
interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs,
such as resume/suspend, to control power states (on/off).
The SCMI performance protocol manages the I2C frequency, with each
frequency rate represented by a performance level. The driver uses
the geni_se_set_perf_opp() API to request the desired frequency rate.
As part of geni_se_set_perf_opp(), the OPP for the requested frequency
is obtained using dev_pm_opp_find_freq_floor() and the performance
level is set using dev_pm_opp_set_opp().
Praveen Talari (12):
soc: qcom: geni-se: Refactor geni_icc_get() and make qup-memory ICC
path optional
soc: qcom: geni-se: Add geni_icc_set_bw_ab() function
soc: qcom: geni-se: Introduce helper API for resource initialization
soc: qcom: geni-se: Add resources activation/deactivation helpers
soc: qcom: geni-se: Introduce helper API for attaching power domains
soc: qcom: geni-se: Introduce helper APIs for performance control
dt-bindings: i2c: Describe SA8255p
i2c: qcom-geni: Isolate serial engine setup
i2c: qcom-geni: Move resource initialization to separate function
i2c: qcom-geni: Use resources helper APIs in runtime PM functions
i2c: qcom-geni: Store of_device_id data in driver private struct
i2c: qcom-geni: Enable I2C on SA8255p Qualcomm platforms
.../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 ++++
drivers/i2c/busses/i2c-qcom-geni.c | 300 +++++++++---------
drivers/soc/qcom/qcom-geni-se.c | 263 ++++++++++++++-
include/linux/soc/qcom/geni-se.h | 19 ++
4 files changed, 475 insertions(+), 171 deletions(-)
create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml
base-commit: f417b7ffcbef7d76b0d8860518f50dae0e7e5eda
--
2.34.1
|
Add a new function geni_icc_set_bw_ab() that allows callers to set
average bandwidth values for all ICC (Interconnect) paths in a single
call. This function takes separate parameters for core, config, and DDR
average bandwidth values and applies them to the respective ICC paths.
This provides a more convenient API for drivers that need to configure
specific average bandwidth values.
Co-developed-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
Signed-off-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
drivers/soc/qcom/qcom-geni-se.c | 22 ++++++++++++++++++++++
include/linux/soc/qcom/geni-se.h | 1 +
2 files changed, 23 insertions(+)
diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
index b6167b968ef6..b0542f836453 100644
--- a/drivers/soc/qcom/qcom-geni-se.c
+++ b/drivers/soc/qcom/qcom-geni-se.c
@@ -946,6 +946,28 @@ int geni_icc_set_bw(struct geni_se *se)
}
EXPORT_SYMBOL_GPL(geni_icc_set_bw);
+/**
+ * geni_icc_set_bw_ab() - Set average bandwidth for all ICC paths and apply
+ * @se: Pointer to the concerned serial engine.
+ * @core_ab: Average bandwidth in kBps for GENI_TO_CORE path.
+ * @cfg_ab: Average bandwidth in kBps for CPU_TO_GENI path.
+ * @ddr_ab: Average bandwidth in kBps for GENI_TO_DDR path.
+ *
+ * Sets bandwidth values for all ICC paths and applies them. DDR path is
+ * optional and only set if it exists.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int geni_icc_set_bw_ab(struct geni_se *se, u32 core_ab, u32 cfg_ab, u32 ddr_ab)
+{
+ se->icc_paths[GENI_TO_CORE].avg_bw = core_ab;
+ se->icc_paths[CPU_TO_GENI].avg_bw = cfg_ab;
+ se->icc_paths[GENI_TO_DDR].avg_bw = ddr_ab;
+
+ return geni_icc_set_bw(se);
+}
+EXPORT_SYMBOL_GPL(geni_icc_set_bw_ab);
+
void geni_icc_set_tag(struct geni_se *se, u32 tag)
{
int i;
diff --git a/include/linux/soc/qcom/geni-se.h b/include/linux/soc/qcom/geni-se.h
index 0a984e2579fe..980aabea2157 100644
--- a/include/linux/soc/qcom/geni-se.h
+++ b/include/linux/soc/qcom/geni-se.h
@@ -528,6 +528,7 @@ void geni_se_rx_dma_unprep(struct geni_se *se, dma_addr_t iova, size_t len);
int geni_icc_get(struct geni_se *se, const char *icc_ddr);
int geni_icc_set_bw(struct geni_se *se);
+int geni_icc_set_bw_ab(struct geni_se *se, u32 core_ab, u32 cfg_ab, u32 ddr_ab);
void geni_icc_set_tag(struct geni_se *se, u32 tag);
int geni_icc_enable(struct geni_se *se);
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 12 Jan 2026 16:17:12 +0530",
"thread_id": "61ef66ac-3919-48e3-a78e-eef54001ae6f@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
The Qualcomm automotive SA8255p SoC relies on firmware to configure
platform resources, including clocks, interconnects and TLMM.
The driver requests resource operations over SCMI using the power
and performance protocols.
The SCMI power protocol enables or disables resources like clocks,
interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs,
such as resume/suspend, to control power states (on/off).
The SCMI performance protocol manages the I2C frequency, with each
frequency rate represented by a performance level. The driver uses
the geni_se_set_perf_opp() API to request the desired frequency rate.
As part of geni_se_set_perf_opp(), the OPP for the requested frequency
is obtained using dev_pm_opp_find_freq_floor() and the performance
level is set using dev_pm_opp_set_opp().
Praveen Talari (12):
soc: qcom: geni-se: Refactor geni_icc_get() and make qup-memory ICC
path optional
soc: qcom: geni-se: Add geni_icc_set_bw_ab() function
soc: qcom: geni-se: Introduce helper API for resource initialization
soc: qcom: geni-se: Add resources activation/deactivation helpers
soc: qcom: geni-se: Introduce helper API for attaching power domains
soc: qcom: geni-se: Introduce helper APIs for performance control
dt-bindings: i2c: Describe SA8255p
i2c: qcom-geni: Isolate serial engine setup
i2c: qcom-geni: Move resource initialization to separate function
i2c: qcom-geni: Use resources helper APIs in runtime PM functions
i2c: qcom-geni: Store of_device_id data in driver private struct
i2c: qcom-geni: Enable I2C on SA8255p Qualcomm platforms
.../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 ++++
drivers/i2c/busses/i2c-qcom-geni.c | 300 +++++++++---------
drivers/soc/qcom/qcom-geni-se.c | 263 ++++++++++++++-
include/linux/soc/qcom/geni-se.h | 19 ++
4 files changed, 475 insertions(+), 171 deletions(-)
create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml
base-commit: f417b7ffcbef7d76b0d8860518f50dae0e7e5eda
--
2.34.1
|
The GENI Serial Engine drivers (I2C, SPI, and SERIAL) currently duplicate
code for initializing shared resources such as clocks and interconnect
paths.
Introduce a new helper API, geni_se_resources_init(), to centralize this
initialization logic, improving modularity and simplifying the probe
function.
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v1 -> v2:
- Updated proper return value for devm_pm_opp_set_clkname()
---
drivers/soc/qcom/qcom-geni-se.c | 47 ++++++++++++++++++++++++++++++++
include/linux/soc/qcom/geni-se.h | 6 ++++
2 files changed, 53 insertions(+)
diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
index b0542f836453..75e722cd1a94 100644
--- a/drivers/soc/qcom/qcom-geni-se.c
+++ b/drivers/soc/qcom/qcom-geni-se.c
@@ -19,6 +19,7 @@
#include <linux/of_platform.h>
#include <linux/pinctrl/consumer.h>
#include <linux/platform_device.h>
+#include <linux/pm_opp.h>
#include <linux/soc/qcom/geni-se.h>
/**
@@ -1012,6 +1013,52 @@ int geni_icc_disable(struct geni_se *se)
}
EXPORT_SYMBOL_GPL(geni_icc_disable);
+/**
+ * geni_se_resources_init() - Initialize resources for a GENI SE device.
+ * @se: Pointer to the geni_se structure representing the GENI SE device.
+ *
+ * This function initializes various resources required by the GENI Serial Engine
+ * (SE) device, including clock resources (core and SE clocks), interconnect
+ * paths for communication.
+ * It retrieves optional and mandatory clock resources, adds an OF-based
+ * operating performance point (OPP) table, and sets up interconnect paths
+ * with default bandwidths. The function also sets a flag (`has_opp`) to
+ * indicate whether OPP support is available for the device.
+ *
+ * Return: 0 on success, or a negative errno on failure.
+ */
+int geni_se_resources_init(struct geni_se *se)
+{
+ int ret;
+
+ se->core_clk = devm_clk_get_optional(se->dev, "core");
+ if (IS_ERR(se->core_clk))
+ return dev_err_probe(se->dev, PTR_ERR(se->core_clk),
+ "Failed to get optional core clk\n");
+
+ se->clk = devm_clk_get(se->dev, "se");
+ if (IS_ERR(se->clk) && !has_acpi_companion(se->dev))
+ return dev_err_probe(se->dev, PTR_ERR(se->clk),
+ "Failed to get SE clk\n");
+
+ ret = devm_pm_opp_set_clkname(se->dev, "se");
+ if (ret)
+ return ret;
+
+ ret = devm_pm_opp_of_add_table(se->dev);
+ if (ret && ret != -ENODEV)
+ return dev_err_probe(se->dev, ret, "Failed to add OPP table\n");
+
+ se->has_opp = (ret == 0);
+
+ ret = geni_icc_get(se, "qup-memory");
+ if (ret)
+ return ret;
+
+ return geni_icc_set_bw_ab(se, GENI_DEFAULT_BW, GENI_DEFAULT_BW, GENI_DEFAULT_BW);
+}
+EXPORT_SYMBOL_GPL(geni_se_resources_init);
+
/**
* geni_find_protocol_fw() - Locate and validate SE firmware for a protocol.
* @dev: Pointer to the device structure.
diff --git a/include/linux/soc/qcom/geni-se.h b/include/linux/soc/qcom/geni-se.h
index 980aabea2157..c182dd0f0bde 100644
--- a/include/linux/soc/qcom/geni-se.h
+++ b/include/linux/soc/qcom/geni-se.h
@@ -60,18 +60,22 @@ struct geni_icc_path {
* @dev: Pointer to the Serial Engine device
* @wrapper: Pointer to the parent QUP Wrapper core
* @clk: Handle to the core serial engine clock
+ * @core_clk: Auxiliary clock, which may be required by a protocol
* @num_clk_levels: Number of valid clock levels in clk_perf_tbl
* @clk_perf_tbl: Table of clock frequency input to serial engine clock
* @icc_paths: Array of ICC paths for SE
+ * @has_opp: Indicates if OPP is supported
*/
struct geni_se {
void __iomem *base;
struct device *dev;
struct geni_wrapper *wrapper;
struct clk *clk;
+ struct clk *core_clk;
unsigned int num_clk_levels;
unsigned long *clk_perf_tbl;
struct geni_icc_path icc_paths[3];
+ bool has_opp;
};
/* Common SE registers */
@@ -535,6 +539,8 @@ int geni_icc_enable(struct geni_se *se);
int geni_icc_disable(struct geni_se *se);
+int geni_se_resources_init(struct geni_se *se);
+
int geni_load_se_firmware(struct geni_se *se, enum geni_se_protocol_type protocol);
#endif
#endif
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 12 Jan 2026 16:17:13 +0530",
"thread_id": "61ef66ac-3919-48e3-a78e-eef54001ae6f@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
The Qualcomm automotive SA8255p SoC relies on firmware to configure
platform resources, including clocks, interconnects and TLMM.
The driver requests resource operations over SCMI using the power
and performance protocols.
The SCMI power protocol enables or disables resources like clocks,
interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs,
such as resume/suspend, to control power states (on/off).
The SCMI performance protocol manages the I2C frequency, with each
frequency rate represented by a performance level. The driver uses
the geni_se_set_perf_opp() API to request the desired frequency rate.
As part of geni_se_set_perf_opp(), the OPP for the requested frequency
is obtained using dev_pm_opp_find_freq_floor() and the performance
level is set using dev_pm_opp_set_opp().
Praveen Talari (12):
soc: qcom: geni-se: Refactor geni_icc_get() and make qup-memory ICC
path optional
soc: qcom: geni-se: Add geni_icc_set_bw_ab() function
soc: qcom: geni-se: Introduce helper API for resource initialization
soc: qcom: geni-se: Add resources activation/deactivation helpers
soc: qcom: geni-se: Introduce helper API for attaching power domains
soc: qcom: geni-se: Introduce helper APIs for performance control
dt-bindings: i2c: Describe SA8255p
i2c: qcom-geni: Isolate serial engine setup
i2c: qcom-geni: Move resource initialization to separate function
i2c: qcom-geni: Use resources helper APIs in runtime PM functions
i2c: qcom-geni: Store of_device_id data in driver private struct
i2c: qcom-geni: Enable I2C on SA8255p Qualcomm platforms
.../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 ++++
drivers/i2c/busses/i2c-qcom-geni.c | 300 +++++++++---------
drivers/soc/qcom/qcom-geni-se.c | 263 ++++++++++++++-
include/linux/soc/qcom/geni-se.h | 19 ++
4 files changed, 475 insertions(+), 171 deletions(-)
create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml
base-commit: f417b7ffcbef7d76b0d8860518f50dae0e7e5eda
--
2.34.1
|
The GENI SE protocol drivers (I2C, SPI, UART) implement similar resource
activation/deactivation sequences independently, leading to code
duplication.
Introduce geni_se_resources_activate()/geni_se_resources_deactivate() to
power resources on and off. The activate function enables the ICC paths,
clocks and TLMM, whereas the deactivate function disables the resources
in reverse order, including resetting the OPP rate and disabling the
clocks, ICC paths and TLMM.
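As a rough sketch (not part of this patch; the geni_xyz_* names and the
private struct below are hypothetical), a protocol driver's runtime PM
callbacks could then reduce to calls into the two helpers:

```c
/* Hypothetical consumer: a driver-private struct embedding struct geni_se. */
struct geni_xyz_dev {
	struct geni_se se;
};

static int geni_xyz_runtime_suspend(struct device *dev)
{
	struct geni_xyz_dev *xyz = dev_get_drvdata(dev);

	/* OPP rate to 0, pinctrl sleep state, clocks off, ICC disable */
	return geni_se_resources_deactivate(&xyz->se);
}

static int geni_xyz_runtime_resume(struct device *dev)
{
	struct geni_xyz_dev *xyz = dev_get_drvdata(dev);

	/* ICC enable, clocks on, pinctrl default state */
	return geni_se_resources_activate(&xyz->se);
}
```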
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v2 -> v3
- Added export symbol for new APIs.
v1 -> v2
Bjorn
- Updated commit message based on code changes.
- Removed geni_se_resource_state() API.
- Utilized code snippet from geni_se_resources_off()
---
drivers/soc/qcom/qcom-geni-se.c | 79 ++++++++++++++++++++++++++++++++
include/linux/soc/qcom/geni-se.h | 4 ++
2 files changed, 83 insertions(+)
diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
index 75e722cd1a94..3341bc98df09 100644
--- a/drivers/soc/qcom/qcom-geni-se.c
+++ b/drivers/soc/qcom/qcom-geni-se.c
@@ -1013,6 +1013,85 @@ int geni_icc_disable(struct geni_se *se)
}
EXPORT_SYMBOL_GPL(geni_icc_disable);
+/**
+ * geni_se_resources_deactivate() - Deactivate GENI SE device resources
+ * @se: Pointer to the geni_se structure
+ *
+ * Deactivates device resources for power saving: OPP rate to 0, pin control
+ * to sleep state, turns off clocks, and disables interconnect. Skips ACPI devices.
+ *
+ * Return: 0 on success, negative error code on failure
+ */
+int geni_se_resources_deactivate(struct geni_se *se)
+{
+ int ret;
+
+ if (has_acpi_companion(se->dev))
+ return 0;
+
+ if (se->has_opp)
+ dev_pm_opp_set_rate(se->dev, 0);
+
+ ret = pinctrl_pm_select_sleep_state(se->dev);
+ if (ret)
+ return ret;
+
+ geni_se_clks_off(se);
+
+ if (se->core_clk)
+ clk_disable_unprepare(se->core_clk);
+
+ return geni_icc_disable(se);
+}
+EXPORT_SYMBOL_GPL(geni_se_resources_deactivate);
+
+/**
+ * geni_se_resources_activate() - Activate GENI SE device resources
+ * @se: Pointer to the geni_se structure
+ *
+ * Activates device resources for operation: enables interconnect, prepares clocks,
+ * and sets pin control to default state. Includes error cleanup. Skips ACPI devices.
+ *
+ * Return: 0 on success, negative error code on failure
+ */
+int geni_se_resources_activate(struct geni_se *se)
+{
+ int ret;
+
+ if (has_acpi_companion(se->dev))
+ return 0;
+
+ ret = geni_icc_enable(se);
+ if (ret)
+ return ret;
+
+ if (se->core_clk) {
+ ret = clk_prepare_enable(se->core_clk);
+ if (ret)
+ goto out_icc_disable;
+ }
+
+ ret = geni_se_clks_on(se);
+ if (ret)
+ goto out_clk_disable;
+
+ ret = pinctrl_pm_select_default_state(se->dev);
+ if (ret) {
+ geni_se_clks_off(se);
+ goto out_clk_disable;
+ }
+
+ return ret;
+
+out_clk_disable:
+ if (se->core_clk)
+ clk_disable_unprepare(se->core_clk);
+out_icc_disable:
+ geni_icc_disable(se);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(geni_se_resources_activate);
+
/**
* geni_se_resources_init() - Initialize resources for a GENI SE device.
* @se: Pointer to the geni_se structure representing the GENI SE device.
diff --git a/include/linux/soc/qcom/geni-se.h b/include/linux/soc/qcom/geni-se.h
index c182dd0f0bde..36a68149345c 100644
--- a/include/linux/soc/qcom/geni-se.h
+++ b/include/linux/soc/qcom/geni-se.h
@@ -541,6 +541,10 @@ int geni_icc_disable(struct geni_se *se);
int geni_se_resources_init(struct geni_se *se);
+int geni_se_resources_activate(struct geni_se *se);
+
+int geni_se_resources_deactivate(struct geni_se *se);
+
int geni_load_se_firmware(struct geni_se *se, enum geni_se_protocol_type protocol);
#endif
#endif
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 12 Jan 2026 16:17:14 +0530",
"thread_id": "61ef66ac-3919-48e3-a78e-eef54001ae6f@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
|
The GENI Serial Engine drivers (I2C, SPI, and UART) currently each
handle the attachment of power domains themselves, which duplicates the
same logic across their probe functions.
Introduce a new helper API, geni_se_domain_attach(), to centralize the
logic for attaching the "power" and "perf" domains to the GENI SE
device.
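A minimal, hypothetical probe-time use of the helper (the geni_xyz_*
names are illustrative, not from this series) could look like:

```c
/* Hypothetical probe fragment: attach the "power" and "perf" domains. */
static int geni_xyz_attach_domains(struct platform_device *pdev,
				   struct geni_se *se)
{
	int ret;

	ret = geni_se_domain_attach(se);
	if (ret)
		return dev_err_probe(&pdev->dev, ret,
				     "failed to attach power domains\n");

	/* se->pd_list now holds the attached "power" and "perf" domains */
	return 0;
}
```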
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
drivers/soc/qcom/qcom-geni-se.c | 29 +++++++++++++++++++++++++++++
include/linux/soc/qcom/geni-se.h | 4 ++++
2 files changed, 33 insertions(+)
diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
index 3341bc98df09..b8e5066d4881 100644
--- a/drivers/soc/qcom/qcom-geni-se.c
+++ b/drivers/soc/qcom/qcom-geni-se.c
@@ -19,6 +19,7 @@
#include <linux/of_platform.h>
#include <linux/pinctrl/consumer.h>
#include <linux/platform_device.h>
+#include <linux/pm_domain.h>
#include <linux/pm_opp.h>
#include <linux/soc/qcom/geni-se.h>
@@ -1092,6 +1093,34 @@ int geni_se_resources_activate(struct geni_se *se)
}
EXPORT_SYMBOL_GPL(geni_se_resources_activate);
+/**
+ * geni_se_domain_attach() - Attach power domains to a GENI SE device.
+ * @se: Pointer to the geni_se structure representing the GENI SE device.
+ *
+ * This function attaches the necessary power domains ("power" and "perf")
+ * to the GENI Serial Engine device. It initializes `se->pd_list` with the
+ * attached domains.
+ *
+ * Return: 0 on success, or a negative error code on failure.
+ */
+int geni_se_domain_attach(struct geni_se *se)
+{
+ struct dev_pm_domain_attach_data pd_data = {
+ .pd_flags = PD_FLAG_DEV_LINK_ON,
+ .pd_names = (const char*[]) { "power", "perf" },
+ .num_pd_names = 2,
+ };
+ int ret;
+
+ ret = dev_pm_domain_attach_list(se->dev,
+ &pd_data, &se->pd_list);
+ if (ret <= 0)
+ return -EINVAL;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(geni_se_domain_attach);
+
/**
* geni_se_resources_init() - Initialize resources for a GENI SE device.
* @se: Pointer to the geni_se structure representing the GENI SE device.
diff --git a/include/linux/soc/qcom/geni-se.h b/include/linux/soc/qcom/geni-se.h
index 36a68149345c..5f75159c5531 100644
--- a/include/linux/soc/qcom/geni-se.h
+++ b/include/linux/soc/qcom/geni-se.h
@@ -64,6 +64,7 @@ struct geni_icc_path {
* @num_clk_levels: Number of valid clock levels in clk_perf_tbl
* @clk_perf_tbl: Table of clock frequency input to serial engine clock
* @icc_paths: Array of ICC paths for SE
+ * @pd_list: Power domain list for managing power domains
* @has_opp: Indicates if OPP is supported
*/
struct geni_se {
@@ -75,6 +76,7 @@ struct geni_se {
unsigned int num_clk_levels;
unsigned long *clk_perf_tbl;
struct geni_icc_path icc_paths[3];
+ struct dev_pm_domain_list *pd_list;
bool has_opp;
};
@@ -546,5 +548,7 @@ int geni_se_resources_activate(struct geni_se *se);
int geni_se_resources_deactivate(struct geni_se *se);
int geni_load_se_firmware(struct geni_se *se, enum geni_se_protocol_type protocol);
+
+int geni_se_domain_attach(struct geni_se *se);
#endif
#endif
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 12 Jan 2026 16:17:15 +0530",
"thread_id": "61ef66ac-3919-48e3-a78e-eef54001ae6f@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
|
The GENI Serial Engine (SE) drivers (I2C, SPI, and UART) currently
manage performance levels and operating points directly. This results
in code duplication across drivers, such as configuring a specific
level or finding and applying an OPP for a given clock frequency.
Introduce two new helper APIs, geni_se_set_perf_level() and
geni_se_set_perf_opp(), which provide a streamlined way for the GENI
SE drivers to find and set the OPP for the desired performance level,
thereby eliminating the redundancy.
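As an illustrative consumer sketch (the geni_xyz_* names are
hypothetical, and the assumption that level 0 drops the performance
vote is mine, not stated in the patch):

```c
/* Hypothetical fragment: vote for a source clock frequency, then drop it. */
static int geni_xyz_set_freq(struct geni_se *se, unsigned long hz)
{
	/* Finds the floor OPP for @hz on the "perf" domain and applies it */
	return geni_se_set_perf_opp(se, hz);
}

static int geni_xyz_drop_vote(struct geni_se *se)
{
	/* Assumption: level 0 releases the performance vote */
	return geni_se_set_perf_level(se, 0);
}
```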
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
drivers/soc/qcom/qcom-geni-se.c | 50 ++++++++++++++++++++++++++++++++
include/linux/soc/qcom/geni-se.h | 4 +++
2 files changed, 54 insertions(+)
diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
index b8e5066d4881..dc5f5bb52915 100644
--- a/drivers/soc/qcom/qcom-geni-se.c
+++ b/drivers/soc/qcom/qcom-geni-se.c
@@ -282,6 +282,12 @@ struct se_fw_hdr {
#define geni_setbits32(_addr, _v) writel(readl(_addr) | (_v), _addr)
#define geni_clrbits32(_addr, _v) writel(readl(_addr) & ~(_v), _addr)
+enum domain_idx {
+ DOMAIN_IDX_POWER,
+ DOMAIN_IDX_PERF,
+ DOMAIN_IDX_MAX
+};
+
/**
* geni_se_get_qup_hw_version() - Read the QUP wrapper Hardware version
* @se: Pointer to the corresponding serial engine.
@@ -1093,6 +1099,50 @@ int geni_se_resources_activate(struct geni_se *se)
}
EXPORT_SYMBOL_GPL(geni_se_resources_activate);
+/**
+ * geni_se_set_perf_level() - Set performance level for GENI SE.
+ * @se: Pointer to the struct geni_se instance.
+ * @level: The desired performance level.
+ *
+ * Sets the performance level by directly calling dev_pm_opp_set_level
+ * on the performance device associated with the SE.
+ *
+ * Return: 0 on success, or a negative error code on failure.
+ */
+int geni_se_set_perf_level(struct geni_se *se, unsigned long level)
+{
+ return dev_pm_opp_set_level(se->pd_list->pd_devs[DOMAIN_IDX_PERF], level);
+}
+EXPORT_SYMBOL_GPL(geni_se_set_perf_level);
+
+/**
+ * geni_se_set_perf_opp() - Set performance OPP for GENI SE by frequency.
+ * @se: Pointer to the struct geni_se instance.
+ * @clk_freq: The requested clock frequency.
+ *
+ * Finds the nearest operating performance point (OPP) for the given
+ * clock frequency and applies it to the SE's performance device.
+ *
+ * Return: 0 on success, or a negative error code on failure.
+ */
+int geni_se_set_perf_opp(struct geni_se *se, unsigned long clk_freq)
+{
+ struct device *perf_dev = se->pd_list->pd_devs[DOMAIN_IDX_PERF];
+ struct dev_pm_opp *opp;
+ int ret;
+
+ opp = dev_pm_opp_find_freq_floor(perf_dev, &clk_freq);
+ if (IS_ERR(opp)) {
+ dev_err(se->dev, "failed to find opp for freq %lu\n", clk_freq);
+ return PTR_ERR(opp);
+ }
+
+ ret = dev_pm_opp_set_opp(perf_dev, opp);
+ dev_pm_opp_put(opp);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(geni_se_set_perf_opp);
+
/**
* geni_se_domain_attach() - Attach power domains to a GENI SE device.
* @se: Pointer to the geni_se structure representing the GENI SE device.
diff --git a/include/linux/soc/qcom/geni-se.h b/include/linux/soc/qcom/geni-se.h
index 5f75159c5531..c5e6ab85df09 100644
--- a/include/linux/soc/qcom/geni-se.h
+++ b/include/linux/soc/qcom/geni-se.h
@@ -550,5 +550,9 @@ int geni_se_resources_deactivate(struct geni_se *se);
int geni_load_se_firmware(struct geni_se *se, enum geni_se_protocol_type protocol);
int geni_se_domain_attach(struct geni_se *se);
+
+int geni_se_set_perf_level(struct geni_se *se, unsigned long level);
+
+int geni_se_set_perf_opp(struct geni_se *se, unsigned long clk_freq);
#endif
#endif
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 12 Jan 2026 16:17:16 +0530",
"thread_id": "61ef66ac-3919-48e3-a78e-eef54001ae6f@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
|
Add DT bindings for the QUP GENI I2C controller on SA8255p platforms.
The SA8255p platform abstracts resources such as clocks, interconnects
and GPIO pin configuration in firmware. The SCMI power and perf
protocols are used to request resource configuration.
The SA8255p platform does not require the Serial Engine (SE) common
properties, as the SE firmware is loaded and managed by the TrustZone
(TZ) secure environment.
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@oss.qualcomm.com>
Co-developed-by: Nikunj Kela <quic_nkela@quicinc.com>
Signed-off-by: Nikunj Kela <quic_nkela@quicinc.com>
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v2->v3:
- Added Reviewed-by tag
v1->v2:
Krzysztof:
- Added dma properties in example node
- Removed minItems from power-domains property
- Added in commit text about common property
---
.../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 +++++++++++++++++++
1 file changed, 64 insertions(+)
create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml
diff --git a/Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml b/Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml
new file mode 100644
index 000000000000..a61e40b5cbc1
--- /dev/null
+++ b/Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml
@@ -0,0 +1,64 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/i2c/qcom,sa8255p-geni-i2c.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Qualcomm SA8255p QUP GENI I2C Controller
+
+maintainers:
+ - Praveen Talari <praveen.talari@oss.qualcomm.com>
+
+properties:
+ compatible:
+ const: qcom,sa8255p-geni-i2c
+
+ reg:
+ maxItems: 1
+
+ dmas:
+ maxItems: 2
+
+ dma-names:
+ items:
+ - const: tx
+ - const: rx
+
+ interrupts:
+ maxItems: 1
+
+ power-domains:
+ maxItems: 2
+
+ power-domain-names:
+ items:
+ - const: power
+ - const: perf
+
+required:
+ - compatible
+ - reg
+ - interrupts
+ - power-domains
+
+allOf:
+ - $ref: /schemas/i2c/i2c-controller.yaml#
+
+unevaluatedProperties: false
+
+examples:
+ - |
+ #include <dt-bindings/interrupt-controller/arm-gic.h>
+ #include <dt-bindings/dma/qcom-gpi.h>
+
+ i2c@a90000 {
+ compatible = "qcom,sa8255p-geni-i2c";
+ reg = <0xa90000 0x4000>;
+ interrupts = <GIC_SPI 357 IRQ_TYPE_LEVEL_HIGH>;
+ dmas = <&gpi_dma0 0 0 QCOM_GPI_I2C>,
+ <&gpi_dma0 1 0 QCOM_GPI_I2C>;
+ dma-names = "tx", "rx";
+ power-domains = <&scmi0_pd 0>, <&scmi0_dvfs 0>;
+ power-domain-names = "power", "perf";
+ };
+...
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 12 Jan 2026 16:17:17 +0530",
"thread_id": "61ef66ac-3919-48e3-a78e-eef54001ae6f@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
|
Refactor resource initialization in geni_i2c_probe() by introducing a
geni_i2c_resources_init() function that uses the common
geni_se_resources_init() helper and the clock frequency mapping, making
the probe function cleaner.
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v1->v2:
- Updated commit text.
---
drivers/i2c/busses/i2c-qcom-geni.c | 53 ++++++++++++------------------
1 file changed, 21 insertions(+), 32 deletions(-)
diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
index 58c32ffbd150..a4b13022e508 100644
--- a/drivers/i2c/busses/i2c-qcom-geni.c
+++ b/drivers/i2c/busses/i2c-qcom-geni.c
@@ -1042,6 +1042,23 @@ static int geni_i2c_init(struct geni_i2c_dev *gi2c)
return ret;
}
+static int geni_i2c_resources_init(struct geni_i2c_dev *gi2c)
+{
+ int ret;
+
+ ret = geni_se_resources_init(&gi2c->se);
+ if (ret)
+ return ret;
+
+ ret = geni_i2c_clk_map_idx(gi2c);
+ if (ret)
+ return dev_err_probe(gi2c->se.dev, ret, "Invalid clk frequency %d Hz\n",
+ gi2c->clk_freq_out);
+
+ return geni_icc_set_bw_ab(&gi2c->se, GENI_DEFAULT_BW, GENI_DEFAULT_BW,
+ Bps_to_icc(gi2c->clk_freq_out));
+}
+
static int geni_i2c_probe(struct platform_device *pdev)
{
struct geni_i2c_dev *gi2c;
@@ -1061,16 +1078,6 @@ static int geni_i2c_probe(struct platform_device *pdev)
desc = device_get_match_data(&pdev->dev);
- if (desc && desc->has_core_clk) {
- gi2c->core_clk = devm_clk_get(dev, "core");
- if (IS_ERR(gi2c->core_clk))
- return PTR_ERR(gi2c->core_clk);
- }
-
- gi2c->se.clk = devm_clk_get(dev, "se");
- if (IS_ERR(gi2c->se.clk) && !has_acpi_companion(dev))
- return PTR_ERR(gi2c->se.clk);
-
ret = device_property_read_u32(dev, "clock-frequency",
&gi2c->clk_freq_out);
if (ret) {
@@ -1085,16 +1092,15 @@ static int geni_i2c_probe(struct platform_device *pdev)
if (gi2c->irq < 0)
return gi2c->irq;
- ret = geni_i2c_clk_map_idx(gi2c);
- if (ret)
- return dev_err_probe(dev, ret, "Invalid clk frequency %d Hz\n",
- gi2c->clk_freq_out);
-
gi2c->adap.algo = &geni_i2c_algo;
init_completion(&gi2c->done);
spin_lock_init(&gi2c->lock);
platform_set_drvdata(pdev, gi2c);
+ ret = geni_i2c_resources_init(gi2c);
+ if (ret)
+ return ret;
+
/* Keep interrupts disabled initially to allow for low-power modes */
ret = devm_request_irq(dev, gi2c->irq, geni_i2c_irq, IRQF_NO_AUTOEN,
dev_name(dev), gi2c);
@@ -1107,23 +1113,6 @@ static int geni_i2c_probe(struct platform_device *pdev)
gi2c->adap.dev.of_node = dev->of_node;
strscpy(gi2c->adap.name, "Geni-I2C", sizeof(gi2c->adap.name));
- ret = geni_icc_get(&gi2c->se, desc ? desc->icc_ddr : "qup-memory");
- if (ret)
- return ret;
- /*
- * Set the bus quota for core and cpu to a reasonable value for
- * register access.
- * Set quota for DDR based on bus speed.
- */
- gi2c->se.icc_paths[GENI_TO_CORE].avg_bw = GENI_DEFAULT_BW;
- gi2c->se.icc_paths[CPU_TO_GENI].avg_bw = GENI_DEFAULT_BW;
- if (!desc || desc->icc_ddr)
- gi2c->se.icc_paths[GENI_TO_DDR].avg_bw = Bps_to_icc(gi2c->clk_freq_out);
-
- ret = geni_icc_set_bw(&gi2c->se);
- if (ret)
- return ret;
-
gi2c->suspended = 1;
pm_runtime_set_suspended(gi2c->se.dev);
pm_runtime_set_autosuspend_delay(gi2c->se.dev, I2C_AUTO_SUSPEND_DELAY);
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 12 Jan 2026 16:17:19 +0530",
"thread_id": "61ef66ac-3919-48e3-a78e-eef54001ae6f@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v3 00/12] Enable I2C on SA8255p Qualcomm platforms
|
|
Move the serial engine setup into a new geni_i2c_init() function for a
cleaner probe function, and use the runtime PM APIs instead of direct
clock-related APIs for better resource management.
This also makes the serial engine initialization reusable for features
such as hibernation and deep sleep, where the hardware context is lost.
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v1->v2:
Bjorn:
- Updated commit text.
---
drivers/i2c/busses/i2c-qcom-geni.c | 154 ++++++++++++++---------------
1 file changed, 73 insertions(+), 81 deletions(-)
diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
index 3a04016db2c3..58c32ffbd150 100644
--- a/drivers/i2c/busses/i2c-qcom-geni.c
+++ b/drivers/i2c/busses/i2c-qcom-geni.c
@@ -976,10 +976,75 @@ static int setup_gpi_dma(struct geni_i2c_dev *gi2c)
return ret;
}
+static int geni_i2c_init(struct geni_i2c_dev *gi2c)
+{
+ const struct geni_i2c_desc *desc = NULL;
+ u32 proto, tx_depth;
+ bool fifo_disable;
+ int ret;
+
+ ret = pm_runtime_resume_and_get(gi2c->se.dev);
+ if (ret < 0) {
+ dev_err(gi2c->se.dev, "error turning on device :%d\n", ret);
+ return ret;
+ }
+
+ proto = geni_se_read_proto(&gi2c->se);
+ if (proto == GENI_SE_INVALID_PROTO) {
+ ret = geni_load_se_firmware(&gi2c->se, GENI_SE_I2C);
+ if (ret) {
+ dev_err_probe(gi2c->se.dev, ret, "i2c firmware load failed ret: %d\n", ret);
+ goto err;
+ }
+ } else if (proto != GENI_SE_I2C) {
+ ret = dev_err_probe(gi2c->se.dev, -ENXIO, "Invalid proto %d\n", proto);
+ goto err;
+ }
+
+ desc = device_get_match_data(gi2c->se.dev);
+ if (desc && desc->no_dma_support)
+ fifo_disable = false;
+ else
+ fifo_disable = readl_relaxed(gi2c->se.base + GENI_IF_DISABLE_RO) & FIFO_IF_DISABLE;
+
+ if (fifo_disable) {
+ /* FIFO is disabled, so we can only use GPI DMA */
+ gi2c->gpi_mode = true;
+ ret = setup_gpi_dma(gi2c);
+ if (ret)
+ goto err;
+
+ dev_dbg(gi2c->se.dev, "Using GPI DMA mode for I2C\n");
+ } else {
+ gi2c->gpi_mode = false;
+ tx_depth = geni_se_get_tx_fifo_depth(&gi2c->se);
+
+ /* I2C Master Hub Serial Elements doesn't have the HW_PARAM_0 register */
+ if (!tx_depth && desc)
+ tx_depth = desc->tx_fifo_depth;
+
+ if (!tx_depth) {
+ ret = dev_err_probe(gi2c->se.dev, -EINVAL,
+ "Invalid TX FIFO depth\n");
+ goto err;
+ }
+
+ gi2c->tx_wm = tx_depth - 1;
+ geni_se_init(&gi2c->se, gi2c->tx_wm, tx_depth);
+ geni_se_config_packing(&gi2c->se, BITS_PER_BYTE,
+ PACKING_BYTES_PW, true, true, true);
+
+ dev_dbg(gi2c->se.dev, "i2c fifo/se-dma mode. fifo depth:%d\n", tx_depth);
+ }
+
+err:
+ pm_runtime_put(gi2c->se.dev);
+ return ret;
+}
+
static int geni_i2c_probe(struct platform_device *pdev)
{
struct geni_i2c_dev *gi2c;
- u32 proto, tx_depth, fifo_disable;
int ret;
struct device *dev = &pdev->dev;
const struct geni_i2c_desc *desc = NULL;
@@ -1059,100 +1124,27 @@ static int geni_i2c_probe(struct platform_device *pdev)
if (ret)
return ret;
- ret = clk_prepare_enable(gi2c->core_clk);
- if (ret)
- return ret;
-
- ret = geni_se_resources_on(&gi2c->se);
- if (ret) {
- dev_err_probe(dev, ret, "Error turning on resources\n");
- goto err_clk;
- }
- proto = geni_se_read_proto(&gi2c->se);
- if (proto == GENI_SE_INVALID_PROTO) {
- ret = geni_load_se_firmware(&gi2c->se, GENI_SE_I2C);
- if (ret) {
- dev_err_probe(dev, ret, "i2c firmware load failed ret: %d\n", ret);
- goto err_resources;
- }
- } else if (proto != GENI_SE_I2C) {
- ret = dev_err_probe(dev, -ENXIO, "Invalid proto %d\n", proto);
- goto err_resources;
- }
-
- if (desc && desc->no_dma_support)
- fifo_disable = false;
- else
- fifo_disable = readl_relaxed(gi2c->se.base + GENI_IF_DISABLE_RO) & FIFO_IF_DISABLE;
-
- if (fifo_disable) {
- /* FIFO is disabled, so we can only use GPI DMA */
- gi2c->gpi_mode = true;
- ret = setup_gpi_dma(gi2c);
- if (ret)
- goto err_resources;
-
- dev_dbg(dev, "Using GPI DMA mode for I2C\n");
- } else {
- gi2c->gpi_mode = false;
- tx_depth = geni_se_get_tx_fifo_depth(&gi2c->se);
-
- /* I2C Master Hub Serial Elements doesn't have the HW_PARAM_0 register */
- if (!tx_depth && desc)
- tx_depth = desc->tx_fifo_depth;
-
- if (!tx_depth) {
- ret = dev_err_probe(dev, -EINVAL,
- "Invalid TX FIFO depth\n");
- goto err_resources;
- }
-
- gi2c->tx_wm = tx_depth - 1;
- geni_se_init(&gi2c->se, gi2c->tx_wm, tx_depth);
- geni_se_config_packing(&gi2c->se, BITS_PER_BYTE,
- PACKING_BYTES_PW, true, true, true);
-
- dev_dbg(dev, "i2c fifo/se-dma mode. fifo depth:%d\n", tx_depth);
- }
-
- clk_disable_unprepare(gi2c->core_clk);
- ret = geni_se_resources_off(&gi2c->se);
- if (ret) {
- dev_err_probe(dev, ret, "Error turning off resources\n");
- goto err_dma;
- }
-
- ret = geni_icc_disable(&gi2c->se);
- if (ret)
- goto err_dma;
-
gi2c->suspended = 1;
pm_runtime_set_suspended(gi2c->se.dev);
pm_runtime_set_autosuspend_delay(gi2c->se.dev, I2C_AUTO_SUSPEND_DELAY);
pm_runtime_use_autosuspend(gi2c->se.dev);
pm_runtime_enable(gi2c->se.dev);
+ ret = geni_i2c_init(gi2c);
+ if (ret < 0) {
+ pm_runtime_disable(gi2c->se.dev);
+ return ret;
+ }
+
ret = i2c_add_adapter(&gi2c->adap);
if (ret) {
dev_err_probe(dev, ret, "Error adding i2c adapter\n");
pm_runtime_disable(gi2c->se.dev);
- goto err_dma;
+ return ret;
}
dev_dbg(dev, "Geni-I2C adaptor successfully added\n");
- return ret;
-
-err_resources:
- geni_se_resources_off(&gi2c->se);
-err_clk:
- clk_disable_unprepare(gi2c->core_clk);
-
- return ret;
-
-err_dma:
- release_gpi_dma(gi2c);
-
return ret;
}
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 12 Jan 2026 16:17:18 +0530",
"thread_id": "61ef66ac-3919-48e3-a78e-eef54001ae6f@oss.qualcomm.com.mbox.gz"
}
|