source: large_string (2 classes)
subject: large_string (112 classes)
code: large_string (112 classes)
critique: large_string (lengths 61 to 3.04M)
metadata: dict
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
From: Mukesh Rathor <mrathor@linux.microsoft.com>

Implement passthru of PCI devices to unprivileged virtual machines (VMs) when Linux is running as a privileged VM on the Microsoft Hyper-V hypervisor. This support is made to fit within the workings of the VFIO framework, and any VMM needing to use it must use the VFIO subsystem. Both full device passthru and SR-IOV based VFs are supported. There are 3 cases where Linux can run as a privileged VM (aka MSHV): baremetal root (meaning Hyper-V+Linux), L1VH, and nested.

At a high level, the hypervisor supports traditional mapped iommu domains that use explicit map and unmap hypercalls for mapping and unmapping guest RAM into the iommu subsystem. Hyper-V also has a concept of direct attach devices, whereby the iommu subsystem simply uses the guest HW page table (ept/npt/..). This series adds support for both, and both are made to work in the VFIO type1 subsystem. While this Part I focuses on memory mappings, the upcoming Part II will focus on irq bypass along with some minor irq remapping updates.

This patch series was tested using Cloud Hypervisor version 48. QEMU support of MSHV is in the works, and it will be extended to include PCI passthru and SR-IOV support in the near future.
Based on: 8f0b4cce4481 (origin/hyperv-next)

Thanks,
-Mukesh

Mukesh Rathor (15):
  iommu/hyperv: rename hyperv-iommu.c to hyperv-irq.c
  x86/hyperv: cosmetic changes in irqdomain.c for readability
  x86/hyperv: add insufficient memory support in irqdomain.c
  mshv: Provide a way to get partition id if running in a VMM process
  mshv: Declarations and definitions for VFIO-MSHV bridge device
  mshv: Implement mshv bridge device for VFIO
  mshv: Add ioctl support for MSHV-VFIO bridge device
  PCI: hv: rename hv_compose_msi_msg to hv_vmbus_compose_msi_msg
  mshv: Import data structs around device domains and irq remapping
  PCI: hv: Build device id for a VMBus device
  x86/hyperv: Build logical device ids for PCI passthru hcalls
  x86/hyperv: Implement hyperv virtual iommu
  x86/hyperv: Basic interrupt support for direct attached devices
  mshv: Remove mapping of mmio space during map user ioctl
  mshv: Populate mmio mappings for PCI passthru

 MAINTAINERS                         |    1 +
 arch/arm64/include/asm/mshyperv.h   |   15 +
 arch/x86/hyperv/irqdomain.c         |  314 ++++++---
 arch/x86/include/asm/mshyperv.h     |   21 +
 arch/x86/kernel/pci-dma.c           |    2 +
 drivers/hv/Makefile                 |    3 +-
 drivers/hv/mshv_root.h              |   24 +
 drivers/hv/mshv_root_main.c         |  296 +++++++-
 drivers/hv/mshv_vfio.c              |  210 ++++++
 drivers/iommu/Kconfig               |    1 +
 drivers/iommu/Makefile              |    2 +-
 drivers/iommu/hyperv-iommu.c        | 1004 +++++++++++++++++++++------
 drivers/iommu/hyperv-irq.c          |  330 +++++++++
 drivers/pci/controller/pci-hyperv.c |  207 ++++--
 include/asm-generic/mshyperv.h      |    1 +
 include/hyperv/hvgdk_mini.h         |   11 +
 include/hyperv/hvhdk_mini.h         |  112 +++
 include/linux/hyperv.h              |    6 +
 include/uapi/linux/mshv.h           |   31 +
 19 files changed, 2182 insertions(+), 409 deletions(-)
 create mode 100644 drivers/hv/mshv_vfio.c
 create mode 100644 drivers/iommu/hyperv-irq.c
--
2.51.2.vfs.0.1
On Mon, Jan 19, 2026 at 10:42:24PM -0800, Mukesh R wrote: <snip> Shouldn't the inner struct be packed instead? Why should the union be packed? Thanks, Stanislav
{ "author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>", "date": "Tue, 20 Jan 2026 14:17:24 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
On Mon, Jan 19, 2026 at 10:42:25PM -0800, Mukesh R wrote: Why not move this static function definition above the caller instead of defining the prototype? This variable is redundant. It looks like this value always fits into 32 bits, so what is the value in returning 64 bits? Thanks, Stanislav
{ "author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>", "date": "Tue, 20 Jan 2026 14:22:23 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
On Mon, Jan 19, 2026 at 10:42:26PM -0800, Mukesh R wrote: Should this one be renamed to hv_build_devid_type_vmbus() to align with the other two function names? Thanks, Stanislav
{ "author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>", "date": "Tue, 20 Jan 2026 14:27:33 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
On Mon, Jan 19, 2026 at 10:42:27PM -0800, Mukesh R wrote: <snip> The return above is never reached. The hvdom variable is redundant. Why goto here and not return NULL like above? Shouldn't this be modified iff the detach succeeded? There is some inconsistency in the naming and behaviour of paired functions: 1. The pair of hv_iommu_unmap_pages is called hv_iommu_map_pgs. 2. hv_iommu_map_pgs doesn't print status in case of error. It would be much better to keep this code consistent. hv_def_identity_dom is a static global variable. Why not initialize hv_def_identity_dom upon definition instead of introducing a new function? It looks weird to initialize an object after creating sysfs entries for it. It should be the other way around. Thanks, Stanislav
{ "author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>", "date": "Tue, 20 Jan 2026 16:12:42 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
On Mon, Jan 19, 2026 at 10:42:28PM -0800, Mukesh R wrote: l1vh and root are mutually exclusive partitions. If you wanted to highlight that it's l1vh itself and not its child guest, then the term "l1vh parent" would do. It looks like the only special case is for attached logical devices; otherwise hv_current_partition_id is used. Can the logic be simplified here? Thanks, Stanislav
{ "author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>", "date": "Tue, 20 Jan 2026 16:47:07 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
Hi Mukesh, kernel test robot noticed the following build warnings: [auto build test WARNING on tip/x86/core] [also build test WARNING on pci/next pci/for-linus arm64/for-next/core clk/clk-next soc/for-next linus/master v6.19-rc6 next-20260120] [cannot apply to arnd-asm-generic/master] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch#_base_tree_information] url: https://github.com/intel-lab-lkp/linux/commits/Mukesh-R/iommu-hyperv-rename-hyperv-iommu-c-to-hyperv-irq-c/20260120-145832 base: tip/x86/core patch link: https://lore.kernel.org/r/20260120064230.3602565-4-mrathor%40linux.microsoft.com patch subject: [PATCH v0 03/15] x86/hyperv: add insufficient memory support in irqdomain.c config: i386-randconfig-053-20260120 (https://download.01.org/0day-ci/archive/20260121/202601210731.f1WLdgcO-lkp@intel.com/config) compiler: gcc-14 (Debian 14.2.0-19) 14.2.0 If you fix the issue in a separate patch/commit (i.e. 
not just a new version of the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202601210731.f1WLdgcO-lkp@intel.com/

cocci warnings: (new ones prefixed by >>)

vim +90 arch/x86/hyperv/irqdomain.c

    72
    73  static int hv_map_interrupt(u64 ptid, union hv_device_id device_id, bool level,
    74                              int cpu, int vector,
    75                              struct hv_interrupt_entry *ret_entry)
    76  {
    77          u64 status;
    78          int rc, deposit_pgs = 16; /* don't loop forever */
    79
    80          while (deposit_pgs--) {
    81                  status = hv_map_interrupt_hcall(ptid, device_id, level, cpu,
    82                                                  vector, ret_entry);
    83
    84                  if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY)
    85                          break;
    86
    87                  rc = hv_call_deposit_pages(NUMA_NO_NODE, ptid, 1);
    88                  if (rc)
    89                          break;
  > 90          };
    91
    92          if (!hv_result_success(status))
    93                  hv_status_err(status, "\n");
    94
    95          return hv_result_to_errno(status);
    96  }
    97

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
{ "author": "kernel test robot <lkp@intel.com>", "date": "Wed, 21 Jan 2026 08:53:50 +0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
On Mon, Jan 19, 2026 at 10:42:29PM -0800, Mukesh R wrote: No need to update ret here: it's 0 after the previous call. Thanks, Stanislav
{ "author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>", "date": "Tue, 20 Jan 2026 17:41:39 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
On Mon, Jan 19, 2026 at 10:42:30PM -0800, Mukesh R wrote: A semaphore can't be taken under a spinlock. Get it instead. Why is this check needed again? The region type is stored on the region itself, and the type is checked on the caller side. This call needs to be protected by the spinlock. Thanks, Stanislav
{ "author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>", "date": "Tue, 20 Jan 2026 17:53:29 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
Hi Mukesh,

On Mon, 19 Jan 2026 22:42:27 -0800 Mukesh R <mrathor@linux.microsoft.com> wrote:

Given we are also developing a guest iommu driver on hyperv, I think it is more clear to name them accordingly. Perhaps hyperv-iommu-root.c? Will this driver be x86 only? DMA and IRQ remapping should be separate.

I don't think this is needed since this driver cannot be a module.

Should not depend on Intel's DMAR.

Rename to num_dev_attached?

This function is not iommu specific. Maybe move it to mshv code?

Attached domain can change anytime; what guarantee does the caller have?

nit: use consistent coding style, inverse Christmas tree.

Why is this an error if user input chooses not to do direct attach?

This is true only when unique_id wraps around, right? Then this driver stops working? Can you use an IDR for the unique_id and free it as you detach, instead of doing this cyclic allocation?

You could free the domid here, no?

The IOMMU driver should be agnostic to the type of consumer, whether a userspace driver or a VM.

This comment is not necessary.

This does not match the upstream kernel prototype; which kernel version is this based on?

I will stop here for now.

    struct iommu_domain_ops {
        int (*attach_dev)(struct iommu_domain *domain, struct device *dev);

This could be initialized statically.
{ "author": "Jacob Pan <jacob.pan@linux.microsoft.com>", "date": "Wed, 21 Jan 2026 21:18:06 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
On 1/19/2026 10:42 PM, Mukesh R wrote: This should go inside the #if IS_ENABLED(CONFIG_MSHV_ROOT) section.
{ "author": "Nuno Das Neves <nunodasneves@linux.microsoft.com>", "date": "Fri, 23 Jan 2026 10:23:04 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
On 1/19/2026 10:42 PM, Mukesh R wrote: With this commit, the IOCTL number is exposed to userspace but it doesn't work. Ideally the IOCTL number should be added in the commit where it becomes usable.
{ "author": "Nuno Das Neves <nunodasneves@linux.microsoft.com>", "date": "Fri, 23 Jan 2026 10:25:36 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
On 1/19/2026 10:42 PM, Mukesh R wrote:

Since the code is very similar to Wei's original commit, the way I'd recommend to do it is:
1. Change the commit author to Wei, using git commit --amend --author= and
2. Put his signed-off line before yours:

Signed-off-by: Wei Liu <wei.liu@kernel.org>
Signed-off-by: Mukesh Rathor <mrathor@linux.microsoft.com>

This shows he is the author of the commit but you ported it. If you feel you changed it enough that it should be considered co-authored, you can instead keep your authorship of the commit and put:

Co-developed-by: Wei Liu <wei.liu@kernel.org>
Signed-off-by: Wei Liu <wei.liu@kernel.org>
Signed-off-by: Mukesh Rathor <mrathor@linux.microsoft.com>
{ "author": "Nuno Das Neves <nunodasneves@linux.microsoft.com>", "date": "Fri, 23 Jan 2026 10:32:09 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
On 1/19/2026 10:42 PM, Mukesh R wrote: What is the reason for having this as a separate commit from patch 15? It seems like removing this code and adding the mmio intercept handling could be done in one patch.
{ "author": "Nuno Das Neves <nunodasneves@linux.microsoft.com>", "date": "Fri, 23 Jan 2026 10:34:55 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
On 1/23/26 10:25, Nuno Das Neves wrote:

Correct, I switched it because the next patch won't compile without it, as it needs the declarations here. It could be combined into one big patch, but I think normally one would not expect full functionality until the release is certified to be feature compliant anyway. Hope that makes sense.

Thanks,
-Mukesh
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Fri, 23 Jan 2026 16:36:28 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
On 1/23/26 10:32, Nuno Das Neves wrote: Perfect! Thank you, that is exactly the information I was trying to seek... makes sense. Thanks, -Mukesh
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Fri, 23 Jan 2026 16:37:34 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
On 1/20/26 14:17, Stanislav Kinsburskii wrote: From GCC docs: Specifying this attribute for struct and union types is equivalent to specifying the packed attribute on each of the structure or union members. Thanks, -Mukesh
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Fri, 23 Jan 2026 16:38:56 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
On 1/20/26 14:22, Stanislav Kinsburskii wrote:

Did you see the function implementation? It has other dependencies that come later; it would need a code reorg.

Thanks,
-Mukesh

Not really. It helps with debug by putting a quick print, and is harmless. The ABI has the device id defined as 64 bits where this is assigned.

Thanks,
-Mukesh
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Fri, 23 Jan 2026 16:42:54 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
On 1/20/26 14:27, Stanislav Kinsburskii wrote:

No, because hyperv only defines two types of device ids, and it would unnecessarily add to confusion. vmbus uses one of the two types of device ids.
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Fri, 23 Jan 2026 16:44:01 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
On 1/20/26 16:12, Stanislav Kinsburskii wrote:

Some debug code there got removed. Will fix in next version.

We want to still free the domain and not let it get stuck. The purpose is more to make sure detach was called before domain free.

The pair of hv_iommu_unmap_pages is hv_iommu_map_pages right above. hv_iommu_map_pgs could be renamed to hv_iommu_map_pgs_hcall I suppose.

it does: hv_status_err(status, "\n"); <==============

Originally, it was a function. I changed it to static, but during 6.6 review I changed it back to a function. I can't remember why, but it is pretty harmless.

We may add more domains, for example a null domain, to the initialization in the future.

Not sure if it should be; it is much easier to remove a sysfs entry than other cleanup, even though iommu_device_unregister is there. I am sure we'll add more code here, probably why it was originally done this way.

Thanks,
-Mukesh

...snip...
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Fri, 23 Jan 2026 17:26:19 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
On 1/21/26 21:18, Jacob Pan wrote:

well, l1vh is not quite root, more like a parent. But we've been using l1vh root loosely to mean l1vh parent, so probably ok to rename it to hyperv-iommu-root.c. I prefer not calling it parent or something like that.

Yes for now.

not sure i follow. Well, it is getting the information from mshv by calling a function there for iommu, and it is not needed if no HYPER_IOMMU. So this is probably the best place for it.

Not sure I understand what can change: the device moving from attached to non-attached? or the domain getting deleted? In any case, this is called from leaf functions, so that should not happen... and it will return false if the device did somehow get removed.

Like the error message says: on l1vh, direct attaches of host devices (eg dpdk) are not supported, and l1vh only does direct attaches. IOW, no host devices on l1vh.

Correct. It's a u32, so if my math is right, and a device is attached every second, it will take 136 years to wrap! Did I get that right?

sorry, don't follow what you mean by domid, you mean unique_id?

As I mentioned in the cover letter: Based on: 8f0b4cce4481 (origin/hyperv-next), which is now 6.19 based. I think you got it backwards, 6.6 has this. 6.19 has an extra parameter.

Thanks,
-Mukesh
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Fri, 23 Jan 2026 18:01:29 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
On 1/20/26 16:47, Stanislav Kinsburskii wrote:

We've been loosely using "l1vh root" to mean "privileged l1vh" as opposed to l1vh guests. I think that is fine. "l1vh parent" is confusing, as it may also refer to l1vh's parent, which would be the host. So as long as the context is clear, we are ok.

Could be, but at the cost of upfront clarity. This nicely tells the reader that a logical ID has different cases, whereas PCI does not. End instructions are the same.

Thanks,
-Mukesh
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Fri, 23 Jan 2026 18:08:49 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
On 1/23/26 10:34, Nuno Das Neves wrote:

Just ease of review, and of porting patches from this branch to that branch, from that release to this release... I am sure someone would have asked for this to be a separate patch :).

Thanks,
-Mukesh
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Fri, 23 Jan 2026 18:12:14 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
On 1/20/26 17:53, Stanislav Kinsburskii wrote:

Yeah, something didn't feel right here and I meant to recheck; now I regret rushing to submit the patch. Rethinking, I think the pt_mem_regions_lock is not needed to protect the uaddr because unmap will properly serialize via the mm lock.

To make sure the region did not change. This check is under lock.

This is sort of a fast path to bail. We recheck under the partition lock above.

Thanks,
-Mukesh
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Fri, 23 Jan 2026 18:19:15 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
On 1/20/26 13:50, Jacob Pan wrote:

Ok, I can add something, but l1vh was well introduced if you search the mshv commits for "l1vh".

sure.

Yeah, I was hoping we could get by for now without it. At least in the case of Cloud Hypervisor, the entire guest ram is mapped anyway. We can document it and work on enhancements, which are much easier once we have a baseline. For now, it's a paging domain with all pages pinned.. :).
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Fri, 23 Jan 2026 18:27:34 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
On Fri, Jan 23, 2026 at 05:26:19PM -0800, Mukesh R wrote:

<snip>

How can one debug subsequent errors if num_attchd is decremented unconditionally? In reality the device is left attached, but the related kernel metadata is gone.

hv_iommu_map_pages is a wrapper around hv_iommu_map_pgs, while hv_iommu_unmap_pages is a wrapper around the corresponding hypercall. That's the inconsistency I meant.

It does not. I guess you are confusing it with some other function. Here is the function:

Sysfs provides user space access to kernel objects. If the object is not initialized, it's not only a useless sysfs entry, but also a potential cause for a kernel panic if user space tries to access this entry before the object is initialized.

Thanks,
Stanislav
{ "author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>", "date": "Mon, 26 Jan 2026 07:57:35 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
On Fri, Jan 23, 2026 at 06:19:15PM -0800, Mukesh R wrote:

How can this happen? One can't change a VMA type without unmapping it first. And unmapping it leads to kernel MMIO region state dangling around without a corresponding user space mapping. This is similar to dangling pinned regions and should likely be addressed the same way, by utilizing MMU notifiers to destroy memory regions when the VMA is detached.

Accessing the list of regions without the lock is unsafe.

Thanks,
Stanislav
{ "author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>", "date": "Mon, 26 Jan 2026 10:15:53 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
On Fri, Jan 23, 2026 at 04:42:54PM -0800, Mukesh R wrote:

Why not place the caller side after the function definition then?

Thanks,
Stanislav
{ "author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>", "date": "Mon, 26 Jan 2026 12:50:03 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
On 1/26/26 07:57, Stanislav Kinsburskii wrote:

An error is printed in case of a failed detach. If there is a panic, at least you can get some info about the device. Metadata in the hypervisor stays around if it failed.

Unmap does not need an intermediate function. We print an error upon its failure in hv_iommu_map_pages():

	if (!hv_result_success(status)) {
		size_t done_size = done << HV_HYP_PAGE_SHIFT;

		hv_status_err(status, "pgs:%lx/%lx iova:%lx\n", done, npages, iova);

I hear you... but,
 o there is nothing under sysfs to be accessed when created
 o it is during boot
 o it should almost never fail...
 o iommu_device_sysfs_remove is much more lightweight than iommu_device_unregister
 o i expect more to be added there as we enhance it

Thanks,
-Mukesh
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Mon, 26 Jan 2026 19:02:29 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
On 1/26/26 10:15, Stanislav Kinsburskii wrote:

Right, and vm_flags would not have the expected mmio bits then.

I don't think we need that. Either it succeeds, if the region did not change at all, or it just fails.

I am not sure why? This check is done by a vcpu thread, so regions will not have just gone away.

Thanks,
-Mukesh
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Mon, 26 Jan 2026 19:07:22 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
On Mon, Jan 26, 2026 at 07:02:29PM -0800, Mukesh R wrote:

With this approach the only thing left is a kernel message. But if the state is kept intact, one could collect a kernel core and analyze it. And note that there won't be a hypervisor core by default: our main context with the upstreamed version of the driver is L1VH, and a kernel core is the only thing a third-party customer can provide for our analysis.

Thanks,
Stanislav
{ "author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>", "date": "Tue, 27 Jan 2026 10:46:49 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
On Mon, Jan 26, 2026 at 07:07:22PM -0800, Mukesh R wrote:

I'm afraid we do: if the driver mapped a page with the previous memory region, and then the region is unmapped, the page will stay mapped in the hypervisor but will be considered free by the kernel, which in turn will lead to a GPF upon the next allocation.

With pinned regions the issue is similar but less impacting: pages can't be released by user space unmapping and thus will simply be leaked, but the system stays intact. MMIO regions are similar to movable regions in this regard: they don't reference the user pages, and thus this guest region replacement is a straight way to a kernel panic.

This is a shared resource. Multiple VP threads get into this function simultaneously, so there is a race already. But that one we can live with without locking, as they don't mutate the list of the regions. The issue happens when the VMM adds or removes another region, as it mutates the list and races with VP threads doing this lookup.

Thanks,
Stanislav
{ "author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>", "date": "Tue, 27 Jan 2026 10:57:08 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
Hi Mukesh,

On Fri, 23 Jan 2026 18:01:29 -0800 Mukesh R <mrathor@linux.microsoft.com> wrote:

Yeah, something specific and different from what the guest driver will do.

If there is nothing x86-specific in this driver (assuming the hypercalls here are not x86-only), maybe you can move it to the generic startup code.

In the IOMMU subsystem, DMA remapping and IRQ remapping can be turned on/off independently; e.g. you could have an option to turn on IRQ remapping w/o DMA remapping. But here you tied them together.

Ok, maybe move it to mshv after we have a second user. But the function name can be just hv_get_curr_partid(void), no?

I was thinking the device can be attached to a different domain type at runtime, e.g. via sysfs to identity or DMA. But I guess here it is a static attachment, either for l1vh or root.

This hv_no_attdev flag is really confusing to me: by default hv_no_attdev is false, which allows direct attach. And you are saying l1vh allows it. Why does this flag also control host device attachment in l1vh? If you can tell the difference between direct host device attach and other direct attach, why don't you always reject host attach in l1vh? This is still an unnecessary vulnerability.

Yes. Where is this repo?

You are right, this is a very recent change. My bad.
{ "author": "Jacob Pan <jacob.pan@linux.microsoft.com>", "date": "Tue, 27 Jan 2026 11:21:44 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
Hi Mukesh,

On second thought, if the hv_no_attdev knob is only meant to control host domain attach vs. direct attach, then it is irrelevant on L1VH. Would it make more sense to rename it to something like hv_host_disable_direct_attach? That would better reflect its scope, allow it to be ignored under L1VH, and reduce the risk of users misinterpreting or misusing it.
{ "author": "Jacob Pan <jacob.pan@linux.microsoft.com>", "date": "Tue, 27 Jan 2026 14:31:19 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
On Mon, Jan 19, 2026 at 10:42:23PM -0800, Mukesh R wrote:

Don't mix up cleanup changes. Do it in a separate patch.

- Mani

-- 
மணிவண்ணன் சதாசிவம்
{ "author": "Manivannan Sadhasivam <mani@kernel.org>", "date": "Wed, 28 Jan 2026 19:33:57 +0530", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
On Fri, Jan 23, 2026 at 04:42:54PM -0800, Mukesh R wrote:

Such debug prints do not exist now, so there is no need for a variable; drop it.

- Mani

-- 
மணிவண்ணன் சதாசிவம்
{ "author": "Manivannan Sadhasivam <mani@kernel.org>", "date": "Wed, 28 Jan 2026 20:06:21 +0530", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
On 1/27/26 14:31, Jacob Pan wrote:

It would, but it is a kernel parameter and needs to be terse. It will be documented properly though, so we should be ok.

Thanks,
-Mukesh
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Fri, 30 Jan 2026 14:10:57 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
On 1/27/26 10:57, Stanislav Kinsburskii wrote:

There are no RAM pages for MMIO regions. Also, we don't do much with MMIO regions other than tell the hypervisor about them.

Thanks,
-Mukesh
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Fri, 30 Jan 2026 14:17:24 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
On 1/27/26 10:46, Stanislav Kinsburskii wrote:

Again, most of the Linux state is cleaned up; the only state is in the hypervisor, and the hypervisor can totally protect itself and the devices. So there is not much in the kernel core, as it got cleaned up already. Think of this as an additional check; we can remove it in the future after it stands the test of time. Until then, every debugging bit helps.

Wei can correct me, but we are not only L1VH focused here. There is work going on on all fronts.

Thanks,
-Mukesh
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Fri, 30 Jan 2026 14:51:19 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
On 1/27/26 11:21, Jacob Pan wrote:

It's x86 specific: x86_init.iommu.iommu_init = hv_iommu_init

Oh, you are talking about the config option; yeah, I will move CONFIG_IRQ_REMAP from Kconfig to here.

It could, but by convention all public funcs here are hv_iommu_xxx, and other reviewers might object...

We really need a virt/mshv/ sub directory... not sure if it's worth creating now for just one function, so maybe we just live with it for now. We do have a work item to move some things from drivers/hv to virt/mshv/, so this can get added to that whenever that happens.

That is correct. It is extra work to support that if there is a good usecase/demand.

Well, at the time of this design/coding, my understanding was we'd have mapped devices on L1VH also. But now it looks like that would be a bit later than sooner... unless AI bots start dumping code of course :) :)

I could remove it from the if statement and add it when the support is added, but it is harmless and one less thing to remember. Device passthru will fail and will not cause any corruption or data theft issues... can make it u64 if it gives extra peace.

Not worth all that mumbo jumbo for an almost-never-gonna-happen case.

No, it's just a sequential number with no tracking of what's used.

https://git.kernel.org/pub/scm/linux/kernel/git/hyperv/linux.git branch: hyperv-next

All our hyperv/mshv related patch submissions are merged there first by Wei.

Thanks,
-Mukesh

.. deleted ......
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Fri, 30 Jan 2026 15:44:37 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
On Fri, Jan 30, 2026 at 02:51:19PM -0800, Mukesh R wrote:

Again, the hypervisor state is not accessible from the kernel core in L1VH.

In upstream, it does not matter what the work is focused on. The only thing that matters is that the functionality is merged and available. Once the code is merged upstream, it becomes available to third-party customers. They can use it in any way they see fit. The only product we support with upstream code is L1VH.

We should keep it crisp, clear, and easy to debug. This change does not help debugging. It only sweeps the issue under the carpet, with no justification.

Thanks,
Stanislav
{ "author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>", "date": "Mon, 2 Feb 2026 08:20:37 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
On Fri, Jan 30, 2026 at 02:17:24PM -0800, Mukesh R wrote:

So, are you saying that the hypervisor does not use these pages and only tracks them? That would make things easier. However, if we later try to map a GPA that is already mapped, will the hypervisor return an error?

Thanks,
Stanislav
{ "author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>", "date": "Mon, 2 Feb 2026 08:30:49 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
Currently, CXL regions that create DAX devices have no mechanism to select the hotplug online policy for kmem regions at region creation time. Users must either rely on a build-time default or manually configure each memory block after hotplug occurs. Additionally, there is no explicit way to choose between device_dax and dax_kmem modes at region creation time - regions default to kmem.

This series addresses both issues by:

1. Plumbing an online_type parameter through the memory hotplug path, from mm/memory_hotplug through the DAX layer, enabling drivers to specify the desired policy (offline, online, online_movable).

2. Adding infrastructure for explicit dax driver selection (kmem vs device) when creating CXL DAX regions.

3. Introducing new CXL region drivers that provide a two-stage binding process with user-configurable policy between region creation and memory hotplug.

The new drivers are:

- cxl_devdax_region: Creates dax_regions that bind to device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions

The sysram_region device exposes an 'online_type' sysfs attribute allowing users to configure the memory online type before hotplug:

  echo region0 > cxl_sysram_region/bind
  echo online_movable > sysram_region0/online_type
  echo sysram_region0 > cxl_dax_kmem_region/bind

This enables explicit control over both the dax driver mode and the memory hotplug policy for CXL memory regions.

In the future, with DCD regions, this will also provide a policy step which dictates how extents will be surfaced and managed (e.g. if the dc region is bound to the sysram driver, it will surface as system memory, while the devdax driver will surface extents as new devdax).
Gregory Price (9):
  mm/memory_hotplug: pass online_type to online_memory_block() via arg
  mm/memory_hotplug: add __add_memory_driver_managed() with online_type arg
  dax: plumb online_type from dax_kmem creators to hotplug
  drivers/cxl,dax: add dax driver mode selection for dax regions
  cxl/core/region: move pmem region driver logic into pmem_region
  cxl/core/region: move dax region device logic into dax_region.c
  cxl/core: add cxl_devdax_region driver for explicit userland region binding
  cxl/core: Add dax_kmem_region and sysram_region drivers
  Documentation/driver-api/cxl: add dax and sysram driver documentation

 Documentation/ABI/testing/sysfs-bus-cxl       |  21 ++
 .../driver-api/cxl/linux/cxl-driver.rst       |  43 +++
 .../driver-api/cxl/linux/dax-driver.rst       |  29 ++
 drivers/cxl/core/Makefile                     |   3 +
 drivers/cxl/core/core.h                       |  11 +
 drivers/cxl/core/dax_region.c                 | 179 ++++++++++
 drivers/cxl/core/pmem_region.c                | 191 +++++++++++
 drivers/cxl/core/port.c                       |   2 +
 drivers/cxl/core/region.c                     | 321 ++----------------
 drivers/cxl/core/sysram_region.c              | 180 ++++++++++
 drivers/cxl/cxl.h                             |  29 ++
 drivers/dax/bus.c                             |   3 +
 drivers/dax/bus.h                             |   7 +-
 drivers/dax/cxl.c                             |   7 +-
 drivers/dax/dax-private.h                     |   2 +
 drivers/dax/hmem/hmem.c                       |   2 +
 drivers/dax/kmem.c                            |  13 +-
 drivers/dax/pmem.c                            |   2 +
 include/linux/dax.h                           |   5 +
 include/linux/memory_hotplug.h                |   3 +
 mm/memory_hotplug.c                           |  95 ++++--
 21 files changed, 826 insertions(+), 322 deletions(-)
 create mode 100644 drivers/cxl/core/dax_region.c
 create mode 100644 drivers/cxl/core/pmem_region.c
 create mode 100644 drivers/cxl/core/sysram_region.c

-- 
2.52.0
Modify online_memory_block() to accept the online type through its arg
parameter rather than calling mhp_get_default_online_type() internally.
This prepares for allowing callers to specify explicit online types.

Update the caller in add_memory_resource() to pass the default online
type via a local variable. No functional change.

Cc: Oscar Salvador <osalvador@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
Signed-off-by: Gregory Price <gourry@gourry.net>
---
 mm/memory_hotplug.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index bc805029da51..87796b617d9e 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1337,7 +1337,9 @@ static int check_hotplug_memory_range(u64 start, u64 size)
 
 static int online_memory_block(struct memory_block *mem, void *arg)
 {
-	mem->online_type = mhp_get_default_online_type();
+	int *online_type = arg;
+
+	mem->online_type = *online_type;
 	return device_online(&mem->dev);
 }
 
@@ -1578,8 +1580,12 @@ int add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags)
 	merge_system_ram_resource(res);
 
 	/* online pages if requested */
-	if (mhp_get_default_online_type() != MMOP_OFFLINE)
-		walk_memory_blocks(start, size, NULL, online_memory_block);
+	if (mhp_get_default_online_type() != MMOP_OFFLINE) {
+		int online_type = mhp_get_default_online_type();
+
+		walk_memory_blocks(start, size, &online_type,
+				   online_memory_block);
+	}
 
 	return ret;
 error:
-- 
2.52.0
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:04:34 -0500", "thread_id": "aYDi8bhxFnvMWl11@gourry-fedora-PF4VCD3F.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
Enable dax kmem driver to select how to online the memory rather than
implicitly depending on the system default. This will allow users of
dax to plumb through a preferred auto-online policy for their region.

Refactor and new interface: Add __add_memory_driver_managed() which
accepts an explicit online_type, and export mhp_get_default_online_type()
so callers can pass it when they want the default behavior.

Refactor: Extract __add_memory_resource() to take an explicit
online_type parameter, and update add_memory_resource() to pass the
system default. No functional change for existing users.

Cc: David Hildenbrand <david@kernel.org>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Gregory Price <gourry@gourry.net>
---
 include/linux/memory_hotplug.h |  3 ++
 mm/memory_hotplug.c            | 91 ++++++++++++++++++++++++----------
 2 files changed, 67 insertions(+), 27 deletions(-)

diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index f2f16cdd73ee..1eb63d1a247d 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -293,6 +293,9 @@ extern int __add_memory(int nid, u64 start, u64 size, mhp_t mhp_flags);
 extern int add_memory(int nid, u64 start, u64 size, mhp_t mhp_flags);
 extern int add_memory_resource(int nid, struct resource *resource,
 			       mhp_t mhp_flags);
+int __add_memory_driver_managed(int nid, u64 start, u64 size,
+				const char *resource_name, mhp_t mhp_flags,
+				int online_type);
 extern int add_memory_driver_managed(int nid, u64 start, u64 size,
 				     const char *resource_name,
 				     mhp_t mhp_flags);
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 87796b617d9e..d3ca95b872bd 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -239,6 +239,7 @@ int mhp_get_default_online_type(void)
 	return mhp_default_online_type;
 }
+EXPORT_SYMBOL_GPL(mhp_get_default_online_type);
 
 void mhp_set_default_online_type(int online_type)
 {
@@ -1490,7 +1491,8 @@ static int create_altmaps_and_memory_blocks(int nid, struct memory_group *group,
  *
  * we are OK calling __meminit stuff here - we have CONFIG_MEMORY_HOTPLUG
  */
-int add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags)
+static int __add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags,
+				 int online_type)
 {
 	struct mhp_params params = { .pgprot = pgprot_mhp(PAGE_KERNEL) };
 	enum memblock_flags memblock_flags = MEMBLOCK_NONE;
@@ -1580,12 +1582,9 @@ int add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags)
 	merge_system_ram_resource(res);
 
 	/* online pages if requested */
-	if (mhp_get_default_online_type() != MMOP_OFFLINE) {
-		int online_type = mhp_get_default_online_type();
-
+	if (online_type != MMOP_OFFLINE)
 		walk_memory_blocks(start, size, &online_type,
 				   online_memory_block);
-	}
 
 	return ret;
 error:
@@ -1601,7 +1600,13 @@ int add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags)
 	return ret;
 }
 
-/* requires device_hotplug_lock, see add_memory_resource() */
+int add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags)
+{
+	return __add_memory_resource(nid, res, mhp_flags,
+				     mhp_get_default_online_type());
+}
+
+/* requires device_hotplug_lock, see __add_memory_resource() */
 int __add_memory(int nid, u64 start, u64 size, mhp_t mhp_flags)
 {
 	struct resource *res;
@@ -1629,29 +1634,24 @@ int add_memory(int nid, u64 start, u64 size, mhp_t mhp_flags)
 }
 EXPORT_SYMBOL_GPL(add_memory);
 
-/*
- * Add special, driver-managed memory to the system as system RAM. Such
- * memory is not exposed via the raw firmware-provided memmap as system
- * RAM, instead, it is detected and added by a driver - during cold boot,
- * after a reboot, and after kexec.
- *
- * Reasons why this memory should not be used for the initial memmap of a
- * kexec kernel or for placing kexec images:
- * - The booting kernel is in charge of determining how this memory will be
- *   used (e.g., use persistent memory as system RAM)
- * - Coordination with a hypervisor is required before this memory
- *   can be used (e.g., inaccessible parts).
+/**
+ * __add_memory_driver_managed - add driver-managed memory with explicit online_type
+ * @nid: NUMA node ID where the memory will be added
+ * @start: Start physical address of the memory range
+ * @size: Size of the memory range in bytes
+ * @resource_name: Resource name in format "System RAM ($DRIVER)"
+ * @mhp_flags: Memory hotplug flags
+ * @online_type: Online behavior (MMOP_ONLINE, MMOP_ONLINE_KERNEL,
+ *               MMOP_ONLINE_MOVABLE, or MMOP_OFFLINE)
  *
- * For this memory, no entries in /sys/firmware/memmap ("raw firmware-provided
- * memory map") are created. Also, the created memory resource is flagged
- * with IORESOURCE_SYSRAM_DRIVER_MANAGED, so in-kernel users can special-case
- * this memory as well (esp., not place kexec images onto it).
+ * Add driver-managed memory with explicit online_type specification.
+ * The resource_name must have the format "System RAM ($DRIVER)".
  *
- * The resource_name (visible via /proc/iomem) has to have the format
- * "System RAM ($DRIVER)".
+ * Return: 0 on success, negative error code on failure.
*/ -int add_memory_driver_managed(int nid, u64 start, u64 size, - const char *resource_name, mhp_t mhp_flags) +int __add_memory_driver_managed(int nid, u64 start, u64 size, + const char *resource_name, mhp_t mhp_flags, + int online_type) { struct resource *res; int rc; @@ -1661,6 +1661,9 @@ int add_memory_driver_managed(int nid, u64 start, u64 size, resource_name[strlen(resource_name) - 1] != ')') return -EINVAL; + if (online_type < 0 || online_type > MMOP_ONLINE_MOVABLE) + return -EINVAL; + lock_device_hotplug(); res = register_memory_resource(start, size, resource_name); @@ -1669,7 +1672,7 @@ int add_memory_driver_managed(int nid, u64 start, u64 size, goto out_unlock; } - rc = add_memory_resource(nid, res, mhp_flags); + rc = __add_memory_resource(nid, res, mhp_flags, online_type); if (rc < 0) release_memory_resource(res); @@ -1677,6 +1680,40 @@ int add_memory_driver_managed(int nid, u64 start, u64 size, unlock_device_hotplug(); return rc; } +EXPORT_SYMBOL_FOR_MODULES(__add_memory_driver_managed, "kmem"); + +/* + * Add special, driver-managed memory to the system as system RAM. Such + * memory is not exposed via the raw firmware-provided memmap as system + * RAM, instead, it is detected and added by a driver - during cold boot, + * after a reboot, and after kexec. + * + * Reasons why this memory should not be used for the initial memmap of a + * kexec kernel or for placing kexec images: + * - The booting kernel is in charge of determining how this memory will be + * used (e.g., use persistent memory as system RAM) + * - Coordination with a hypervisor is required before this memory + * can be used (e.g., inaccessible parts). + * + * For this memory, no entries in /sys/firmware/memmap ("raw firmware-provided + * memory map") are created. Also, the created memory resource is flagged + * with IORESOURCE_SYSRAM_DRIVER_MANAGED, so in-kernel users can special-case + * this memory as well (esp., not place kexec images onto it). 
+ * + * The resource_name (visible via /proc/iomem) has to have the format + * "System RAM ($DRIVER)". + * + * Memory will be onlined using the system default online type. + * + * Returns 0 on success, negative error code on failure. + */ +int add_memory_driver_managed(int nid, u64 start, u64 size, + const char *resource_name, mhp_t mhp_flags) +{ + return __add_memory_driver_managed(nid, start, size, resource_name, + mhp_flags, + mhp_get_default_online_type()); +} EXPORT_SYMBOL_GPL(add_memory_driver_managed); /* -- 2.52.0
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:04:35 -0500", "thread_id": "aYDi8bhxFnvMWl11@gourry-fedora-PF4VCD3F.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
Currently, CXL regions that create DAX devices have no mechanism to control select the hotplug online policy for kmem regions at region creation time. Users must either rely on a build-time default or manually configure each memory block after hotplug occurs. Additionally, there is no explicit way to choose between device_dax and dax_kmem modes at region creation time - regions default to kmem. This series addresses both issues by: 1. Plumbing an online_type parameter through the memory hotplug path, from mm/memory_hotplug through the DAX layer, enabling drivers to specify the desired policy (offline, online, online_movable). 2. Adding infrastructure for explicit dax driver selection (kmem vs device) when creating CXL DAX regions. 3. Introducing new CXL region drivers that provide a two-stage binding process with user-configurable policy between region creation and memory hotplug. The new drivers are: - cxl_devdax_region: Creates dax_regions that bind to device_dax driver - cxl_sysram_region: Creates sysram_region devices with hotplug policy - cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions The sysram_region device exposes an 'online_type' sysfs attribute allowing users to configure the memory online type before hotplug: echo region0 > cxl_sysram_region/bind echo online_movable > sysram_region0/online_type echo sysram_region0 > cxl_dax_kmem_region/bind This enables explicit control over both the dax driver mode and the memory hotplug policy for CXL memory regions. In the future, with DCD regions, this will also provide a policy step which dictates how extents will be surfaces and managed (e.g. if the dc region is bound to the sysram driver, it will surface as system memory, while the devdax driver will surface extents as new devdax). 
Gregory Price (9): mm/memory_hotplug: pass online_type to online_memory_block() via arg mm/memory_hotplug: add __add_memory_driver_managed() with online_type arg dax: plumb online_type from dax_kmem creators to hotplug drivers/cxl,dax: add dax driver mode selection for dax regions cxl/core/region: move pmem region driver logic into pmem_region cxl/core/region: move dax region device logic into dax_region.c cxl/core: add cxl_devdax_region driver for explicit userland region binding cxl/core: Add dax_kmem_region and sysram_region drivers Documentation/driver-api/cxl: add dax and sysram driver documentation Documentation/ABI/testing/sysfs-bus-cxl | 21 ++ .../driver-api/cxl/linux/cxl-driver.rst | 43 +++ .../driver-api/cxl/linux/dax-driver.rst | 29 ++ drivers/cxl/core/Makefile | 3 + drivers/cxl/core/core.h | 11 + drivers/cxl/core/dax_region.c | 179 ++++++++++ drivers/cxl/core/pmem_region.c | 191 +++++++++++ drivers/cxl/core/port.c | 2 + drivers/cxl/core/region.c | 321 ++---------------- drivers/cxl/core/sysram_region.c | 180 ++++++++++ drivers/cxl/cxl.h | 29 ++ drivers/dax/bus.c | 3 + drivers/dax/bus.h | 7 +- drivers/dax/cxl.c | 7 +- drivers/dax/dax-private.h | 2 + drivers/dax/hmem/hmem.c | 2 + drivers/dax/kmem.c | 13 +- drivers/dax/pmem.c | 2 + include/linux/dax.h | 5 + include/linux/memory_hotplug.h | 3 + mm/memory_hotplug.c | 95 ++++-- 21 files changed, 826 insertions(+), 322 deletions(-) create mode 100644 drivers/cxl/core/dax_region.c create mode 100644 drivers/cxl/core/pmem_region.c create mode 100644 drivers/cxl/core/sysram_region.c -- 2.52.0
There is no way for drivers leveraging dax_kmem to plumb through a preferred auto-online policy - the system default policy is forced. Add online_type field to DAX device creation path to allow drivers to specify an auto-online policy when using the kmem driver. Current callers initialize online_type to mhp_get_default_online_type() which resolves to the system default (memhp_default_online_type). No functional change to existing drivers. Cc:David Hildenbrand <david@kernel.org> Signed-off-by: Gregory Price <gourry@gourry.net> --- drivers/cxl/core/region.c | 2 ++ drivers/cxl/cxl.h | 1 + drivers/dax/bus.c | 3 +++ drivers/dax/bus.h | 1 + drivers/dax/cxl.c | 1 + drivers/dax/dax-private.h | 2 ++ drivers/dax/hmem/hmem.c | 2 ++ drivers/dax/kmem.c | 13 +++++++++++-- drivers/dax/pmem.c | 2 ++ 9 files changed, 25 insertions(+), 2 deletions(-) diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c index 5bd1213737fa..eef5d5fe3f95 100644 --- a/drivers/cxl/core/region.c +++ b/drivers/cxl/core/region.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0-only /* Copyright(c) 2022 Intel Corporation. All rights reserved. 
*/ #include <linux/memregion.h> +#include <linux/memory_hotplug.h> #include <linux/genalloc.h> #include <linux/debugfs.h> #include <linux/device.h> @@ -3459,6 +3460,7 @@ static int devm_cxl_add_dax_region(struct cxl_region *cxlr) if (IS_ERR(cxlr_dax)) return PTR_ERR(cxlr_dax); + cxlr_dax->online_type = mhp_get_default_online_type(); dev = &cxlr_dax->dev; rc = dev_set_name(dev, "dax_region%d", cxlr->id); if (rc) diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index ba17fa86d249..07d57d13f4c7 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -591,6 +591,7 @@ struct cxl_dax_region { struct device dev; struct cxl_region *cxlr; struct range hpa_range; + int online_type; /* MMOP_ value for kmem driver */ }; /** diff --git a/drivers/dax/bus.c b/drivers/dax/bus.c index fde29e0ad68b..121a6dd0afe7 100644 --- a/drivers/dax/bus.c +++ b/drivers/dax/bus.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright(c) 2017-2018 Intel Corporation. All rights reserved. */ #include <linux/memremap.h> +#include <linux/memory_hotplug.h> #include <linux/device.h> #include <linux/mutex.h> #include <linux/list.h> @@ -395,6 +396,7 @@ static ssize_t create_store(struct device *dev, struct device_attribute *attr, .size = 0, .id = -1, .memmap_on_memory = false, + .online_type = mhp_get_default_online_type(), }; struct dev_dax *dev_dax = __devm_create_dev_dax(&data); @@ -1494,6 +1496,7 @@ static struct dev_dax *__devm_create_dev_dax(struct dev_dax_data *data) ida_init(&dev_dax->ida); dev_dax->memmap_on_memory = data->memmap_on_memory; + dev_dax->online_type = data->online_type; inode = dax_inode(dax_dev); dev->devt = inode->i_rdev; diff --git a/drivers/dax/bus.h b/drivers/dax/bus.h index cbbf64443098..4ac92a4edfe7 100644 --- a/drivers/dax/bus.h +++ b/drivers/dax/bus.h @@ -24,6 +24,7 @@ struct dev_dax_data { resource_size_t size; int id; bool memmap_on_memory; + int online_type; /* MMOP_ value for kmem driver */ }; struct dev_dax *devm_create_dev_dax(struct dev_dax_data *data); 
diff --git a/drivers/dax/cxl.c b/drivers/dax/cxl.c index 13cd94d32ff7..856a0cd24f3b 100644 --- a/drivers/dax/cxl.c +++ b/drivers/dax/cxl.c @@ -27,6 +27,7 @@ static int cxl_dax_region_probe(struct device *dev) .id = -1, .size = range_len(&cxlr_dax->hpa_range), .memmap_on_memory = true, + .online_type = cxlr_dax->online_type, }; return PTR_ERR_OR_ZERO(devm_create_dev_dax(&data)); diff --git a/drivers/dax/dax-private.h b/drivers/dax/dax-private.h index c6ae27c982f4..9559718cc988 100644 --- a/drivers/dax/dax-private.h +++ b/drivers/dax/dax-private.h @@ -77,6 +77,7 @@ struct dev_dax_range { * @dev: device core * @pgmap: pgmap for memmap setup / lifetime (driver owned) * @memmap_on_memory: allow kmem to put the memmap in the memory + * @online_type: MMOP_* online type for memory hotplug * @nr_range: size of @ranges * @ranges: range tuples of memory used */ @@ -91,6 +92,7 @@ struct dev_dax { struct device dev; struct dev_pagemap *pgmap; bool memmap_on_memory; + int online_type; int nr_range; struct dev_dax_range *ranges; }; diff --git a/drivers/dax/hmem/hmem.c b/drivers/dax/hmem/hmem.c index c18451a37e4f..119914b08fd9 100644 --- a/drivers/dax/hmem/hmem.c +++ b/drivers/dax/hmem/hmem.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 #include <linux/platform_device.h> +#include <linux/memory_hotplug.h> #include <linux/memregion.h> #include <linux/module.h> #include <linux/dax.h> @@ -36,6 +37,7 @@ static int dax_hmem_probe(struct platform_device *pdev) .id = -1, .size = region_idle ? 
0 : range_len(&mri->range), .memmap_on_memory = false, + .online_type = mhp_get_default_online_type(), }; return PTR_ERR_OR_ZERO(devm_create_dev_dax(&data)); diff --git a/drivers/dax/kmem.c b/drivers/dax/kmem.c index c036e4d0b610..550dc605229e 100644 --- a/drivers/dax/kmem.c +++ b/drivers/dax/kmem.c @@ -16,6 +16,11 @@ #include "dax-private.h" #include "bus.h" +/* Internal function exported only to kmem module */ +extern int __add_memory_driver_managed(int nid, u64 start, u64 size, + const char *resource_name, + mhp_t mhp_flags, int online_type); + /* * Default abstract distance assigned to the NUMA node onlined * by DAX/kmem if the low level platform driver didn't initialize @@ -72,6 +77,7 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax) struct dax_kmem_data *data; struct memory_dev_type *mtype; int i, rc, mapped = 0; + int online_type; mhp_t mhp_flags; int numa_node; int adist = MEMTIER_DEFAULT_DAX_ADISTANCE; @@ -134,6 +140,8 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax) goto err_reg_mgid; data->mgid = rc; + online_type = dev_dax->online_type; + for (i = 0; i < dev_dax->nr_range; i++) { struct resource *res; struct range range; @@ -174,8 +182,9 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax) * Ensure that future kexec'd kernels will not treat * this as RAM automatically. */ - rc = add_memory_driver_managed(data->mgid, range.start, - range_len(&range), kmem_name, mhp_flags); + rc = __add_memory_driver_managed(data->mgid, range.start, + range_len(&range), kmem_name, mhp_flags, + online_type); if (rc) { dev_warn(dev, "mapping%d: %#llx-%#llx memory add failed\n", diff --git a/drivers/dax/pmem.c b/drivers/dax/pmem.c index bee93066a849..a5925146b09f 100644 --- a/drivers/dax/pmem.c +++ b/drivers/dax/pmem.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright(c) 2016 - 2018 Intel Corporation. All rights reserved. 
*/ +#include <linux/memory_hotplug.h> #include <linux/memremap.h> #include <linux/module.h> #include "../nvdimm/pfn.h" @@ -63,6 +64,7 @@ static struct dev_dax *__dax_pmem_probe(struct device *dev) .pgmap = &pgmap, .size = range_len(&range), .memmap_on_memory = false, + .online_type = mhp_get_default_online_type(), }; return devm_create_dev_dax(&data); -- 2.52.0
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:04:36 -0500", "thread_id": "aYDi8bhxFnvMWl11@gourry-fedora-PF4VCD3F.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
Currently, CXL regions that create DAX devices have no mechanism to control select the hotplug online policy for kmem regions at region creation time. Users must either rely on a build-time default or manually configure each memory block after hotplug occurs. Additionally, there is no explicit way to choose between device_dax and dax_kmem modes at region creation time - regions default to kmem. This series addresses both issues by: 1. Plumbing an online_type parameter through the memory hotplug path, from mm/memory_hotplug through the DAX layer, enabling drivers to specify the desired policy (offline, online, online_movable). 2. Adding infrastructure for explicit dax driver selection (kmem vs device) when creating CXL DAX regions. 3. Introducing new CXL region drivers that provide a two-stage binding process with user-configurable policy between region creation and memory hotplug. The new drivers are: - cxl_devdax_region: Creates dax_regions that bind to device_dax driver - cxl_sysram_region: Creates sysram_region devices with hotplug policy - cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions The sysram_region device exposes an 'online_type' sysfs attribute allowing users to configure the memory online type before hotplug: echo region0 > cxl_sysram_region/bind echo online_movable > sysram_region0/online_type echo sysram_region0 > cxl_dax_kmem_region/bind This enables explicit control over both the dax driver mode and the memory hotplug policy for CXL memory regions. In the future, with DCD regions, this will also provide a policy step which dictates how extents will be surfaces and managed (e.g. if the dc region is bound to the sysram driver, it will surface as system memory, while the devdax driver will surface extents as new devdax). 
Gregory Price (9): mm/memory_hotplug: pass online_type to online_memory_block() via arg mm/memory_hotplug: add __add_memory_driver_managed() with online_type arg dax: plumb online_type from dax_kmem creators to hotplug drivers/cxl,dax: add dax driver mode selection for dax regions cxl/core/region: move pmem region driver logic into pmem_region cxl/core/region: move dax region device logic into dax_region.c cxl/core: add cxl_devdax_region driver for explicit userland region binding cxl/core: Add dax_kmem_region and sysram_region drivers Documentation/driver-api/cxl: add dax and sysram driver documentation Documentation/ABI/testing/sysfs-bus-cxl | 21 ++ .../driver-api/cxl/linux/cxl-driver.rst | 43 +++ .../driver-api/cxl/linux/dax-driver.rst | 29 ++ drivers/cxl/core/Makefile | 3 + drivers/cxl/core/core.h | 11 + drivers/cxl/core/dax_region.c | 179 ++++++++++ drivers/cxl/core/pmem_region.c | 191 +++++++++++ drivers/cxl/core/port.c | 2 + drivers/cxl/core/region.c | 321 ++---------------- drivers/cxl/core/sysram_region.c | 180 ++++++++++ drivers/cxl/cxl.h | 29 ++ drivers/dax/bus.c | 3 + drivers/dax/bus.h | 7 +- drivers/dax/cxl.c | 7 +- drivers/dax/dax-private.h | 2 + drivers/dax/hmem/hmem.c | 2 + drivers/dax/kmem.c | 13 +- drivers/dax/pmem.c | 2 + include/linux/dax.h | 5 + include/linux/memory_hotplug.h | 3 + mm/memory_hotplug.c | 95 ++++-- 21 files changed, 826 insertions(+), 322 deletions(-) create mode 100644 drivers/cxl/core/dax_region.c create mode 100644 drivers/cxl/core/pmem_region.c create mode 100644 drivers/cxl/core/sysram_region.c -- 2.52.0
Move the pmem region driver logic from region.c into pmem_region.c. No functional changes. Signed-off-by: Gregory Price <gourry@gourry.net> --- drivers/cxl/core/Makefile | 1 + drivers/cxl/core/core.h | 1 + drivers/cxl/core/pmem_region.c | 191 +++++++++++++++++++++++++++++++++ drivers/cxl/core/region.c | 184 ------------------------------- 4 files changed, 193 insertions(+), 184 deletions(-) create mode 100644 drivers/cxl/core/pmem_region.c diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile index 5ad8fef210b5..23269c81fd44 100644 --- a/drivers/cxl/core/Makefile +++ b/drivers/cxl/core/Makefile @@ -17,6 +17,7 @@ cxl_core-y += cdat.o cxl_core-y += ras.o cxl_core-$(CONFIG_TRACING) += trace.o cxl_core-$(CONFIG_CXL_REGION) += region.o +cxl_core-$(CONFIG_CXL_REGION) += pmem_region.o cxl_core-$(CONFIG_CXL_MCE) += mce.o cxl_core-$(CONFIG_CXL_FEATURES) += features.o cxl_core-$(CONFIG_CXL_EDAC_MEM_FEATURES) += edac.o diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h index dd987ef2def5..26991de12d76 100644 --- a/drivers/cxl/core/core.h +++ b/drivers/cxl/core/core.h @@ -43,6 +43,7 @@ int cxl_get_poison_by_endpoint(struct cxl_port *port); struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa); u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, const struct cxl_memdev *cxlmd, u64 dpa); +int devm_cxl_add_pmem_region(struct cxl_region *cxlr); #else static inline u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, diff --git a/drivers/cxl/core/pmem_region.c b/drivers/cxl/core/pmem_region.c new file mode 100644 index 000000000000..81b66e548bb5 --- /dev/null +++ b/drivers/cxl/core/pmem_region.c @@ -0,0 +1,191 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright(c) 2022 Intel Corporation. All rights reserved. 
*/ +#include <linux/device.h> +#include <linux/slab.h> +#include <cxlmem.h> +#include <cxl.h> +#include "core.h" + +static void cxl_pmem_region_release(struct device *dev) +{ + struct cxl_pmem_region *cxlr_pmem = to_cxl_pmem_region(dev); + int i; + + for (i = 0; i < cxlr_pmem->nr_mappings; i++) { + struct cxl_memdev *cxlmd = cxlr_pmem->mapping[i].cxlmd; + + put_device(&cxlmd->dev); + } + + kfree(cxlr_pmem); +} + +static const struct attribute_group *cxl_pmem_region_attribute_groups[] = { + &cxl_base_attribute_group, + NULL, +}; + +const struct device_type cxl_pmem_region_type = { + .name = "cxl_pmem_region", + .release = cxl_pmem_region_release, + .groups = cxl_pmem_region_attribute_groups, +}; +bool is_cxl_pmem_region(struct device *dev) +{ + return dev->type == &cxl_pmem_region_type; +} +EXPORT_SYMBOL_NS_GPL(is_cxl_pmem_region, "CXL"); + +struct cxl_pmem_region *to_cxl_pmem_region(struct device *dev) +{ + if (dev_WARN_ONCE(dev, !is_cxl_pmem_region(dev), + "not a cxl_pmem_region device\n")) + return NULL; + return container_of(dev, struct cxl_pmem_region, dev); +} +EXPORT_SYMBOL_NS_GPL(to_cxl_pmem_region, "CXL"); +static struct lock_class_key cxl_pmem_region_key; + +static int cxl_pmem_region_alloc(struct cxl_region *cxlr) +{ + struct cxl_region_params *p = &cxlr->params; + struct cxl_nvdimm_bridge *cxl_nvb; + struct device *dev; + int i; + + guard(rwsem_read)(&cxl_rwsem.region); + if (p->state != CXL_CONFIG_COMMIT) + return -ENXIO; + + struct cxl_pmem_region *cxlr_pmem __free(kfree) = + kzalloc(struct_size(cxlr_pmem, mapping, p->nr_targets), GFP_KERNEL); + if (!cxlr_pmem) + return -ENOMEM; + + cxlr_pmem->hpa_range.start = p->res->start; + cxlr_pmem->hpa_range.end = p->res->end; + + /* Snapshot the region configuration underneath the cxl_rwsem.region */ + cxlr_pmem->nr_mappings = p->nr_targets; + for (i = 0; i < p->nr_targets; i++) { + struct cxl_endpoint_decoder *cxled = p->targets[i]; + struct cxl_memdev *cxlmd = cxled_to_memdev(cxled); + struct 
cxl_pmem_region_mapping *m = &cxlr_pmem->mapping[i]; + + /* + * Regions never span CXL root devices, so by definition the + * bridge for one device is the same for all. + */ + if (i == 0) { + cxl_nvb = cxl_find_nvdimm_bridge(cxlmd->endpoint); + if (!cxl_nvb) + return -ENODEV; + cxlr->cxl_nvb = cxl_nvb; + } + m->cxlmd = cxlmd; + get_device(&cxlmd->dev); + m->start = cxled->dpa_res->start; + m->size = resource_size(cxled->dpa_res); + m->position = i; + } + + dev = &cxlr_pmem->dev; + device_initialize(dev); + lockdep_set_class(&dev->mutex, &cxl_pmem_region_key); + device_set_pm_not_required(dev); + dev->parent = &cxlr->dev; + dev->bus = &cxl_bus_type; + dev->type = &cxl_pmem_region_type; + cxlr_pmem->cxlr = cxlr; + cxlr->cxlr_pmem = no_free_ptr(cxlr_pmem); + + return 0; +} + +static void cxlr_pmem_unregister(void *_cxlr_pmem) +{ + struct cxl_pmem_region *cxlr_pmem = _cxlr_pmem; + struct cxl_region *cxlr = cxlr_pmem->cxlr; + struct cxl_nvdimm_bridge *cxl_nvb = cxlr->cxl_nvb; + + /* + * Either the bridge is in ->remove() context under the device_lock(), + * or cxlr_release_nvdimm() is cancelling the bridge's release action + * for @cxlr_pmem and doing it itself (while manually holding the bridge + * lock). + */ + device_lock_assert(&cxl_nvb->dev); + cxlr->cxlr_pmem = NULL; + cxlr_pmem->cxlr = NULL; + device_unregister(&cxlr_pmem->dev); +} + +static void cxlr_release_nvdimm(void *_cxlr) +{ + struct cxl_region *cxlr = _cxlr; + struct cxl_nvdimm_bridge *cxl_nvb = cxlr->cxl_nvb; + + scoped_guard(device, &cxl_nvb->dev) { + if (cxlr->cxlr_pmem) + devm_release_action(&cxl_nvb->dev, cxlr_pmem_unregister, + cxlr->cxlr_pmem); + } + cxlr->cxl_nvb = NULL; + put_device(&cxl_nvb->dev); +} + +/** + * devm_cxl_add_pmem_region() - add a cxl_region-to-nd_region bridge + * @cxlr: parent CXL region for this pmem region bridge device + * + * Return: 0 on success negative error code on failure. 
+ */ +int devm_cxl_add_pmem_region(struct cxl_region *cxlr) +{ + struct cxl_pmem_region *cxlr_pmem; + struct cxl_nvdimm_bridge *cxl_nvb; + struct device *dev; + int rc; + + rc = cxl_pmem_region_alloc(cxlr); + if (rc) + return rc; + cxlr_pmem = cxlr->cxlr_pmem; + cxl_nvb = cxlr->cxl_nvb; + + dev = &cxlr_pmem->dev; + rc = dev_set_name(dev, "pmem_region%d", cxlr->id); + if (rc) + goto err; + + rc = device_add(dev); + if (rc) + goto err; + + dev_dbg(&cxlr->dev, "%s: register %s\n", dev_name(dev->parent), + dev_name(dev)); + + scoped_guard(device, &cxl_nvb->dev) { + if (cxl_nvb->dev.driver) + rc = devm_add_action_or_reset(&cxl_nvb->dev, + cxlr_pmem_unregister, + cxlr_pmem); + else + rc = -ENXIO; + } + + if (rc) + goto err_bridge; + + /* @cxlr carries a reference on @cxl_nvb until cxlr_release_nvdimm */ + return devm_add_action_or_reset(&cxlr->dev, cxlr_release_nvdimm, cxlr); + +err: + put_device(dev); +err_bridge: + put_device(&cxl_nvb->dev); + cxlr->cxl_nvb = NULL; + return rc; +} + + diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c index e4097c464ed3..fc56f8f03805 100644 --- a/drivers/cxl/core/region.c +++ b/drivers/cxl/core/region.c @@ -2747,46 +2747,6 @@ static ssize_t delete_region_store(struct device *dev, } DEVICE_ATTR_WO(delete_region); -static void cxl_pmem_region_release(struct device *dev) -{ - struct cxl_pmem_region *cxlr_pmem = to_cxl_pmem_region(dev); - int i; - - for (i = 0; i < cxlr_pmem->nr_mappings; i++) { - struct cxl_memdev *cxlmd = cxlr_pmem->mapping[i].cxlmd; - - put_device(&cxlmd->dev); - } - - kfree(cxlr_pmem); -} - -static const struct attribute_group *cxl_pmem_region_attribute_groups[] = { - &cxl_base_attribute_group, - NULL, -}; - -const struct device_type cxl_pmem_region_type = { - .name = "cxl_pmem_region", - .release = cxl_pmem_region_release, - .groups = cxl_pmem_region_attribute_groups, -}; - -bool is_cxl_pmem_region(struct device *dev) -{ - return dev->type == &cxl_pmem_region_type; -} 
-EXPORT_SYMBOL_NS_GPL(is_cxl_pmem_region, "CXL"); - -struct cxl_pmem_region *to_cxl_pmem_region(struct device *dev) -{ - if (dev_WARN_ONCE(dev, !is_cxl_pmem_region(dev), - "not a cxl_pmem_region device\n")) - return NULL; - return container_of(dev, struct cxl_pmem_region, dev); -} -EXPORT_SYMBOL_NS_GPL(to_cxl_pmem_region, "CXL"); - struct cxl_poison_context { struct cxl_port *port; int part; @@ -3236,64 +3196,6 @@ static int region_offset_to_dpa_result(struct cxl_region *cxlr, u64 offset, return -ENXIO; } -static struct lock_class_key cxl_pmem_region_key; - -static int cxl_pmem_region_alloc(struct cxl_region *cxlr) -{ - struct cxl_region_params *p = &cxlr->params; - struct cxl_nvdimm_bridge *cxl_nvb; - struct device *dev; - int i; - - guard(rwsem_read)(&cxl_rwsem.region); - if (p->state != CXL_CONFIG_COMMIT) - return -ENXIO; - - struct cxl_pmem_region *cxlr_pmem __free(kfree) = - kzalloc(struct_size(cxlr_pmem, mapping, p->nr_targets), GFP_KERNEL); - if (!cxlr_pmem) - return -ENOMEM; - - cxlr_pmem->hpa_range.start = p->res->start; - cxlr_pmem->hpa_range.end = p->res->end; - - /* Snapshot the region configuration underneath the cxl_rwsem.region */ - cxlr_pmem->nr_mappings = p->nr_targets; - for (i = 0; i < p->nr_targets; i++) { - struct cxl_endpoint_decoder *cxled = p->targets[i]; - struct cxl_memdev *cxlmd = cxled_to_memdev(cxled); - struct cxl_pmem_region_mapping *m = &cxlr_pmem->mapping[i]; - - /* - * Regions never span CXL root devices, so by definition the - * bridge for one device is the same for all. 
- */ - if (i == 0) { - cxl_nvb = cxl_find_nvdimm_bridge(cxlmd->endpoint); - if (!cxl_nvb) - return -ENODEV; - cxlr->cxl_nvb = cxl_nvb; - } - m->cxlmd = cxlmd; - get_device(&cxlmd->dev); - m->start = cxled->dpa_res->start; - m->size = resource_size(cxled->dpa_res); - m->position = i; - } - - dev = &cxlr_pmem->dev; - device_initialize(dev); - lockdep_set_class(&dev->mutex, &cxl_pmem_region_key); - device_set_pm_not_required(dev); - dev->parent = &cxlr->dev; - dev->bus = &cxl_bus_type; - dev->type = &cxl_pmem_region_type; - cxlr_pmem->cxlr = cxlr; - cxlr->cxlr_pmem = no_free_ptr(cxlr_pmem); - - return 0; -} - static void cxl_dax_region_release(struct device *dev) { struct cxl_dax_region *cxlr_dax = to_cxl_dax_region(dev); @@ -3357,92 +3259,6 @@ static struct cxl_dax_region *cxl_dax_region_alloc(struct cxl_region *cxlr) return cxlr_dax; } -static void cxlr_pmem_unregister(void *_cxlr_pmem) -{ - struct cxl_pmem_region *cxlr_pmem = _cxlr_pmem; - struct cxl_region *cxlr = cxlr_pmem->cxlr; - struct cxl_nvdimm_bridge *cxl_nvb = cxlr->cxl_nvb; - - /* - * Either the bridge is in ->remove() context under the device_lock(), - * or cxlr_release_nvdimm() is cancelling the bridge's release action - * for @cxlr_pmem and doing it itself (while manually holding the bridge - * lock). - */ - device_lock_assert(&cxl_nvb->dev); - cxlr->cxlr_pmem = NULL; - cxlr_pmem->cxlr = NULL; - device_unregister(&cxlr_pmem->dev); -} - -static void cxlr_release_nvdimm(void *_cxlr) -{ - struct cxl_region *cxlr = _cxlr; - struct cxl_nvdimm_bridge *cxl_nvb = cxlr->cxl_nvb; - - scoped_guard(device, &cxl_nvb->dev) { - if (cxlr->cxlr_pmem) - devm_release_action(&cxl_nvb->dev, cxlr_pmem_unregister, - cxlr->cxlr_pmem); - } - cxlr->cxl_nvb = NULL; - put_device(&cxl_nvb->dev); -} - -/** - * devm_cxl_add_pmem_region() - add a cxl_region-to-nd_region bridge - * @cxlr: parent CXL region for this pmem region bridge device - * - * Return: 0 on success negative error code on failure. 
- */ -static int devm_cxl_add_pmem_region(struct cxl_region *cxlr) -{ - struct cxl_pmem_region *cxlr_pmem; - struct cxl_nvdimm_bridge *cxl_nvb; - struct device *dev; - int rc; - - rc = cxl_pmem_region_alloc(cxlr); - if (rc) - return rc; - cxlr_pmem = cxlr->cxlr_pmem; - cxl_nvb = cxlr->cxl_nvb; - - dev = &cxlr_pmem->dev; - rc = dev_set_name(dev, "pmem_region%d", cxlr->id); - if (rc) - goto err; - - rc = device_add(dev); - if (rc) - goto err; - - dev_dbg(&cxlr->dev, "%s: register %s\n", dev_name(dev->parent), - dev_name(dev)); - - scoped_guard(device, &cxl_nvb->dev) { - if (cxl_nvb->dev.driver) - rc = devm_add_action_or_reset(&cxl_nvb->dev, - cxlr_pmem_unregister, - cxlr_pmem); - else - rc = -ENXIO; - } - - if (rc) - goto err_bridge; - - /* @cxlr carries a reference on @cxl_nvb until cxlr_release_nvdimm */ - return devm_add_action_or_reset(&cxlr->dev, cxlr_release_nvdimm, cxlr); - -err: - put_device(dev); -err_bridge: - put_device(&cxl_nvb->dev); - cxlr->cxl_nvb = NULL; - return rc; -} - static void cxlr_dax_unregister(void *_cxlr_dax) { struct cxl_dax_region *cxlr_dax = _cxlr_dax; -- 2.52.0
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:04:38 -0500", "thread_id": "aYDi8bhxFnvMWl11@gourry-fedora-PF4VCD3F.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
Currently, CXL regions that create DAX devices have no mechanism to select the hotplug online policy for kmem regions at region creation time. Users must either rely on a build-time default or manually configure each memory block after hotplug occurs. Additionally, there is no explicit way to choose between device_dax and dax_kmem modes at region creation time - regions default to kmem.

This series addresses both issues by:

1. Plumbing an online_type parameter through the memory hotplug path, from mm/memory_hotplug through the DAX layer, enabling drivers to specify the desired policy (offline, online, online_movable).

2. Adding infrastructure for explicit dax driver selection (kmem vs device) when creating CXL DAX regions.

3. Introducing new CXL region drivers that provide a two-stage binding process with user-configurable policy between region creation and memory hotplug.

The new drivers are:
- cxl_devdax_region: Creates dax_regions that bind to device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions

The sysram_region device exposes an 'online_type' sysfs attribute allowing users to configure the memory online type before hotplug:

  echo region0 > cxl_sysram_region/bind
  echo online_movable > sysram_region0/online_type
  echo sysram_region0 > cxl_dax_kmem_region/bind

This enables explicit control over both the dax driver mode and the memory hotplug policy for CXL memory regions.

In the future, with DCD regions, this will also provide a policy step which dictates how extents will be surfaced and managed (e.g. if the dc region is bound to the sysram driver, extents will surface as system memory, while the devdax driver will surface extents as new devdax devices).
Gregory Price (9):
  mm/memory_hotplug: pass online_type to online_memory_block() via arg
  mm/memory_hotplug: add __add_memory_driver_managed() with online_type arg
  dax: plumb online_type from dax_kmem creators to hotplug
  drivers/cxl,dax: add dax driver mode selection for dax regions
  cxl/core/region: move pmem region driver logic into pmem_region
  cxl/core/region: move dax region device logic into dax_region.c
  cxl/core: add cxl_devdax_region driver for explicit userland region binding
  cxl/core: Add dax_kmem_region and sysram_region drivers
  Documentation/driver-api/cxl: add dax and sysram driver documentation

 Documentation/ABI/testing/sysfs-bus-cxl       |  21 ++
 .../driver-api/cxl/linux/cxl-driver.rst       |  43 +++
 .../driver-api/cxl/linux/dax-driver.rst       |  29 ++
 drivers/cxl/core/Makefile                     |   3 +
 drivers/cxl/core/core.h                       |  11 +
 drivers/cxl/core/dax_region.c                 | 179 ++++++++++
 drivers/cxl/core/pmem_region.c                | 191 +++++++++++
 drivers/cxl/core/port.c                       |   2 +
 drivers/cxl/core/region.c                     | 321 ++----------------
 drivers/cxl/core/sysram_region.c              | 180 ++++++++++
 drivers/cxl/cxl.h                             |  29 ++
 drivers/dax/bus.c                             |   3 +
 drivers/dax/bus.h                             |   7 +-
 drivers/dax/cxl.c                             |   7 +-
 drivers/dax/dax-private.h                     |   2 +
 drivers/dax/hmem/hmem.c                       |   2 +
 drivers/dax/kmem.c                            |  13 +-
 drivers/dax/pmem.c                            |   2 +
 include/linux/dax.h                           |   5 +
 include/linux/memory_hotplug.h                |   3 +
 mm/memory_hotplug.c                           |  95 ++++--
 21 files changed, 826 insertions(+), 322 deletions(-)
 create mode 100644 drivers/cxl/core/dax_region.c
 create mode 100644 drivers/cxl/core/pmem_region.c
 create mode 100644 drivers/cxl/core/sysram_region.c

-- 
2.52.0
Move the CXL DAX region device infrastructure from region.c into a new dax_region.c file. No functional changes. Signed-off-by: Gregory Price <gourry@gourry.net> --- drivers/cxl/core/Makefile | 1 + drivers/cxl/core/core.h | 1 + drivers/cxl/core/dax_region.c | 113 ++++++++++++++++++++++++++++++++++ drivers/cxl/core/region.c | 102 ------------------------------ 4 files changed, 115 insertions(+), 102 deletions(-) create mode 100644 drivers/cxl/core/dax_region.c diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile index 23269c81fd44..36f284d7c500 100644 --- a/drivers/cxl/core/Makefile +++ b/drivers/cxl/core/Makefile @@ -17,6 +17,7 @@ cxl_core-y += cdat.o cxl_core-y += ras.o cxl_core-$(CONFIG_TRACING) += trace.o cxl_core-$(CONFIG_CXL_REGION) += region.o +cxl_core-$(CONFIG_CXL_REGION) += dax_region.o cxl_core-$(CONFIG_CXL_REGION) += pmem_region.o cxl_core-$(CONFIG_CXL_MCE) += mce.o cxl_core-$(CONFIG_CXL_FEATURES) += features.o diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h index 26991de12d76..217dd708a2a6 100644 --- a/drivers/cxl/core/core.h +++ b/drivers/cxl/core/core.h @@ -43,6 +43,7 @@ int cxl_get_poison_by_endpoint(struct cxl_port *port); struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa); u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, const struct cxl_memdev *cxlmd, u64 dpa); +int devm_cxl_add_dax_region(struct cxl_region *cxlr, enum dax_driver_type); int devm_cxl_add_pmem_region(struct cxl_region *cxlr); #else diff --git a/drivers/cxl/core/dax_region.c b/drivers/cxl/core/dax_region.c new file mode 100644 index 000000000000..0602db5f7248 --- /dev/null +++ b/drivers/cxl/core/dax_region.c @@ -0,0 +1,113 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright(c) 2022 Intel Corporation. All rights reserved. + * Copyright(c) 2026 Meta Technologies Inc. All rights reserved. 
+ */ +#include <linux/memory_hotplug.h> +#include <linux/device.h> +#include <linux/slab.h> +#include <cxlmem.h> +#include <cxl.h> +#include "core.h" + +static void cxl_dax_region_release(struct device *dev) +{ + struct cxl_dax_region *cxlr_dax = to_cxl_dax_region(dev); + + kfree(cxlr_dax); +} + +static const struct attribute_group *cxl_dax_region_attribute_groups[] = { + &cxl_base_attribute_group, + NULL, +}; + +const struct device_type cxl_dax_region_type = { + .name = "cxl_dax_region", + .release = cxl_dax_region_release, + .groups = cxl_dax_region_attribute_groups, +}; + +static bool is_cxl_dax_region(struct device *dev) +{ + return dev->type == &cxl_dax_region_type; +} + +struct cxl_dax_region *to_cxl_dax_region(struct device *dev) +{ + if (dev_WARN_ONCE(dev, !is_cxl_dax_region(dev), + "not a cxl_dax_region device\n")) + return NULL; + return container_of(dev, struct cxl_dax_region, dev); +} +EXPORT_SYMBOL_NS_GPL(to_cxl_dax_region, "CXL"); + +static struct lock_class_key cxl_dax_region_key; + +static struct cxl_dax_region *cxl_dax_region_alloc(struct cxl_region *cxlr) +{ + struct cxl_region_params *p = &cxlr->params; + struct cxl_dax_region *cxlr_dax; + struct device *dev; + + guard(rwsem_read)(&cxl_rwsem.region); + if (p->state != CXL_CONFIG_COMMIT) + return ERR_PTR(-ENXIO); + + cxlr_dax = kzalloc(sizeof(*cxlr_dax), GFP_KERNEL); + if (!cxlr_dax) + return ERR_PTR(-ENOMEM); + + cxlr_dax->hpa_range.start = p->res->start; + cxlr_dax->hpa_range.end = p->res->end; + + dev = &cxlr_dax->dev; + cxlr_dax->cxlr = cxlr; + device_initialize(dev); + lockdep_set_class(&dev->mutex, &cxl_dax_region_key); + device_set_pm_not_required(dev); + dev->parent = &cxlr->dev; + dev->bus = &cxl_bus_type; + dev->type = &cxl_dax_region_type; + + return cxlr_dax; +} + +static void cxlr_dax_unregister(void *_cxlr_dax) +{ + struct cxl_dax_region *cxlr_dax = _cxlr_dax; + + device_unregister(&cxlr_dax->dev); +} + +int devm_cxl_add_dax_region(struct cxl_region *cxlr, + enum dax_driver_type 
dax_driver) +{ + struct cxl_dax_region *cxlr_dax; + struct device *dev; + int rc; + + cxlr_dax = cxl_dax_region_alloc(cxlr); + if (IS_ERR(cxlr_dax)) + return PTR_ERR(cxlr_dax); + + cxlr_dax->online_type = mhp_get_default_online_type(); + cxlr_dax->dax_driver = dax_driver; + dev = &cxlr_dax->dev; + rc = dev_set_name(dev, "dax_region%d", cxlr->id); + if (rc) + goto err; + + rc = device_add(dev); + if (rc) + goto err; + + dev_dbg(&cxlr->dev, "%s: register %s\n", dev_name(dev->parent), + dev_name(dev)); + + return devm_add_action_or_reset(&cxlr->dev, cxlr_dax_unregister, + cxlr_dax); +err: + put_device(dev); + return rc; +} diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c index fc56f8f03805..61ec939c1462 100644 --- a/drivers/cxl/core/region.c +++ b/drivers/cxl/core/region.c @@ -3196,108 +3196,6 @@ static int region_offset_to_dpa_result(struct cxl_region *cxlr, u64 offset, return -ENXIO; } -static void cxl_dax_region_release(struct device *dev) -{ - struct cxl_dax_region *cxlr_dax = to_cxl_dax_region(dev); - - kfree(cxlr_dax); -} - -static const struct attribute_group *cxl_dax_region_attribute_groups[] = { - &cxl_base_attribute_group, - NULL, -}; - -const struct device_type cxl_dax_region_type = { - .name = "cxl_dax_region", - .release = cxl_dax_region_release, - .groups = cxl_dax_region_attribute_groups, -}; - -static bool is_cxl_dax_region(struct device *dev) -{ - return dev->type == &cxl_dax_region_type; -} - -struct cxl_dax_region *to_cxl_dax_region(struct device *dev) -{ - if (dev_WARN_ONCE(dev, !is_cxl_dax_region(dev), - "not a cxl_dax_region device\n")) - return NULL; - return container_of(dev, struct cxl_dax_region, dev); -} -EXPORT_SYMBOL_NS_GPL(to_cxl_dax_region, "CXL"); - -static struct lock_class_key cxl_dax_region_key; - -static struct cxl_dax_region *cxl_dax_region_alloc(struct cxl_region *cxlr) -{ - struct cxl_region_params *p = &cxlr->params; - struct cxl_dax_region *cxlr_dax; - struct device *dev; - - 
guard(rwsem_read)(&cxl_rwsem.region); - if (p->state != CXL_CONFIG_COMMIT) - return ERR_PTR(-ENXIO); - - cxlr_dax = kzalloc(sizeof(*cxlr_dax), GFP_KERNEL); - if (!cxlr_dax) - return ERR_PTR(-ENOMEM); - - cxlr_dax->hpa_range.start = p->res->start; - cxlr_dax->hpa_range.end = p->res->end; - - dev = &cxlr_dax->dev; - cxlr_dax->cxlr = cxlr; - device_initialize(dev); - lockdep_set_class(&dev->mutex, &cxl_dax_region_key); - device_set_pm_not_required(dev); - dev->parent = &cxlr->dev; - dev->bus = &cxl_bus_type; - dev->type = &cxl_dax_region_type; - - return cxlr_dax; -} - -static void cxlr_dax_unregister(void *_cxlr_dax) -{ - struct cxl_dax_region *cxlr_dax = _cxlr_dax; - - device_unregister(&cxlr_dax->dev); -} - -static int devm_cxl_add_dax_region(struct cxl_region *cxlr, - enum dax_driver_type dax_driver) -{ - struct cxl_dax_region *cxlr_dax; - struct device *dev; - int rc; - - cxlr_dax = cxl_dax_region_alloc(cxlr); - if (IS_ERR(cxlr_dax)) - return PTR_ERR(cxlr_dax); - - cxlr_dax->online_type = mhp_get_default_online_type(); - cxlr_dax->dax_driver = dax_driver; - dev = &cxlr_dax->dev; - rc = dev_set_name(dev, "dax_region%d", cxlr->id); - if (rc) - goto err; - - rc = device_add(dev); - if (rc) - goto err; - - dev_dbg(&cxlr->dev, "%s: register %s\n", dev_name(dev->parent), - dev_name(dev)); - - return devm_add_action_or_reset(&cxlr->dev, cxlr_dax_unregister, - cxlr_dax); -err: - put_device(dev); - return rc; -} - static int match_decoder_by_range(struct device *dev, const void *data) { const struct range *r1, *r2 = data; -- 2.52.0
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:04:39 -0500", "thread_id": "aYDi8bhxFnvMWl11@gourry-fedora-PF4VCD3F.mbox.gz" }
Add a new cxl_devdax_region driver that probes CXL regions in device dax mode and creates dax_region devices. This allows explicit binding to the device_dax dax driver instead of the kmem driver. Exports to_cxl_region() to core.h so it can be used by the driver. Signed-off-by: Gregory Price <gourry@gourry.net> --- drivers/cxl/core/core.h | 2 ++ drivers/cxl/core/dax_region.c | 16 ++++++++++++++++ drivers/cxl/core/region.c | 21 +++++++++++++++++---- drivers/cxl/cxl.h | 1 + 4 files changed, 36 insertions(+), 4 deletions(-) diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h index 217dd708a2a6..ea4df8abc2ad 100644 --- a/drivers/cxl/core/core.h +++ b/drivers/cxl/core/core.h @@ -46,6 +46,8 @@ u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, const struct cxl_memdev *cxlmd, int devm_cxl_add_dax_region(struct cxl_region *cxlr, enum dax_driver_type); int devm_cxl_add_pmem_region(struct cxl_region *cxlr); +extern struct cxl_driver cxl_devdax_region_driver; + #else static inline u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, const struct cxl_memdev *cxlmd, u64 dpa) diff --git a/drivers/cxl/core/dax_region.c b/drivers/cxl/core/dax_region.c index 0602db5f7248..391d51e5ec37 100644 --- a/drivers/cxl/core/dax_region.c +++ b/drivers/cxl/core/dax_region.c @@ -111,3 +111,19 @@ int devm_cxl_add_dax_region(struct cxl_region *cxlr, put_device(dev); return rc; } + +static int cxl_devdax_region_driver_probe(struct device *dev) +{ + struct cxl_region *cxlr = to_cxl_region(dev); + + if (cxlr->mode != CXL_PARTMODE_RAM) + return -ENODEV; + + return devm_cxl_add_dax_region(cxlr, DAXDRV_DEVICE_TYPE); +} + +struct cxl_driver cxl_devdax_region_driver = { + .name = "cxl_devdax_region", + .probe = cxl_devdax_region_driver_probe, + .id = CXL_DEVICE_REGION, +}; diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c index 61ec939c1462..6200ca1cc2dd 100644 --- a/drivers/cxl/core/region.c +++ b/drivers/cxl/core/region.c @@ -39,8 +39,6 @@ */ static nodemask_t nodemask_region_seen = 
NODE_MASK_NONE; -static struct cxl_region *to_cxl_region(struct device *dev); - #define __ACCESS_ATTR_RO(_level, _name) { \ .attr = { .name = __stringify(_name), .mode = 0444 }, \ .show = _name##_access##_level##_show, \ @@ -2430,7 +2428,7 @@ bool is_cxl_region(struct device *dev) } EXPORT_SYMBOL_NS_GPL(is_cxl_region, "CXL"); -static struct cxl_region *to_cxl_region(struct device *dev) +struct cxl_region *to_cxl_region(struct device *dev) { if (dev_WARN_ONCE(dev, dev->type != &cxl_region_type, "not a cxl_region device\n")) @@ -3726,11 +3724,26 @@ static struct cxl_driver cxl_region_driver = { int cxl_region_init(void) { - return cxl_driver_register(&cxl_region_driver); + int rc; + + rc = cxl_driver_register(&cxl_region_driver); + if (rc) + return rc; + + rc = cxl_driver_register(&cxl_devdax_region_driver); + if (rc) + goto err_dax; + + return 0; + +err_dax: + cxl_driver_unregister(&cxl_region_driver); + return rc; } void cxl_region_exit(void) { + cxl_driver_unregister(&cxl_devdax_region_driver); cxl_driver_unregister(&cxl_region_driver); } diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index c06a239c0008..674d5f870c70 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -859,6 +859,7 @@ int cxl_dvsec_rr_decode(struct cxl_dev_state *cxlds, struct cxl_endpoint_dvsec_info *info); bool is_cxl_region(struct device *dev); +struct cxl_region *to_cxl_region(struct device *dev); extern const struct bus_type cxl_bus_type; -- 2.52.0
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:04:40 -0500", "thread_id": "aYDi8bhxFnvMWl11@gourry-fedora-PF4VCD3F.mbox.gz" }
CXL regions may wish not to auto-configure their memory as dax kmem, but the current plumbing defaults all cxl-created dax devices to the kmem driver. This exposes them to hotplug policy, even if the user intends to use the memory as a dax device. Add plumbing to allow CXL drivers to select whether a DAX region should default to kmem (DAXDRV_KMEM_TYPE) or device (DAXDRV_DEVICE_TYPE). Add a 'dax_driver' field to struct cxl_dax_region and update devm_cxl_add_dax_region() to take a dax_driver_type parameter. In drivers/dax/cxl.c, the IORESOURCE_DAX_KMEM flag used by dax driver matching code is now set conditionally based on dax_region->dax_driver. Exports `enum dax_driver_type` to linux/dax.h for use in the cxl driver. All current callers pass DAXDRV_KMEM_TYPE for backward compatibility. Cc: John Groves <john@jagalactic.com> Signed-off-by: Gregory Price <gourry@gourry.net> --- drivers/cxl/core/core.h | 1 + drivers/cxl/core/region.c | 6 ++++-- drivers/cxl/cxl.h | 2 ++ drivers/dax/bus.h | 6 +----- drivers/dax/cxl.c | 6 +++++- include/linux/dax.h | 5 +++++ 6 files changed, 18 insertions(+), 8 deletions(-) diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h index 1fb66132b777..dd987ef2def5 100644 --- a/drivers/cxl/core/core.h +++ b/drivers/cxl/core/core.h @@ -6,6 +6,7 @@ #include <cxl/mailbox.h> #include <linux/rwsem.h> +#include <linux/dax.h> extern const struct device_type cxl_nvdimm_bridge_type; extern const struct device_type cxl_nvdimm_type; diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c index eef5d5fe3f95..e4097c464ed3 100644 --- a/drivers/cxl/core/region.c +++ b/drivers/cxl/core/region.c @@ -3450,7 +3450,8 @@ static void cxlr_dax_unregister(void *_cxlr_dax) device_unregister(&cxlr_dax->dev); } -static int devm_cxl_add_dax_region(struct cxl_region *cxlr) +static int devm_cxl_add_dax_region(struct cxl_region *cxlr, + enum dax_driver_type dax_driver) { struct cxl_dax_region *cxlr_dax; struct device *dev; @@ -3461,6 +3462,7 @@ static int 
devm_cxl_add_dax_region(struct cxl_region *cxlr) return PTR_ERR(cxlr_dax); cxlr_dax->online_type = mhp_get_default_online_type(); + cxlr_dax->dax_driver = dax_driver; dev = &cxlr_dax->dev; rc = dev_set_name(dev, "dax_region%d", cxlr->id); if (rc) @@ -3994,7 +3996,7 @@ static int cxl_region_probe(struct device *dev) p->res->start, p->res->end, cxlr, is_system_ram) > 0) return 0; - return devm_cxl_add_dax_region(cxlr); + return devm_cxl_add_dax_region(cxlr, DAXDRV_KMEM_TYPE); default: dev_dbg(&cxlr->dev, "unsupported region mode: %d\n", cxlr->mode); diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index 07d57d13f4c7..c06a239c0008 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -12,6 +12,7 @@ #include <linux/node.h> #include <linux/io.h> #include <linux/range.h> +#include <linux/dax.h> extern const struct nvdimm_security_ops *cxl_security_ops; @@ -592,6 +593,7 @@ struct cxl_dax_region { struct cxl_region *cxlr; struct range hpa_range; int online_type; /* MMOP_ value for kmem driver */ + enum dax_driver_type dax_driver; }; /** diff --git a/drivers/dax/bus.h b/drivers/dax/bus.h index 4ac92a4edfe7..9144593b4029 100644 --- a/drivers/dax/bus.h +++ b/drivers/dax/bus.h @@ -2,6 +2,7 @@ /* Copyright(c) 2016 - 2018 Intel Corporation. All rights reserved. 
*/ #ifndef __DAX_BUS_H__ #define __DAX_BUS_H__ +#include <linux/dax.h> #include <linux/device.h> #include <linux/range.h> @@ -29,11 +30,6 @@ struct dev_dax_data { struct dev_dax *devm_create_dev_dax(struct dev_dax_data *data); -enum dax_driver_type { - DAXDRV_KMEM_TYPE, - DAXDRV_DEVICE_TYPE, -}; - struct dax_device_driver { struct device_driver drv; struct list_head ids; diff --git a/drivers/dax/cxl.c b/drivers/dax/cxl.c index 856a0cd24f3b..b13ecc2f9806 100644 --- a/drivers/dax/cxl.c +++ b/drivers/dax/cxl.c @@ -11,14 +11,18 @@ static int cxl_dax_region_probe(struct device *dev) struct cxl_dax_region *cxlr_dax = to_cxl_dax_region(dev); int nid = phys_to_target_node(cxlr_dax->hpa_range.start); struct cxl_region *cxlr = cxlr_dax->cxlr; + unsigned long flags = 0; struct dax_region *dax_region; struct dev_dax_data data; + if (cxlr_dax->dax_driver == DAXDRV_KMEM_TYPE) + flags |= IORESOURCE_DAX_KMEM; + if (nid == NUMA_NO_NODE) nid = memory_add_physaddr_to_nid(cxlr_dax->hpa_range.start); dax_region = alloc_dax_region(dev, cxlr->id, &cxlr_dax->hpa_range, nid, - PMD_SIZE, IORESOURCE_DAX_KMEM); + PMD_SIZE, flags); if (!dax_region) return -ENOMEM; diff --git a/include/linux/dax.h b/include/linux/dax.h index bf103f317cac..e62f92d0ace1 100644 --- a/include/linux/dax.h +++ b/include/linux/dax.h @@ -19,6 +19,11 @@ enum dax_access_mode { DAX_RECOVERY_WRITE, }; +enum dax_driver_type { + DAXDRV_KMEM_TYPE, + DAXDRV_DEVICE_TYPE, +}; + struct dax_operations { /* * direct_access: translate a device-relative -- 2.52.0
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:04:37 -0500", "thread_id": "aYDi8bhxFnvMWl11@gourry-fedora-PF4VCD3F.mbox.gz" }
Explain the binding process for sysram and daxdev regions which are explicit about which dax driver to use during region creation. Jonathan Corbet <corbet@lwn.net> Signed-off-by: Gregory Price <gourry@gourry.net> --- .../driver-api/cxl/linux/cxl-driver.rst | 43 +++++++++++++++++++ .../driver-api/cxl/linux/dax-driver.rst | 29 +++++++++++++ 2 files changed, 72 insertions(+) diff --git a/Documentation/driver-api/cxl/linux/cxl-driver.rst b/Documentation/driver-api/cxl/linux/cxl-driver.rst index dd6dd17dc536..1f857345e896 100644 --- a/Documentation/driver-api/cxl/linux/cxl-driver.rst +++ b/Documentation/driver-api/cxl/linux/cxl-driver.rst @@ -445,6 +445,49 @@ for more details. :: dax0.0 devtype modalias uevent dax_region driver subsystem +DAX regions are created when a CXL RAM region is bound to one of the +following drivers: + +* :code:`cxl_devdax_region` - Creates a dax_region for device_dax mode. + The resulting DAX device provides direct userspace access via + :code:`/dev/daxN.Y`. + +* :code:`cxl_dax_kmem_region` - Creates a dax_region for kmem mode via a + sysram_region intermediate device. See `Sysram Region`_ below. + +Sysram Region +~~~~~~~~~~~~~ +A `Sysram Region` is an intermediate device between a CXL `Memory Region` +and a `DAX Region` for kmem mode. It is created when a CXL RAM region is +bound to the :code:`cxl_sysram_region` driver. + +The sysram_region device provides an interposition point where users can +configure memory hotplug policy before the underlying dax_region is created +and memory is hotplugged to the system. + +The device hierarchy for kmem mode is:: + + regionX -> sysram_regionX -> dax_regionX -> daxX.Y + +The sysram_region exposes an :code:`online_type` attribute that controls +how memory will be onlined when the dax_kmem driver binds: + +* :code:`invalid` - Not configured (default). Blocks driver binding. +* :code:`offline` - Memory will not be onlined automatically. +* :code:`online` - Memory will be onlined in ZONE_NORMAL. 
+* :code:`online_movable` - Memory will be onlined in ZONE_MOVABLE. + +Example two-stage binding process:: + + # Bind region to sysram_region driver + echo region0 > /sys/bus/cxl/drivers/cxl_sysram_region/bind + + # Configure memory online type + echo online_movable > /sys/bus/cxl/devices/sysram_region0/online_type + + # Bind sysram_region to dax_kmem_region driver + echo sysram_region0 > /sys/bus/cxl/drivers/cxl_dax_kmem_region/bind + Mailbox Interfaces ------------------ A mailbox command interface for each device is exposed in :: diff --git a/Documentation/driver-api/cxl/linux/dax-driver.rst b/Documentation/driver-api/cxl/linux/dax-driver.rst index 10d953a2167b..2b8e21736292 100644 --- a/Documentation/driver-api/cxl/linux/dax-driver.rst +++ b/Documentation/driver-api/cxl/linux/dax-driver.rst @@ -17,6 +17,35 @@ The DAX subsystem exposes this ability through the `cxl_dax_region` driver. A `dax_region` provides the translation between a CXL `memory_region` and a `DAX Device`. +CXL DAX Region Drivers +====================== +CXL provides multiple drivers for creating DAX regions, each suited for +different use cases: + +cxl_devdax_region +----------------- +The :code:`cxl_devdax_region` driver creates a dax_region configured for +device_dax mode. When a CXL RAM region is bound to this driver, the +resulting DAX device provides direct userspace access via :code:`/dev/daxN.Y`. + +Device hierarchy:: + + regionX -> dax_regionX -> daxX.Y + +This is the simplest path for applications that want to manage CXL memory +directly from userspace. + +cxl_dax_kmem_region +------------------- +For kmem mode, CXL provides a two-stage binding process that allows users +to configure memory hotplug policy before memory is added to the system. + +The :code:`cxl_dax_kmem_region` driver then binds a sysram_region +device and creates a dax_region configured for kmem mode. 
+ +The :code:`online_type` policy will be passed from sysram_region to +the dax kmem driver for use when hotplugging the memory. + DAX Device ========== A `DAX Device` is a file-like interface exposed in :code:`/dev/daxN.Y`. A -- 2.52.0
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:04:42 -0500", "thread_id": "aYDi8bhxFnvMWl11@gourry-fedora-PF4VCD3F.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
Currently, CXL regions that create DAX devices have no mechanism to select the hotplug online policy for kmem regions at region creation time. Users must either rely on a build-time default or manually configure each memory block after hotplug occurs.

Additionally, there is no explicit way to choose between device_dax and dax_kmem modes at region creation time - regions default to kmem.

This series addresses both issues by:

1. Plumbing an online_type parameter through the memory hotplug path, from mm/memory_hotplug through the DAX layer, enabling drivers to specify the desired policy (offline, online, online_movable).

2. Adding infrastructure for explicit dax driver selection (kmem vs device) when creating CXL DAX regions.

3. Introducing new CXL region drivers that provide a two-stage binding process with user-configurable policy between region creation and memory hotplug.

The new drivers are:

- cxl_devdax_region: Creates dax_regions that bind to device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions

The sysram_region device exposes an 'online_type' sysfs attribute allowing users to configure the memory online type before hotplug:

  echo region0 > cxl_sysram_region/bind
  echo online_movable > sysram_region0/online_type
  echo sysram_region0 > cxl_dax_kmem_region/bind

This enables explicit control over both the dax driver mode and the memory hotplug policy for CXL memory regions.

In the future, with DCD regions, this will also provide a policy step which dictates how extents will be surfaced and managed (e.g. if the dc region is bound to the sysram driver, it will surface as system memory, while the devdax driver will surface extents as new devdax). 
Gregory Price (9): mm/memory_hotplug: pass online_type to online_memory_block() via arg mm/memory_hotplug: add __add_memory_driver_managed() with online_type arg dax: plumb online_type from dax_kmem creators to hotplug drivers/cxl,dax: add dax driver mode selection for dax regions cxl/core/region: move pmem region driver logic into pmem_region cxl/core/region: move dax region device logic into dax_region.c cxl/core: add cxl_devdax_region driver for explicit userland region binding cxl/core: Add dax_kmem_region and sysram_region drivers Documentation/driver-api/cxl: add dax and sysram driver documentation Documentation/ABI/testing/sysfs-bus-cxl | 21 ++ .../driver-api/cxl/linux/cxl-driver.rst | 43 +++ .../driver-api/cxl/linux/dax-driver.rst | 29 ++ drivers/cxl/core/Makefile | 3 + drivers/cxl/core/core.h | 11 + drivers/cxl/core/dax_region.c | 179 ++++++++++ drivers/cxl/core/pmem_region.c | 191 +++++++++++ drivers/cxl/core/port.c | 2 + drivers/cxl/core/region.c | 321 ++---------------- drivers/cxl/core/sysram_region.c | 180 ++++++++++ drivers/cxl/cxl.h | 29 ++ drivers/dax/bus.c | 3 + drivers/dax/bus.h | 7 +- drivers/dax/cxl.c | 7 +- drivers/dax/dax-private.h | 2 + drivers/dax/hmem/hmem.c | 2 + drivers/dax/kmem.c | 13 +- drivers/dax/pmem.c | 2 + include/linux/dax.h | 5 + include/linux/memory_hotplug.h | 3 + mm/memory_hotplug.c | 95 ++++-- 21 files changed, 826 insertions(+), 322 deletions(-) create mode 100644 drivers/cxl/core/dax_region.c create mode 100644 drivers/cxl/core/pmem_region.c create mode 100644 drivers/cxl/core/sysram_region.c -- 2.52.0
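For reference, the abbreviated echo sequence above expands to the full sysfs paths used in the series' documentation patch. The sketch below only prints the two proposed flows (single-stage devdax, two-stage kmem) rather than executing them, since the sysfs nodes only exist with this series applied and a committed CXL RAM region; region0/sysram_region0 are example device names.

```shell
# Dry-run sketch: print, don't execute, the proposed bind sequences.
devdax_steps() {
    cat <<'EOF'
echo region0 > /sys/bus/cxl/drivers/cxl_devdax_region/bind
EOF
}

kmem_steps() {
    cat <<'EOF'
echo region0 > /sys/bus/cxl/drivers/cxl_sysram_region/bind
echo online_movable > /sys/bus/cxl/devices/sysram_region0/online_type
echo sysram_region0 > /sys/bus/cxl/drivers/cxl_dax_kmem_region/bind
EOF
}

devdax_steps
kmem_steps
```

Note the middle write in the kmem flow is mandatory: sysram_region devices start with online_type=invalid, which blocks the cxl_dax_kmem_region bind.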
In the current kmem driver binding process, the only way for users to define hotplug policy is via a build-time option, or by not onlining memory by default and setting each individual memory block online after hotplug occurs.

We can solve this with a configuration step between region-probe and dax-probe.

Add the infrastructure for a two-stage driver binding for kmem-mode dax regions. The cxl_dax_kmem_region driver probes cxl_sysram_region devices and creates cxl_dax_region with dax_driver=kmem. This creates an interposition step where users can configure policy.

Device hierarchy:

  region0 -> sysram_region0 -> dax_region0 -> dax0.0

The sysram_region device exposes a sysfs 'online_type' attribute that allows users to configure the memory online type before the underlying dax_region is created and memory is hotplugged.

sysram_region0/online_type:
  invalid:        not configured, blocks probe
  offline:        memory will not be onlined automatically
  online:         memory will be onlined in ZONE_NORMAL
  online_movable: memory will be onlined in ZONE_MOVABLE

The device initializes with online_type=invalid which prevents the cxl_dax_kmem_region driver from binding until the user explicitly configures a valid online_type. 
This enables a two-step binding process: echo region0 > cxl_sysram_region/bind echo online_movable > sysram_region0/online_type echo sysram_region0 > cxl_dax_kmem_region/bind Signed-off-by: Gregory Price <gourry@gourry.net> --- Documentation/ABI/testing/sysfs-bus-cxl | 21 +++ drivers/cxl/core/Makefile | 1 + drivers/cxl/core/core.h | 6 + drivers/cxl/core/dax_region.c | 50 +++++++ drivers/cxl/core/port.c | 2 + drivers/cxl/core/region.c | 14 ++ drivers/cxl/core/sysram_region.c | 180 ++++++++++++++++++++++++ drivers/cxl/cxl.h | 25 ++++ 8 files changed, 299 insertions(+) create mode 100644 drivers/cxl/core/sysram_region.c diff --git a/Documentation/ABI/testing/sysfs-bus-cxl b/Documentation/ABI/testing/sysfs-bus-cxl index c80a1b5a03db..a051cb86bdfc 100644 --- a/Documentation/ABI/testing/sysfs-bus-cxl +++ b/Documentation/ABI/testing/sysfs-bus-cxl @@ -624,3 +624,24 @@ Description: The count is persistent across power loss and wraps back to 0 upon overflow. If this file is not present, the device does not have the necessary support for dirty tracking. + + +What: /sys/bus/cxl/devices/sysram_regionZ/online_type +Date: January, 2026 +KernelVersion: v7.1 +Contact: linux-cxl@vger.kernel.org +Description: + (RW) This attribute allows users to configure the memory online + type before the underlying dax_region engages in hotplug. + + Valid values: + 'invalid': Not configured (default). Blocks probe. + 'offline': Memory will not be onlined automatically. + 'online' : Memory will be onlined in ZONE_NORMAL. + 'online_movable': Memory will be onlined in ZONE_MOVABLE. + + The device initializes with online_type='invalid' which prevents + the cxl_dax_kmem_region driver from binding until the user + explicitly configures a valid online_type. This enables a + two-step binding process that gives users control over memory + hotplug policy before memory is added to the system. 
diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile index 36f284d7c500..faf662c7d88b 100644 --- a/drivers/cxl/core/Makefile +++ b/drivers/cxl/core/Makefile @@ -18,6 +18,7 @@ cxl_core-y += ras.o cxl_core-$(CONFIG_TRACING) += trace.o cxl_core-$(CONFIG_CXL_REGION) += region.o cxl_core-$(CONFIG_CXL_REGION) += dax_region.o +cxl_core-$(CONFIG_CXL_REGION) += sysram_region.o cxl_core-$(CONFIG_CXL_REGION) += pmem_region.o cxl_core-$(CONFIG_CXL_MCE) += mce.o cxl_core-$(CONFIG_CXL_FEATURES) += features.o diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h index ea4df8abc2ad..04b32015e9b1 100644 --- a/drivers/cxl/core/core.h +++ b/drivers/cxl/core/core.h @@ -26,6 +26,7 @@ extern struct device_attribute dev_attr_delete_region; extern struct device_attribute dev_attr_region; extern const struct device_type cxl_pmem_region_type; extern const struct device_type cxl_dax_region_type; +extern const struct device_type cxl_sysram_region_type; extern const struct device_type cxl_region_type; int cxl_decoder_detach(struct cxl_region *cxlr, @@ -37,6 +38,7 @@ int cxl_decoder_detach(struct cxl_region *cxlr, #define SET_CXL_REGION_ATTR(x) (&dev_attr_##x.attr), #define CXL_PMEM_REGION_TYPE(x) (&cxl_pmem_region_type) #define CXL_DAX_REGION_TYPE(x) (&cxl_dax_region_type) +#define CXL_SYSRAM_REGION_TYPE(x) (&cxl_sysram_region_type) int cxl_region_init(void); void cxl_region_exit(void); int cxl_get_poison_by_endpoint(struct cxl_port *port); @@ -44,9 +46,12 @@ struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa); u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, const struct cxl_memdev *cxlmd, u64 dpa); int devm_cxl_add_dax_region(struct cxl_region *cxlr, enum dax_driver_type); +int devm_cxl_add_sysram_region(struct cxl_region *cxlr); int devm_cxl_add_pmem_region(struct cxl_region *cxlr); extern struct cxl_driver cxl_devdax_region_driver; +extern struct cxl_driver cxl_dax_kmem_region_driver; +extern struct cxl_driver cxl_sysram_region_driver; #else 
static inline u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, @@ -81,6 +86,7 @@ static inline void cxl_region_exit(void) #define SET_CXL_REGION_ATTR(x) #define CXL_PMEM_REGION_TYPE(x) NULL #define CXL_DAX_REGION_TYPE(x) NULL +#define CXL_SYSRAM_REGION_TYPE(x) NULL #endif struct cxl_send_command; diff --git a/drivers/cxl/core/dax_region.c b/drivers/cxl/core/dax_region.c index 391d51e5ec37..a379f5b85e3d 100644 --- a/drivers/cxl/core/dax_region.c +++ b/drivers/cxl/core/dax_region.c @@ -127,3 +127,53 @@ struct cxl_driver cxl_devdax_region_driver = { .probe = cxl_devdax_region_driver_probe, .id = CXL_DEVICE_REGION, }; + +static int cxl_dax_kmem_region_driver_probe(struct device *dev) +{ + struct cxl_sysram_region *cxlr_sysram = to_cxl_sysram_region(dev); + struct cxl_dax_region *cxlr_dax; + struct cxl_region *cxlr; + int rc; + + if (!cxlr_sysram) + return -ENODEV; + + /* Require explicit online_type configuration before binding */ + if (cxlr_sysram->online_type == -1) + return -ENODEV; + + cxlr = cxlr_sysram->cxlr; + + cxlr_dax = cxl_dax_region_alloc(cxlr); + if (IS_ERR(cxlr_dax)) + return PTR_ERR(cxlr_dax); + + /* Inherit online_type from parent sysram_region */ + cxlr_dax->online_type = cxlr_sysram->online_type; + cxlr_dax->dax_driver = DAXDRV_KMEM_TYPE; + + /* Parent is the sysram_region device */ + cxlr_dax->dev.parent = dev; + + rc = dev_set_name(&cxlr_dax->dev, "dax_region%d", cxlr->id); + if (rc) + goto err; + + rc = device_add(&cxlr_dax->dev); + if (rc) + goto err; + + dev_dbg(dev, "%s: register %s\n", dev_name(dev), + dev_name(&cxlr_dax->dev)); + + return devm_add_action_or_reset(dev, cxlr_dax_unregister, cxlr_dax); +err: + put_device(&cxlr_dax->dev); + return rc; +} + +struct cxl_driver cxl_dax_kmem_region_driver = { + .name = "cxl_dax_kmem_region", + .probe = cxl_dax_kmem_region_driver_probe, + .id = CXL_DEVICE_SYSRAM_REGION, +}; diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c index 3310dbfae9d6..dc7262a5efd6 100644 --- a/drivers/cxl/core/port.c 
+++ b/drivers/cxl/core/port.c @@ -66,6 +66,8 @@ static int cxl_device_id(const struct device *dev) return CXL_DEVICE_PMEM_REGION; if (dev->type == CXL_DAX_REGION_TYPE()) return CXL_DEVICE_DAX_REGION; + if (dev->type == CXL_SYSRAM_REGION_TYPE()) + return CXL_DEVICE_SYSRAM_REGION; if (is_cxl_port(dev)) { if (is_cxl_root(to_cxl_port(dev))) return CXL_DEVICE_ROOT; diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c index 6200ca1cc2dd..8bef91dc726c 100644 --- a/drivers/cxl/core/region.c +++ b/drivers/cxl/core/region.c @@ -3734,8 +3734,20 @@ int cxl_region_init(void) if (rc) goto err_dax; + rc = cxl_driver_register(&cxl_sysram_region_driver); + if (rc) + goto err_sysram; + + rc = cxl_driver_register(&cxl_dax_kmem_region_driver); + if (rc) + goto err_dax_kmem; + return 0; +err_dax_kmem: + cxl_driver_unregister(&cxl_sysram_region_driver); +err_sysram: + cxl_driver_unregister(&cxl_devdax_region_driver); err_dax: cxl_driver_unregister(&cxl_region_driver); return rc; @@ -3743,6 +3755,8 @@ int cxl_region_init(void) void cxl_region_exit(void) { + cxl_driver_unregister(&cxl_dax_kmem_region_driver); + cxl_driver_unregister(&cxl_sysram_region_driver); cxl_driver_unregister(&cxl_devdax_region_driver); cxl_driver_unregister(&cxl_region_driver); } diff --git a/drivers/cxl/core/sysram_region.c b/drivers/cxl/core/sysram_region.c new file mode 100644 index 000000000000..5665db238d0f --- /dev/null +++ b/drivers/cxl/core/sysram_region.c @@ -0,0 +1,180 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright(c) 2026 Meta Platforms, Inc. All rights reserved. */ +/* + * CXL Sysram Region - Intermediate device for kmem hotplug configuration + * + * This provides an intermediate device between cxl_region and cxl_dax_region + * that allows users to configure memory hotplug parameters (like online_type) + * before the underlying dax_region is created and memory is hotplugged. 
+ */ + +#include <linux/memory_hotplug.h> +#include <linux/device.h> +#include <linux/slab.h> +#include <cxlmem.h> +#include <cxl.h> +#include "core.h" + +static void cxl_sysram_region_release(struct device *dev) +{ + struct cxl_sysram_region *cxlr_sysram = to_cxl_sysram_region(dev); + + kfree(cxlr_sysram); +} + +static ssize_t online_type_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct cxl_sysram_region *cxlr_sysram = to_cxl_sysram_region(dev); + + switch (cxlr_sysram->online_type) { + case MMOP_OFFLINE: + return sysfs_emit(buf, "offline\n"); + case MMOP_ONLINE: + return sysfs_emit(buf, "online\n"); + case MMOP_ONLINE_MOVABLE: + return sysfs_emit(buf, "online_movable\n"); + default: + return sysfs_emit(buf, "invalid\n"); + } +} + +static ssize_t online_type_store(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t len) +{ + struct cxl_sysram_region *cxlr_sysram = to_cxl_sysram_region(dev); + + if (sysfs_streq(buf, "offline")) + cxlr_sysram->online_type = MMOP_OFFLINE; + else if (sysfs_streq(buf, "online")) + cxlr_sysram->online_type = MMOP_ONLINE; + else if (sysfs_streq(buf, "online_movable")) + cxlr_sysram->online_type = MMOP_ONLINE_MOVABLE; + else + return -EINVAL; + + return len; +} + +static DEVICE_ATTR_RW(online_type); + +static struct attribute *cxl_sysram_region_attrs[] = { + &dev_attr_online_type.attr, + NULL, +}; + +static const struct attribute_group cxl_sysram_region_attribute_group = { + .attrs = cxl_sysram_region_attrs, +}; + +static const struct attribute_group *cxl_sysram_region_attribute_groups[] = { + &cxl_base_attribute_group, + &cxl_sysram_region_attribute_group, + NULL, +}; + +const struct device_type cxl_sysram_region_type = { + .name = "cxl_sysram_region", + .release = cxl_sysram_region_release, + .groups = cxl_sysram_region_attribute_groups, +}; + +static bool is_cxl_sysram_region(struct device *dev) +{ + return dev->type == &cxl_sysram_region_type; +} + +struct cxl_sysram_region 
*to_cxl_sysram_region(struct device *dev) +{ + if (dev_WARN_ONCE(dev, !is_cxl_sysram_region(dev), + "not a cxl_sysram_region device\n")) + return NULL; + return container_of(dev, struct cxl_sysram_region, dev); +} +EXPORT_SYMBOL_NS_GPL(to_cxl_sysram_region, "CXL"); + +static struct lock_class_key cxl_sysram_region_key; + +static struct cxl_sysram_region *cxl_sysram_region_alloc(struct cxl_region *cxlr) +{ + struct cxl_region_params *p = &cxlr->params; + struct cxl_sysram_region *cxlr_sysram; + struct device *dev; + + guard(rwsem_read)(&cxl_rwsem.region); + if (p->state != CXL_CONFIG_COMMIT) + return ERR_PTR(-ENXIO); + + cxlr_sysram = kzalloc(sizeof(*cxlr_sysram), GFP_KERNEL); + if (!cxlr_sysram) + return ERR_PTR(-ENOMEM); + + cxlr_sysram->hpa_range.start = p->res->start; + cxlr_sysram->hpa_range.end = p->res->end; + cxlr_sysram->online_type = -1; /* Require explicit configuration */ + + dev = &cxlr_sysram->dev; + cxlr_sysram->cxlr = cxlr; + device_initialize(dev); + lockdep_set_class(&dev->mutex, &cxl_sysram_region_key); + device_set_pm_not_required(dev); + dev->parent = &cxlr->dev; + dev->bus = &cxl_bus_type; + dev->type = &cxl_sysram_region_type; + + return cxlr_sysram; +} + +static void cxlr_sysram_unregister(void *_cxlr_sysram) +{ + struct cxl_sysram_region *cxlr_sysram = _cxlr_sysram; + + device_unregister(&cxlr_sysram->dev); +} + +int devm_cxl_add_sysram_region(struct cxl_region *cxlr) +{ + struct cxl_sysram_region *cxlr_sysram; + struct device *dev; + int rc; + + cxlr_sysram = cxl_sysram_region_alloc(cxlr); + if (IS_ERR(cxlr_sysram)) + return PTR_ERR(cxlr_sysram); + + dev = &cxlr_sysram->dev; + rc = dev_set_name(dev, "sysram_region%d", cxlr->id); + if (rc) + goto err; + + rc = device_add(dev); + if (rc) + goto err; + + dev_dbg(&cxlr->dev, "%s: register %s\n", dev_name(dev->parent), + dev_name(dev)); + + return devm_add_action_or_reset(&cxlr->dev, cxlr_sysram_unregister, + cxlr_sysram); +err: + put_device(dev); + return rc; +} + +static int 
cxl_sysram_region_driver_probe(struct device *dev) +{ + struct cxl_region *cxlr = to_cxl_region(dev); + + /* Only handle RAM regions */ + if (cxlr->mode != CXL_PARTMODE_RAM) + return -ENODEV; + + return devm_cxl_add_sysram_region(cxlr); +} + +struct cxl_driver cxl_sysram_region_driver = { + .name = "cxl_sysram_region", + .probe = cxl_sysram_region_driver_probe, + .id = CXL_DEVICE_REGION, +}; diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index 674d5f870c70..1544c27e9c89 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -596,6 +596,25 @@ struct cxl_dax_region { enum dax_driver_type dax_driver; }; +/** + * struct cxl_sysram_region - CXL RAM region for system memory hotplug + * @dev: device for this sysram_region + * @cxlr: parent cxl_region + * @hpa_range: Host physical address range for the region + * @online_type: Memory online type (MMOP_* 0-3, or -1 if not configured) + * + * Intermediate device that allows configuration of memory hotplug + * parameters before the underlying dax_region is created. The device + * starts with online_type=-1 which prevents the cxl_dax_kmem_region + * driver from binding until the user explicitly sets online_type. 
+ */ +struct cxl_sysram_region { + struct device dev; + struct cxl_region *cxlr; + struct range hpa_range; + int online_type; +}; + /** * struct cxl_port - logical collection of upstream port devices and * downstream port devices to construct a CXL memory @@ -890,6 +909,7 @@ void cxl_driver_unregister(struct cxl_driver *cxl_drv); #define CXL_DEVICE_PMEM_REGION 7 #define CXL_DEVICE_DAX_REGION 8 #define CXL_DEVICE_PMU 9 +#define CXL_DEVICE_SYSRAM_REGION 10 #define MODULE_ALIAS_CXL(type) MODULE_ALIAS("cxl:t" __stringify(type) "*") #define CXL_MODALIAS_FMT "cxl:t%d" @@ -907,6 +927,7 @@ bool is_cxl_pmem_region(struct device *dev); struct cxl_pmem_region *to_cxl_pmem_region(struct device *dev); int cxl_add_to_region(struct cxl_endpoint_decoder *cxled); struct cxl_dax_region *to_cxl_dax_region(struct device *dev); +struct cxl_sysram_region *to_cxl_sysram_region(struct device *dev); u64 cxl_port_get_spa_cache_alias(struct cxl_port *endpoint, u64 spa); #else static inline bool is_cxl_pmem_region(struct device *dev) @@ -925,6 +946,10 @@ static inline struct cxl_dax_region *to_cxl_dax_region(struct device *dev) { return NULL; } +static inline struct cxl_sysram_region *to_cxl_sysram_region(struct device *dev) +{ + return NULL; +} static inline u64 cxl_port_get_spa_cache_alias(struct cxl_port *endpoint, u64 spa) { -- 2.52.0
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:04:41 -0500", "thread_id": "aYDi8bhxFnvMWl11@gourry-fedora-PF4VCD3F.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
Annoyingly, my email client has been truncating my titles: cxl: explicit DAX driver selection and hotplug policy for CXL regions ~Gregory
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:17:55 -0500", "thread_id": "aYDi8bhxFnvMWl11@gourry-fedora-PF4VCD3F.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
On Thu, Jan 29, 2026 at 04:04:33PM -0500, Gregory Price wrote: Looks like a build regression on configs without the hotplug MMOP_ defines, and mhp_get_default_online_type() undefined. Will let this version sit for a bit before spinning a v2. ~Gregory
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Fri, 30 Jan 2026 12:34:33 -0500", "thread_id": "aYDi8bhxFnvMWl11@gourry-fedora-PF4VCD3F.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
On 1/29/2026 3:04 PM, Gregory Price wrote:

This technically comes up in the devdax_region driver patch first, but I noticed it here so this is where I'm putting it:

I like the idea here, but the implementation is all off. Firstly, devm_cxl_add_sysram_region() is never called outside of sysram_region_driver::probe(), so I'm not sure how they ever get added to the system (same with devdax regions). Second, there's this weird pattern of sub-region (sysram, devdax, etc.) devices being added inside of the sub-region driver probe. I would expect the devices are added then the probe function is called.

What I think should be going on here (and correct me if I'm wrong) is:

1. a cxl_region device is added to the system
2. cxl_region::probe() is called on said device (the one in cxl/core/region.c)
3. Said probe function figures out the device is a dax_region or whatever else and creates that type of region device (i.e. cxl_region::probe() -> device_add(&cxl_sysram_device))
4. if the device's dax driver type is DAXDRV_DEVICE_TYPE it gets sent to the daxdev_region driver
5a. if the device's dax driver type is DAXDRV_KMEM_TYPE it gets sent to the sysram_region driver which holds it until the online_type is set
5b. Once the online_type is set, the device is forwarded to the dax_kmem_region driver? Not sure on this part

What seems to be happening is that the cxl_region is added, all of these region drivers try to bind to it since they all use the same device id (CXL_DEVICE_REGION) and the correct one is figured out by magic? I'm somewhat confused at this point :/.

This should be removed from the valid values section since it's not a valid value to write to the attribute. The mention of the default in the paragraph below should be enough.

You can use cleanup.h here to remove the goto's (I think). 
Following should work:

DEFINE_FREE(cxlr_dax_region_put, struct cxl_dax_region *,
	    if (!IS_ERR_OR_NULL(_T)) put_device(&_T->dev))

static int cxl_dax_kmem_region_driver_probe(struct device *dev)
{
	...
	struct cxl_dax_region *cxlr_dax __free(cxlr_dax_region_put) =
		cxl_dax_region_alloc(cxlr);

	if (IS_ERR(cxlr_dax))
		return PTR_ERR(cxlr_dax);
	...
	rc = dev_set_name(&cxlr_dax->dev, "dax_region%d", cxlr->id);
	if (rc)
		return rc;

	rc = device_add(&cxlr_dax->dev);
	if (rc)
		return rc;

	dev_dbg(dev, "%s: register %s\n", dev_name(dev),
		dev_name(&cxlr_dax->dev));

	return devm_add_action_or_reset(dev, cxlr_dax_unregister,
					no_free_ptr(cxlr_dax));
}

(Note: DEFINE_FREE has no leading '#', and the free expression only sees the
macro's _T parameter, so it must be put_device(&_T->dev), not cxlr_dax.)

Same thing as above

Thanks,
Ben
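For anyone unfamiliar with the cleanup.h pattern being suggested above, below is a rough userspace model of how DEFINE_FREE / __free / no_free_ptr compose. The real kernel macros in include/linux/cleanup.h are more elaborate; this sketch only models the scoped-cleanup and ownership-transfer behavior, and the helper/counter names (free_buf, frees, kept, demo) are invented for illustration.

```c
#include <assert.h>
#include <stdlib.h>

/* Userspace approximation of include/linux/cleanup.h primitives,
 * built on the compiler's __attribute__((cleanup)) as the kernel's are. */
#define DEFINE_FREE(_name, _type, _free) \
	static void __free_##_name(void *p) { _type _T = *(_type *)p; _free; }
#define __free(_name) __attribute__((cleanup(__free_##_name)))
#define no_free_ptr(p) \
	({ __typeof__(p) __val = (p); (p) = NULL; __val; })

static int frees;	/* counts automatic cleanups (test aid only) */
static char *kept;	/* simulates handing ownership to a longer-lived owner */

DEFINE_FREE(free_buf, char *, if (_T) { frees++; free(_T); })

static int demo(int fail)
{
	char *buf __free(free_buf) = malloc(16);

	if (!buf)
		return -1;
	if (fail)
		return -1;	/* early return: cleanup frees buf for us */

	/* Success path: transfer ownership (analogous to handing the
	 * pointer to devm_add_action_or_reset()); the cleanup handler
	 * then sees NULL and does nothing. */
	kept = no_free_ptr(buf);
	return 0;
}
```

This is what lets Ben's probe sketch drop all the goto-unwind labels: every early `return rc;` runs the registered free expression automatically, and only the final success path neutralizes it with no_free_ptr().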
{ "author": "\"Cheatham, Benjamin\" <benjamin.cheatham@amd.com>", "date": "Fri, 30 Jan 2026 15:27:12 -0600", "thread_id": "aYDi8bhxFnvMWl11@gourry-fedora-PF4VCD3F.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
Currently, CXL regions that create DAX devices have no mechanism to select the hotplug online policy for kmem regions at region creation time. Users must either rely on a build-time default or manually configure each memory block after hotplug occurs. Additionally, there is no explicit way to choose between device_dax and dax_kmem modes at region creation time - regions default to kmem.

This series addresses both issues by:

1. Plumbing an online_type parameter through the memory hotplug path, from mm/memory_hotplug through the DAX layer, enabling drivers to specify the desired policy (offline, online, online_movable).
2. Adding infrastructure for explicit dax driver selection (kmem vs device) when creating CXL DAX regions.
3. Introducing new CXL region drivers that provide a two-stage binding process with user-configurable policy between region creation and memory hotplug.

The new drivers are:
- cxl_devdax_region: Creates dax_regions that bind to the device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions

The sysram_region device exposes an 'online_type' sysfs attribute allowing users to configure the memory online type before hotplug:

  echo region0 > cxl_sysram_region/bind
  echo online_movable > sysram_region0/online_type
  echo sysram_region0 > cxl_dax_kmem_region/bind

This enables explicit control over both the dax driver mode and the memory hotplug policy for CXL memory regions.

In the future, with DCD regions, this will also provide a policy step which dictates how extents will be surfaced and managed (e.g. if the dc region is bound to the sysram driver, it will surface as system memory, while the devdax driver will surface extents as new devdax).
On Fri, Jan 30, 2026 at 03:27:12PM -0600, Cheatham, Benjamin wrote:

I originally tried doing this with region0/region_driver, but that design pattern is also confusing - and it creates differently bad patterns.

  echo region0 > decoder0.0/create_ram_region  -> creates region0

  # Current pattern
  echo region > driver/region/probe        /* auto-region behavior */

  # region_driver attribute pattern
  echo "sysram" > region0/region_driver
  echo region0 > driver/region/probe       /* uses sysram region driver */

https://lore.kernel.org/linux-cxl/20260113202138.3021093-1-gourry@gourry.net/

Ira pointed out that this design makes the "implicit" design of the driver worse. The user doesn't actually know what driver is being used under the hood - it just knows something is being used.

This at least makes it explicit which driver is being used - and splits the use-case logic up into discrete drivers (dax users don't have to worry about sysram users breaking their stuff).

If it makes more sense, you could swap the ordering of the names:

  echo region0 > region/bind
  echo region0 > region_sysram/bind
  echo region0 > region_daxdev/bind
  echo region0 > region_dax_kmem/bind
  echo region0 > region_pony/bind

---

The underlying issue is that region::probe() is trying to be a god-function for every possible use case, and hiding the use case behind an attribute vs a driver is not good. (Also, the default behavior for region::probe() in an otherwise unconfigured region is required for backwards compatibility.)

For auto-regions: region_probe() eats it and you get the default behavior.
For non-auto regions: create_x_region generates an un-configured region and fails to probe until the user commits it and probes it.

auto-regions are evil and should be discouraged.

~Gregory
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Fri, 30 Jan 2026 17:12:50 -0500", "thread_id": "aYDi8bhxFnvMWl11@gourry-fedora-PF4VCD3F.mbox.gz" }
On 1/30/2026 4:12 PM, Gregory Price wrote:

Ok, that makes sense. I think I just got lost in the sauce while looking at this last week and this explanation helped a lot.

I think this was the source of my misunderstanding. I was trying to understand how it works for auto regions when it's never meant to apply to them.

Sorry if this is a stupid question, but what stops auto regions from binding to the sysram/dax region drivers? They all bind to region devices, so I assume there's something keeping them from binding before the core region driver gets a chance.

Thanks,
Ben
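As background for Ben's question, here is a toy userspace model of the general driver-model mechanism that lets several drivers share one device id without all of them claiming the device: the bus matches on id, but a probe() that returns -ENODEV merely declines, and the core moves on to the next matching driver. All names here (toy_device, sysram_probe, want_sysram, etc.) are invented; this illustrates the generic mechanism, not necessarily how this particular series resolves the binding.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define CXL_DEVICE_REGION 1
#define ENODEV 19

struct toy_device {
	int id;
	const char *bound;	/* name of the driver that claimed us */
	int want_sysram;	/* stand-in for per-device configuration */
};

struct toy_driver {
	const char *name;
	int id;
	int (*probe)(struct toy_device *);
};

static int sysram_probe(struct toy_device *d)
{
	return d->want_sysram ? 0 : -ENODEV;	/* decline if not configured */
}

static int region_probe(struct toy_device *d)
{
	return 0;				/* default/fallback driver */
}

static struct toy_driver drivers[] = {
	{ "cxl_sysram_region", CXL_DEVICE_REGION, sysram_probe },
	{ "cxl_region",        CXL_DEVICE_REGION, region_probe },
};

/* Core loop: the first matching driver whose probe succeeds wins;
 * a -ENODEV from probe just means "try the next one". */
static void bus_probe(struct toy_device *d)
{
	for (size_t i = 0; i < sizeof(drivers) / sizeof(drivers[0]); i++) {
		if (drivers[i].id != d->id)
			continue;
		if (drivers[i].probe(d) == 0) {
			d->bound = drivers[i].name;
			return;
		}
	}
}
```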
{ "author": "\"Cheatham, Benjamin\" <benjamin.cheatham@amd.com>", "date": "Mon, 2 Feb 2026 11:02:37 -0600", "thread_id": "aYDi8bhxFnvMWl11@gourry-fedora-PF4VCD3F.mbox.gz" }
On Thu, 29 Jan 2026 16:04:34 -0500 Gregory Price <gourry@gourry.net> wrote:

Trivial comment inline. I don't really care either way.

Pushing the policy up to the caller and ensuring it's explicitly constant for all the memory blocks (as opposed to relying on locks) seems sensible to me even without anything else.

Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>

Maybe move the local variable outside the loop to avoid the double call.
{ "author": "Jonathan Cameron <jonathan.cameron@huawei.com>", "date": "Mon, 2 Feb 2026 17:10:29 +0000", "thread_id": "aYDi8bhxFnvMWl11@gourry-fedora-PF4VCD3F.mbox.gz" }
On Thu, 29 Jan 2026 16:04:35 -0500 Gregory Price <gourry@gourry.net> wrote:

Hi Gregory,

I think maybe I'd have left the export for the first user outside of memory_hotplug.c. Not particularly important however.

Maybe talk about why a caller of __add_memory_driver_managed() might want the default? Feels like that's for the people who don't... Or is this all a dance to avoid an

  if (special mode)
      __add_memory_driver_managed();
  else
      add_memory_driver_managed();

?

Other comments are mostly about using a named enum. I'm not sure if there is some existing reason why that doesn't work? -Errno pushed through this variable or anything like that? Given online_type values are from an enum anyway, maybe we can name that enum and use it explicitly?

Ah. Fair enough, ignore comment in previous patch. I should have read on...

It's a little odd to add nice kernel-doc formatted documentation when the non __ variant has free form docs. Maybe tidy that up first if we want to go kernel-doc in this file? (I'm in favor, but no idea on general feelings...)

Given that's currently the full set, seems like enum wins out here over an int.

This is where using an enum would help the compiler know what is going on and maybe warn if anyone writes something that isn't defined.
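To make Jonathan's named-enum suggestion concrete, a minimal sketch: the values mirror the MMOP_* constants of the currently-anonymous enum in include/linux/memory_hotplug.h, while the tag name mhp_online_type and the parsing helper below are illustrative inventions, loosely modeled on what a sysfs 'online_type' store would do.

```c
#include <assert.h>
#include <string.h>

/* Naming the enum lets function signatures say what they accept, and
 * lets the compiler warn about unhandled values in switch statements.
 * Values mirror include/linux/memory_hotplug.h's MMOP_* constants. */
enum mhp_online_type {
	MMOP_OFFLINE = 0,
	MMOP_ONLINE,
	MMOP_ONLINE_KERNEL,
	MMOP_ONLINE_MOVABLE,
};

/* Illustrative parser (name hypothetical): maps the sysfs strings to
 * the typed enum; returns -1 for unrecognized input. */
static int mhp_online_type_parse(const char *s, enum mhp_online_type *out)
{
	static const struct {
		const char *name;
		enum mhp_online_type t;
	} map[] = {
		{ "offline",        MMOP_OFFLINE },
		{ "online",         MMOP_ONLINE },
		{ "online_kernel",  MMOP_ONLINE_KERNEL },
		{ "online_movable", MMOP_ONLINE_MOVABLE },
	};

	for (size_t i = 0; i < sizeof(map) / sizeof(map[0]); i++) {
		if (!strcmp(s, map[i].name)) {
			*out = map[i].t;
			return 0;
		}
	}
	return -1;
}
```

With a named enum, online_memory_block() and __add_memory_driver_managed() could take `enum mhp_online_type online_type` instead of a bare int, making Jonathan's "-Errno pushed through this variable" concern visibly impossible at the type level.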
{ "author": "Jonathan Cameron <jonathan.cameron@huawei.com>", "date": "Mon, 2 Feb 2026 17:25:24 +0000", "thread_id": "aYDi8bhxFnvMWl11@gourry-fedora-PF4VCD3F.mbox.gz" }
On Mon, Feb 02, 2026 at 11:02:37AM -0600, Cheatham, Benjamin wrote:

Auto regions explicitly use the dax_kmem path (all existing code, unchanged) - which auto-plugs into dax/hotplug.

I do get what you're saying that everything binds on a region type. I will look a little closer at this and see if there's something more reasonable we can do.

I think I can update `region/bind` to use the sysram driver with online_type=mhp_default_online_type, so you'd end up with effectively the auto-region logic:

cxlcli create-region -m ram ... existing argument set
------
echo region0 > create_ram_region    /* program decoders */
echo region0 > region/bind
/*
 * region_bind():
 *   1) alloc sysram_region object
 *   2) sysram_regionN->online_type = mhp_default_online_type()
 *   3) add device to bus
 *   4) device auto-probes all the way down to dax
 *   5) dax auto-onlines with system default setting
 */
------

and non-auto-region logic (approximation):

cxlcli create-region -m ram --type sysram --online-type=movable
-----
echo region0 > create_ram_region    /* program decoders */
echo region0 > sysram/bind
echo online_movable > sysram_region0/online_type
echo sysram_region0 > dax_kmem/bind
-----

I want to retain the dax_kmem driver because there may be multiple users other than sysram. For example, a compressed memory region wants to utilize dax_kmem, but has its own complex policy (via N_MEMORY_PRIVATE) so it doesn't want to abstract through sysram_region, but it does want to abstract through dax_kmem.

weeeee "software defined memory" weeeee

~Gregory
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Mon, 2 Feb 2026 12:41:31 -0500", "thread_id": "aYDi8bhxFnvMWl11@gourry-fedora-PF4VCD3F.mbox.gz" }
On Mon, Feb 02, 2026 at 05:10:29PM +0000, Jonathan Cameron wrote: ack. will update for next version w/ Ben's notes and the build fix. Thanks! ~Gregory
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Mon, 2 Feb 2026 12:46:25 -0500", "thread_id": "aYDi8bhxFnvMWl11@gourry-fedora-PF4VCD3F.mbox.gz" }
On Thu, 29 Jan 2026 16:04:37 -0500 Gregory Price <gourry@gourry.net> wrote:

LGTM

Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
{ "author": "Jonathan Cameron <jonathan.cameron@huawei.com>", "date": "Mon, 2 Feb 2026 17:54:17 +0000", "thread_id": "aYDi8bhxFnvMWl11@gourry-fedora-PF4VCD3F.mbox.gz" }
On Thu, 29 Jan 2026 16:04:38 -0500 Gregory Price <gourry@gourry.net> wrote: Needs to answer the question: Why? Minor stuff inline. Maybe sneak in dropping that trailing comma whilst you are moving it. ... Bonus line...
{ "author": "Jonathan Cameron <jonathan.cameron@huawei.com>", "date": "Mon, 2 Feb 2026 17:56:40 +0000", "thread_id": "aYDi8bhxFnvMWl11@gourry-fedora-PF4VCD3F.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
On Thu, 29 Jan 2026 16:04:39 -0500 Gregory Price <gourry@gourry.net> wrote: Likewise. Why?
{ "author": "Jonathan Cameron <jonathan.cameron@huawei.com>", "date": "Mon, 2 Feb 2026 17:57:11 +0000", "thread_id": "aYDi8bhxFnvMWl11@gourry-fedora-PF4VCD3F.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
On Mon, Feb 02, 2026 at 05:25:24PM +0000, Jonathan Cameron wrote: Less about why they want the default, more about maintaining backward compatibility. In the cxl driver, Ben pointed out something that made me realize we can change `region/bind()` to actually use the new `sysram/bind` path by just adding a one-line `sysram_regionN->online_type = default()`. I can add this detail to the changelog. I can add a cleanup patch beforehand to use the enum, but I don't think this actually enables the compiler to do anything new at the moment? An enum just resolves to an int, and setting `enum thing val = -1` when the enum definition doesn't include -1 doesn't actually fire any errors (at least IIRC - maybe I'm just wrong). Likewise, function(enum) -> function(-1) wouldn't fire a compilation error. It might actually be worth adding `MMOP_NOT_CONFIGURED = -1` so that the cxl-sysram driver can set this explicitly rather than just setting -1 as an implicit version of this - but then why would memory_hotplug.c ever want to expose a NOT_CONFIGURED option lol. So, yeah, the enum looks nicer, but not sure how much it buys us beyond that. ack. Can add some more cleanups early in the series. I think you still have to sanity check this, but maybe the code looks cleaner, so will do. ~Gregory
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Mon, 2 Feb 2026 13:02:10 -0500", "thread_id": "aYDi8bhxFnvMWl11@gourry-fedora-PF4VCD3F.mbox.gz" }
lkml
[PATCHv3] dmaengine: tegra210-adma: use devm_platform_ioremap_resource
Simpler to call the proper function. Signed-off-by: Rosen Penev <rosenp@gmail.com> Reviewed-by: Mikko Perttunen <mperttunen@nvidia.com> --- v3: reword title v2: reword commit message drivers/dma/tegra210-adma.c | 11 +++-------- 1 file changed, 3 insertions(+), 8 deletions(-) diff --git a/drivers/dma/tegra210-adma.c b/drivers/dma/tegra210-adma.c index 215bfef37ec6..5353fbb3d995 100644 --- a/drivers/dma/tegra210-adma.c +++ b/drivers/dma/tegra210-adma.c @@ -1073,14 +1073,9 @@ static int tegra_adma_probe(struct platform_device *pdev) } } else { /* If no 'page' property found, then reg DT binding would be legacy */ - res_base = platform_get_resource(pdev, IORESOURCE_MEM, 0); - if (res_base) { - tdma->base_addr = devm_ioremap_resource(&pdev->dev, res_base); - if (IS_ERR(tdma->base_addr)) - return PTR_ERR(tdma->base_addr); - } else { - return -ENODEV; - } + tdma->base_addr = devm_platform_ioremap_resource(pdev, 0); + if (IS_ERR(tdma->base_addr)) + return PTR_ERR(tdma->base_addr); tdma->ch_base_addr = tdma->base_addr + cdata->ch_base_offset; } -- 2.52.0
On Sun, Feb 01, 2026 at 07:44:19PM -0800, Rosen Penev wrote: subject: function name needs () dmaengine: tegra210-adma: use devm_platform_ioremap_resource() to simplify code Use devm_platform_ioremap_resource() to simplify code. No functional change. Frank
{ "author": "Frank Li <Frank.li@nxp.com>", "date": "Mon, 2 Feb 2026 11:46:30 -0500", "thread_id": "aYDU5na1HxkPRvwR@lizhi-Precision-Tower-5810.mbox.gz" }
lkml
[PATCH -next v2 0/4] cgroup/dmem: bugfixes
From: Chen Ridong <chenridong@huawei.com> This patch series addresses three existing bugs in the cgroup/dmem subsystem and adds a safety check for an exported API to prevent misuse. --- v2: - patch 1/4: fix uninitialized region. - patch 4/4: new patch adding check for exported API. Chen Ridong (4): cgroup/dmem: fix NULL pointer dereference when setting max cgroup/dmem: avoid rcu warning when unregister region cgroup/dmem: avoid pool UAF cgroup/dmem: add argument checks in helpers kernel/cgroup/dmem.c | 85 +++++++++++++++++++++++++++++++++++++++----- 1 file changed, 76 insertions(+), 9 deletions(-) -- 2.34.1
From: Chen Ridong <chenridong@huawei.com> Add WARN_ON_ONCE guards for NULL-sensitive arguments in dmem helpers to avoid NULL dereferences on misused APIs. Valid callers are unaffected. Signed-off-by: Chen Ridong <chenridong@huawei.com> --- kernel/cgroup/dmem.c | 15 +++++++++++++-- 1 file changed, 13 insertions(+), 2 deletions(-) diff --git a/kernel/cgroup/dmem.c b/kernel/cgroup/dmem.c index 1ea6afffa985..aa5bacf5fe45 100644 --- a/kernel/cgroup/dmem.c +++ b/kernel/cgroup/dmem.c @@ -307,6 +307,9 @@ bool dmem_cgroup_state_evict_valuable(struct dmem_cgroup_pool_state *limit_pool, struct page_counter *ctest; u64 used, min, low; + if (WARN_ON_ONCE(!test_pool)) + return false; + /* Can always evict from current pool, despite limits */ if (limit_pool == test_pool) return true; @@ -343,7 +346,8 @@ bool dmem_cgroup_state_evict_valuable(struct dmem_cgroup_pool_state *limit_pool, low = READ_ONCE(ctest->elow); if (used > low) return true; - + if (WARN_ON_ONCE(!ret_hit_low)) + return false; *ret_hit_low = true; return false; } @@ -512,7 +516,7 @@ struct dmem_cgroup_region *dmem_cgroup_register_region(u64 size, const char *fmt char *region_name; va_list ap; - if (!size) + if (WARN_ON_ONCE(!size || !fmt)) return NULL; va_start(ap, fmt); @@ -520,6 +524,10 @@ struct dmem_cgroup_region *dmem_cgroup_register_region(u64 size, const char *fmt va_end(ap); if (!region_name) return ERR_PTR(-ENOMEM); + if (WARN_ON_ONCE(!region_name[0])) { + kfree(region_name); + return ERR_PTR(-EINVAL); + } ret = kzalloc(sizeof(*ret), GFP_KERNEL); if (!ret) { @@ -657,6 +665,9 @@ int dmem_cgroup_try_charge(struct dmem_cgroup_region *region, u64 size, struct page_counter *fail; int ret; + if (WARN_ON_ONCE(!region || !ret_pool)) + return -EINVAL; + *ret_pool = NULL; if (ret_limit_pool) *ret_limit_pool = NULL; -- 2.34.1
{ "author": "Chen Ridong <chenridong@huaweicloud.com>", "date": "Mon, 2 Feb 2026 12:27:19 +0000", "thread_id": "a3f6882172c7a1d1e335259675cc8ff5@kernel.org.mbox.gz" }
lkml
[PATCH -next v2 0/4] cgroup/dmem: bugfixes
From: Chen Ridong <chenridong@huawei.com> A warning was detected: WARNING: suspicious RCU usage 6.19.0-rc7-next-20260129+ #1101 Tainted: G O kernel/cgroup/dmem.c:456 suspicious rcu_dereference_check() usage! other info that might help us debug this: rcu_scheduler_active = 2, debug_locks = 1 1 lock held by insmod/532: #0: ffffffff85e78b38 (dmemcg_lock){+.+.}-dmem_cgroup_unregister_region+ stack backtrace: CPU: 2 UID: 0 PID: 532 Comm: insmod Tainted: 6.19.0-rc7-next- Tainted: [O]=OOT_MODULE Call Trace: <TASK> dump_stack_lvl+0xb0/0xd0 lockdep_rcu_suspicious+0x151/0x1c0 dmem_cgroup_unregister_region+0x1e2/0x380 ? __pfx_dmem_test_init+0x10/0x10 [dmem_uaf] dmem_test_init+0x65/0xff0 [dmem_uaf] do_one_initcall+0xbb/0x3a0 The macro list_for_each_rcu() must be used within an RCU read-side critical section (between rcu_read_lock() and rcu_read_unlock()). Using it outside that context, as seen in dmem_cgroup_unregister_region(), triggers the lockdep warning because the RCU protection is not guaranteed. Replace list_for_each_rcu() with list_for_each_entry_safe(), which is appropriate for traversal under spinlock protection where nodes may be deleted.
Fixes: b168ed458dde ("kernel/cgroup: Add "dmem" memory accounting cgroup") Signed-off-by: Chen Ridong <chenridong@huawei.com> --- kernel/cgroup/dmem.c | 7 ++----- 1 file changed, 2 insertions(+), 5 deletions(-) diff --git a/kernel/cgroup/dmem.c b/kernel/cgroup/dmem.c index 1f0d6caaf2fb..787b334e0f5d 100644 --- a/kernel/cgroup/dmem.c +++ b/kernel/cgroup/dmem.c @@ -423,7 +423,7 @@ static void dmemcg_free_region(struct kref *ref) */ void dmem_cgroup_unregister_region(struct dmem_cgroup_region *region) { - struct list_head *entry; + struct dmem_cgroup_pool_state *pool, *next; if (!region) return; @@ -433,10 +433,7 @@ void dmem_cgroup_unregister_region(struct dmem_cgroup_region *region) /* Remove from global region list */ list_del_rcu(&region->region_node); - list_for_each_rcu(entry, &region->pools) { - struct dmem_cgroup_pool_state *pool = - container_of(entry, typeof(*pool), region_node); - + list_for_each_entry_safe(pool, next, &region->pools, region_node) { list_del_rcu(&pool->css_node); } -- 2.34.1
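The pattern the fix switches to can be sketched in userspace. This is a minimal illustration with a hypothetical hand-rolled list (not kernel <linux/list.h>) of why a "safe" iterator is required when the loop body unlinks the current entry: the successor must be saved before the entry is freed, which is exactly what the extra cursor in list_for_each_entry_safe() provides.

```c
#include <assert.h>
#include <stdlib.h>

struct node {
	struct node *next;
};

static struct node *push(struct node *head)
{
	struct node *n = malloc(sizeof(*n));

	n->next = head;
	return n;
}

/* Frees every node. "n" plays the role of "pool" and "tmp" of "next"
 * in the patch: the successor is saved before free() invalidates the
 * current entry, so the walk never touches freed memory. */
static int drain(struct node **head)
{
	struct node *n, *tmp;
	int freed = 0;

	for (n = *head; n; n = tmp) {
		tmp = n->next;	/* save before free() invalidates n */
		free(n);
		freed++;
	}
	*head = NULL;
	return freed;
}
```

The plain RCU iterator offers no such guarantee, and outside an RCU read-side critical section it is the wrong tool regardless, hence the lockdep complaint.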
{ "author": "Chen Ridong <chenridong@huaweicloud.com>", "date": "Mon, 2 Feb 2026 12:27:17 +0000", "thread_id": "a3f6882172c7a1d1e335259675cc8ff5@kernel.org.mbox.gz" }
lkml
[PATCH -next v2 0/4] cgroup/dmem: bugfixes
From: Chen Ridong <chenridong@huawei.com> An issue was triggered: BUG: kernel NULL pointer dereference, address: 0000000000000000 #PF: supervisor read access in kernel mode #PF: error_code(0x0000) - not-present page PGD 0 P4D 0 Oops: Oops: 0000 [#1] SMP NOPTI CPU: 15 UID: 0 PID: 658 Comm: bash Tainted: 6.19.0-rc6-next-2026012 Tainted: [O]=OOT_MODULE Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), RIP: 0010:strcmp+0x10/0x30 RSP: 0018:ffffc900017f7dc0 EFLAGS: 00000246 RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff888107cd4358 RDX: 0000000019f73907 RSI: ffffffff82cc381a RDI: 0000000000000000 RBP: ffff8881016bef0d R08: 000000006c0e7145 R09: 0000000056c0e714 R10: 0000000000000001 R11: ffff888107cd4358 R12: 0007ffffffffffff R13: ffff888101399200 R14: ffff888100fcb360 R15: 0007ffffffffffff CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000000 CR3: 0000000105c79000 CR4: 00000000000006f0 Call Trace: <TASK> dmemcg_limit_write.constprop.0+0x16d/0x390 ? __pfx_set_resource_max+0x10/0x10 kernfs_fop_write_iter+0x14e/0x200 vfs_write+0x367/0x510 ksys_write+0x66/0xe0 do_syscall_64+0x6b/0x390 entry_SYSCALL_64_after_hwframe+0x76/0x7e RIP: 0033:0x7f42697e1887 It was triggered by setting max without a limit, with a command like: "echo test/region0 > dmem.max". To fix this issue, add a check that options is valid after parsing the region_name. Fixes: b168ed458dde ("kernel/cgroup: Add "dmem" memory accounting cgroup") Signed-off-by: Chen Ridong <chenridong@huawei.com> --- kernel/cgroup/dmem.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/kernel/cgroup/dmem.c b/kernel/cgroup/dmem.c index e12b946278b6..1f0d6caaf2fb 100644 --- a/kernel/cgroup/dmem.c +++ b/kernel/cgroup/dmem.c @@ -700,6 +700,9 @@ static ssize_t dmemcg_limit_write(struct kernfs_open_file *of, if (!region_name[0]) continue; + if (!options || !*options) + return -EINVAL; + rcu_read_lock(); region = dmemcg_get_region_by_name(region_name); rcu_read_unlock(); -- 2.34.1
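The bug class can be shown with a small userspace sketch (helper names here are hypothetical, not the dmem code): "echo test/region0 > dmem.max" yields a region name with no value part, so the options pointer stays NULL and the later strcmp() faults. The fix is to reject the input right after splitting, before any string helper can dereference NULL.

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

/* Split "name value" at the first space; returns NULL when absent. */
static char *split_options(char *buf)
{
	char *sp = strchr(buf, ' ');

	if (!sp)
		return NULL;
	*sp = '\0';
	return sp + 1;
}

/* 0: "max", 1: some other limit string, -EINVAL: missing value. */
static int parse_limit_line(char *buf)
{
	char *options = split_options(buf);

	if (!options || !*options)	/* the guard the patch adds */
		return -EINVAL;

	/* Without the guard, this is strcmp(NULL, ...) == the oops above. */
	return strcmp(options, "max") == 0 ? 0 : 1;
}
```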
{ "author": "Chen Ridong <chenridong@huaweicloud.com>", "date": "Mon, 2 Feb 2026 12:27:16 +0000", "thread_id": "a3f6882172c7a1d1e335259675cc8ff5@kernel.org.mbox.gz" }
lkml
[PATCH -next v2 0/4] cgroup/dmem: bugfixes
From: Chen Ridong <chenridong@huawei.com> A UAF issue was observed: BUG: KASAN: slab-use-after-free in page_counter_uncharge+0x65/0x150 Write of size 8 at addr ffff888106715440 by task insmod/527 CPU: 4 UID: 0 PID: 527 Comm: insmod 6.19.0-rc7-next-20260129+ #11 Tainted: [O]=OOT_MODULE Call Trace: <TASK> dump_stack_lvl+0x82/0xd0 kasan_report+0xca/0x100 kasan_check_range+0x39/0x1c0 page_counter_uncharge+0x65/0x150 dmem_cgroup_uncharge+0x1f/0x260 Allocated by task 527: Freed by task 0: The buggy address belongs to the object at ffff888106715400 which belongs to the cache kmalloc-512 of size 512 The buggy address is located 64 bytes inside of freed 512-byte region [ffff888106715400, ffff888106715600) The buggy address belongs to the physical page: Memory state around the buggy address: ffff888106715300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc ffff888106715380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc ^ ffff888106715480: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ffff888106715500: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb The issue occurs because a pool can still be held by a caller after its associated memory region is unregistered. The current implementation frees the pool even if users still hold references to it (e.g., before uncharge operations complete). This patch adds a reference counter to each pool, ensuring that a pool is only freed when its reference count drops to zero.
Fixes: b168ed458dde ("kernel/cgroup: Add "dmem" memory accounting cgroup") Signed-off-by: Chen Ridong <chenridong@huawei.com> --- kernel/cgroup/dmem.c | 60 ++++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 58 insertions(+), 2 deletions(-) diff --git a/kernel/cgroup/dmem.c b/kernel/cgroup/dmem.c index 787b334e0f5d..1ea6afffa985 100644 --- a/kernel/cgroup/dmem.c +++ b/kernel/cgroup/dmem.c @@ -14,6 +14,7 @@ #include <linux/mutex.h> #include <linux/page_counter.h> #include <linux/parser.h> +#include <linux/refcount.h> #include <linux/rculist.h> #include <linux/slab.h> @@ -71,7 +72,9 @@ struct dmem_cgroup_pool_state { struct rcu_head rcu; struct page_counter cnt; + struct dmem_cgroup_pool_state *parent; + refcount_t ref; bool inited; }; @@ -88,6 +91,9 @@ struct dmem_cgroup_pool_state { static DEFINE_SPINLOCK(dmemcg_lock); static LIST_HEAD(dmem_cgroup_regions); +static void dmemcg_free_region(struct kref *ref); +static void dmemcg_pool_free_rcu(struct rcu_head *rcu); + static inline struct dmemcg_state * css_to_dmemcs(struct cgroup_subsys_state *css) { @@ -104,10 +110,38 @@ static struct dmemcg_state *parent_dmemcs(struct dmemcg_state *cg) return cg->css.parent ? 
css_to_dmemcs(cg->css.parent) : NULL; } +static void dmemcg_pool_get(struct dmem_cgroup_pool_state *pool) +{ + refcount_inc(&pool->ref); +} + +static bool dmemcg_pool_tryget(struct dmem_cgroup_pool_state *pool) +{ + return refcount_inc_not_zero(&pool->ref); +} + +static void dmemcg_pool_put(struct dmem_cgroup_pool_state *pool) +{ + if (!refcount_dec_and_test(&pool->ref)) + return; + + call_rcu(&pool->rcu, dmemcg_pool_free_rcu); +} + +static void dmemcg_pool_free_rcu(struct rcu_head *rcu) +{ + struct dmem_cgroup_pool_state *pool = container_of(rcu, typeof(*pool), rcu); + + if (pool->parent) + dmemcg_pool_put(pool->parent); + kref_put(&pool->region->ref, dmemcg_free_region); + kfree(pool); +} + static void free_cg_pool(struct dmem_cgroup_pool_state *pool) { list_del(&pool->region_node); - kfree(pool); + dmemcg_pool_put(pool); } static void @@ -342,6 +376,12 @@ alloc_pool_single(struct dmemcg_state *dmemcs, struct dmem_cgroup_region *region page_counter_init(&pool->cnt, ppool ? &ppool->cnt : NULL, true); reset_all_resource_limits(pool); + refcount_set(&pool->ref, 1); + kref_get(&region->ref); + if (ppool && !pool->parent) { + pool->parent = ppool; + dmemcg_pool_get(ppool); + } list_add_tail_rcu(&pool->css_node, &dmemcs->pools); list_add_tail(&pool->region_node, &region->pools); @@ -389,6 +429,10 @@ get_cg_pool_locked(struct dmemcg_state *dmemcs, struct dmem_cgroup_region *regio /* Fix up parent links, mark as inited. 
*/ pool->cnt.parent = &ppool->cnt; + if (ppool && !pool->parent) { + pool->parent = ppool; + dmemcg_pool_get(ppool); + } pool->inited = true; pool = ppool; @@ -435,6 +479,8 @@ void dmem_cgroup_unregister_region(struct dmem_cgroup_region *region) list_for_each_entry_safe(pool, next, &region->pools, region_node) { list_del_rcu(&pool->css_node); + list_del(&pool->region_node); + dmemcg_pool_put(pool); } /* @@ -515,8 +561,10 @@ static struct dmem_cgroup_region *dmemcg_get_region_by_name(const char *name) */ void dmem_cgroup_pool_state_put(struct dmem_cgroup_pool_state *pool) { - if (pool) + if (pool) { css_put(&pool->cs->css); + dmemcg_pool_put(pool); + } } EXPORT_SYMBOL_GPL(dmem_cgroup_pool_state_put); @@ -530,6 +578,8 @@ get_cg_pool_unlocked(struct dmemcg_state *cg, struct dmem_cgroup_region *region) pool = find_cg_pool_locked(cg, region); if (pool && !READ_ONCE(pool->inited)) pool = NULL; + if (pool && !dmemcg_pool_tryget(pool)) + pool = NULL; rcu_read_unlock(); while (!pool) { @@ -538,6 +588,8 @@ get_cg_pool_unlocked(struct dmemcg_state *cg, struct dmem_cgroup_region *region) pool = get_cg_pool_locked(cg, region, &allocpool); else pool = ERR_PTR(-ENODEV); + if (!IS_ERR(pool)) + dmemcg_pool_get(pool); spin_unlock(&dmemcg_lock); if (pool == ERR_PTR(-ENOMEM)) { @@ -573,6 +625,7 @@ void dmem_cgroup_uncharge(struct dmem_cgroup_pool_state *pool, u64 size) page_counter_uncharge(&pool->cnt, size); css_put(&pool->cs->css); + dmemcg_pool_put(pool); } EXPORT_SYMBOL_GPL(dmem_cgroup_uncharge); @@ -624,7 +677,9 @@ int dmem_cgroup_try_charge(struct dmem_cgroup_region *region, u64 size, if (ret_limit_pool) { *ret_limit_pool = container_of(fail, struct dmem_cgroup_pool_state, cnt); css_get(&(*ret_limit_pool)->cs->css); + dmemcg_pool_get(*ret_limit_pool); } + dmemcg_pool_put(pool); ret = -EAGAIN; goto err; } @@ -719,6 +774,7 @@ static ssize_t dmemcg_limit_write(struct kernfs_open_file *of, /* And commit */ apply(pool, new_limit); + dmemcg_pool_put(pool); out_put: 
kref_put(&region->ref, dmemcg_free_region); -- 2.34.1
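The lifetime scheme the patch introduces can be sketched in userspace. The names below mirror the patch's dmemcg_pool_get/tryget/put, but this is an illustrative stand-alone version: the "freed" flag stands in for the call_rcu()+kfree() teardown so the object's lifetime is observable, and C11 atomics stand in for refcount_t.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

struct pool {
	atomic_int ref;
	bool freed;	/* stand-in for the actual kfree() */
};

static void pool_get(struct pool *p)
{
	atomic_fetch_add(&p->ref, 1);
}

/* Like refcount_inc_not_zero(): refuses to resurrect a dying pool. */
static bool pool_tryget(struct pool *p)
{
	int old = atomic_load(&p->ref);

	while (old != 0)
		if (atomic_compare_exchange_weak(&p->ref, &old, old + 1))
			return true;
	return false;
}

static void pool_put(struct pool *p)
{
	if (atomic_fetch_sub(&p->ref, 1) == 1)
		p->freed = true;	/* last reference: safe to free now */
}
```

The point of the scheme is the ordering the KASAN splat shows going wrong today: a charge path holding its own reference keeps the pool alive across region unregistration, so the later uncharge writes into live memory rather than a freed kmalloc-512 object.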
{ "author": "Chen Ridong <chenridong@huaweicloud.com>", "date": "Mon, 2 Feb 2026 12:27:18 +0000", "thread_id": "a3f6882172c7a1d1e335259675cc8ff5@kernel.org.mbox.gz" }
lkml
[PATCH -next v2 0/4] cgroup/dmem: bugfixes
Applied 1-3 to cgroup/for-6.19-fixes w/ stable tags added. I dropped 4/4 as we don't want this kind of blanket input validation unless there are specific reasons to do so. Thanks. -- tejun
{ "author": "Tejun Heo <tj@kernel.org>", "date": "Mon, 02 Feb 2026 06:17:40 -1000", "thread_id": "a3f6882172c7a1d1e335259675cc8ff5@kernel.org.mbox.gz" }
lkml
[PATCH v3] docs: ja_JP: process: translate 'Obtain a current source tree'
Translate the "Obtain a current source tree" section in Documentation/translations/ja_JP/process/submitting-patches.rst. Signed-off-by: Akiyoshi Kurita <weibu@redadmin.org> --- .../ja_JP/process/submitting-patches.rst | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/Documentation/translations/ja_JP/process/submitting-patches.rst b/Documentation/translations/ja_JP/process/submitting-patches.rst index 2ff887c86b2a..d61583399ef4 100644 --- a/Documentation/translations/ja_JP/process/submitting-patches.rst +++ b/Documentation/translations/ja_JP/process/submitting-patches.rst @@ -36,3 +36,21 @@ Documentation/devicetree/bindings/submitting-patches.rst を読んでくださ いくつかのサブシステムやメンテナツリーには、各々のワークフローや 期待事項に関する追加情報があります。次を参照してください: :ref:`Documentation/process/maintainer-handbooks.rst <maintainer_handbooks_main>`. + +現在のソースツリーを入手する +---------------------------- + +もし手元に最新のカーネルソースのリポジトリがなければ、``git`` を使って取得して +ください。まずは mainline のリポジトリから始めるのがよいでしょう。これは +次のようにして取得できます:: + + git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git + +ただし、直接 mainline のツリーを対象に作業すればよいとは限らないことに注意 +してください。多くのサブシステムのメンテナはそれぞれ独自のツリーを運用しており、 +そのツリーに対して作成されたパッチを見たいと考えています。該当サブシステムの +ツリーは MAINTAINERS ファイル内の **T:** エントリを参照して見つけてください。 +そこに掲載されていない場合は、メンテナに問い合わせてください。 + +変更内容を説明する +------------------ base-commit: a592a36e49372172d7c7551ec19ed18184c935e1 -- 2.47.3
On Tue, 27 Jan 2026 22:01:55 +0900, Akiyoshi Kurita wrote: I failed to catch this in v2, but I think you wanted to say: Translate the "Obtain a current source tree" section in Documentation/process/submitting-patches.rst. , no? That said, I don't think this nit-pick is worth a respin. So, Reviewed-by: Akira Yokosawa <akiyks@gmail.com> Thanks, Akira
{ "author": "Akira Yokosawa <akiyks@gmail.com>", "date": "Wed, 28 Jan 2026 16:43:10 +0900", "thread_id": "87fr7jrsna.fsf@trenco.lwn.net.mbox.gz" }
lkml
[PATCH v3] docs: ja_JP: process: translate 'Obtain a current source tree'
Translate the "Obtain a current source tree" section in Documentation/translations/ja_JP/process/submitting-patches.rst. Signed-off-by: Akiyoshi Kurita <weibu@redadmin.org> --- .../ja_JP/process/submitting-patches.rst | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/Documentation/translations/ja_JP/process/submitting-patches.rst b/Documentation/translations/ja_JP/process/submitting-patches.rst index 2ff887c86b2a..d61583399ef4 100644 --- a/Documentation/translations/ja_JP/process/submitting-patches.rst +++ b/Documentation/translations/ja_JP/process/submitting-patches.rst @@ -36,3 +36,21 @@ Documentation/devicetree/bindings/submitting-patches.rst を読んでくださ いくつかのサブシステムやメンテナツリーには、各々のワークフローや 期待事項に関する追加情報があります。次を参照してください: :ref:`Documentation/process/maintainer-handbooks.rst <maintainer_handbooks_main>`. + +現在のソースツリーを入手する +---------------------------- + +もし手元に最新のカーネルソースのリポジトリがなければ、``git`` を使って取得して +ください。まずは mainline のリポジトリから始めるのがよいでしょう。これは +次のようにして取得できます:: + + git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git + +ただし、直接 mainline のツリーを対象に作業すればよいとは限らないことに注意 +してください。多くのサブシステムのメンテナはそれぞれ独自のツリーを運用しており、 +そのツリーに対して作成されたパッチを見たいと考えています。該当サブシステムの +ツリーは MAINTAINERS ファイル内の **T:** エントリを参照して見つけてください。 +そこに掲載されていない場合は、メンテナに問い合わせてください。 + +変更内容を説明する +------------------ base-commit: a592a36e49372172d7c7551ec19ed18184c935e1 -- 2.47.3
Akiyoshi Kurita <weibu@redadmin.org> writes: Applied, thanks. And Akira, thanks for reviewing these - I do appreciate it. jon
{ "author": "Jonathan Corbet <corbet@lwn.net>", "date": "Mon, 02 Feb 2026 09:56:25 -0700", "thread_id": "87fr7jrsna.fsf@trenco.lwn.net.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds the ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite thrashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on memory. But this approach creates additional infrastructure churn: a userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. 
A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms of what data is available to it. Implementing a daemon which can work reliably under heavy memory pressure is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. 
(by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many small-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path for future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() 
sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
Move struct bpf_struct_ops_link's definition into bpf.h, where other custom bpf link definitions are. It's necessary to access its members from outside of the generic bpf_struct_ops implementation, which will be done by the following patches in the series. Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev> --- include/linux/bpf.h | 6 ++++++ kernel/bpf/bpf_struct_ops.c | 6 ------ 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 4427c6e98331..899dd911dc82 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -1891,6 +1891,12 @@ struct bpf_raw_tp_link { u64 cookie; }; +struct bpf_struct_ops_link { + struct bpf_link link; + struct bpf_map __rcu *map; + wait_queue_head_t wait_hup; +}; + struct bpf_link_primer { struct bpf_link *link; struct file *file; diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c index c43346cb3d76..de01cf3025b3 100644 --- a/kernel/bpf/bpf_struct_ops.c +++ b/kernel/bpf/bpf_struct_ops.c @@ -55,12 +55,6 @@ struct bpf_struct_ops_map { struct bpf_struct_ops_value kvalue; }; -struct bpf_struct_ops_link { - struct bpf_link link; - struct bpf_map __rcu *map; - wait_queue_head_t wait_hup; -}; - static DEFINE_MUTEX(update_mutex); #define VALUE_PREFIX "bpf_struct_ops_" -- 2.52.0
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:04 -0800", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
Introduce the ability to attach bpf struct_ops'es to cgroups. A user passes the BPF_F_CGROUP_FD flag and specifies the target cgroup fd while creating a struct_ops link. As a result, the bpf struct_ops link will be created and attached to a cgroup. The cgroup.bpf structure maintains a list of attached struct ops links. If the cgroup is getting deleted, attached struct ops'es are getting auto-detached and the userspace program gets a notification. This change doesn't answer the question of how bpf programs belonging to these struct ops'es will be executed. It will be done individually for every bpf struct ops which supports this. Please note that unlike "normal" bpf programs, struct ops'es are not propagated to cgroup sub-trees. Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev> --- include/linux/bpf-cgroup-defs.h | 3 ++ include/linux/bpf-cgroup.h | 16 +++++++++ include/linux/bpf.h | 3 ++ include/uapi/linux/bpf.h | 3 ++ kernel/bpf/bpf_struct_ops.c | 59 ++++++++++++++++++++++++++++++--- kernel/bpf/cgroup.c | 46 +++++++++++++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + 7 files changed, 127 insertions(+), 4 deletions(-) diff --git a/include/linux/bpf-cgroup-defs.h b/include/linux/bpf-cgroup-defs.h index c9e6b26abab6..6c5e37190dad 100644 --- a/include/linux/bpf-cgroup-defs.h +++ b/include/linux/bpf-cgroup-defs.h @@ -71,6 +71,9 @@ struct cgroup_bpf { /* temp storage for effective prog array used by prog_attach/detach */ struct bpf_prog_array *inactive; + /* list of bpf struct ops links */ + struct list_head struct_ops_links; + /* reference counter used to detach bpf programs after cgroup removal */ struct percpu_ref refcnt; diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h index 2f535331f926..a6c327257006 100644 --- a/include/linux/bpf-cgroup.h +++ b/include/linux/bpf-cgroup.h @@ -423,6 +423,11 @@ int cgroup_bpf_link_attach(const union bpf_attr *attr, struct bpf_prog *prog); int cgroup_bpf_prog_query(const union bpf_attr *attr, union bpf_attr 
__user *uattr); +int cgroup_bpf_attach_struct_ops(struct cgroup *cgrp, + struct bpf_struct_ops_link *link); +void cgroup_bpf_detach_struct_ops(struct cgroup *cgrp, + struct bpf_struct_ops_link *link); + const struct bpf_func_proto * cgroup_common_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog); #else @@ -451,6 +456,17 @@ static inline int cgroup_bpf_link_attach(const union bpf_attr *attr, return -EINVAL; } +static inline int cgroup_bpf_attach_struct_ops(struct cgroup *cgrp, + struct bpf_struct_ops_link *link) +{ + return -EINVAL; +} + +static inline void cgroup_bpf_detach_struct_ops(struct cgroup *cgrp, + struct bpf_struct_ops_link *link) +{ +} + static inline int cgroup_bpf_prog_query(const union bpf_attr *attr, union bpf_attr __user *uattr) { diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 899dd911dc82..391888eb257c 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -1894,6 +1894,9 @@ struct bpf_raw_tp_link { struct bpf_struct_ops_link { struct bpf_link link; struct bpf_map __rcu *map; + struct cgroup *cgroup; + bool cgroup_removed; + struct list_head list; wait_queue_head_t wait_hup; }; diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 44e7dbc278e3..28544e8af1cd 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -1237,6 +1237,7 @@ enum bpf_perf_event_type { #define BPF_F_AFTER (1U << 4) #define BPF_F_ID (1U << 5) #define BPF_F_PREORDER (1U << 6) +#define BPF_F_CGROUP_FD (1U << 7) #define BPF_F_LINK BPF_F_LINK /* 1 << 13 */ /* If BPF_F_STRICT_ALIGNMENT is used in BPF_PROG_LOAD command, the @@ -6775,6 +6776,8 @@ struct bpf_link_info { } xdp; struct { __u32 map_id; + __u32 :32; + __u64 cgroup_id; } struct_ops; struct { __u32 pf; diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c index de01cf3025b3..2e361e22cfa0 100644 --- a/kernel/bpf/bpf_struct_ops.c +++ b/kernel/bpf/bpf_struct_ops.c @@ -13,6 +13,8 @@ #include <linux/btf_ids.h> #include <linux/rcupdate_wait.h> 
#include <linux/poll.h> +#include <linux/bpf-cgroup.h> +#include <linux/cgroup.h> struct bpf_struct_ops_value { struct bpf_struct_ops_common_value common; @@ -1220,6 +1222,10 @@ static void bpf_struct_ops_map_link_dealloc(struct bpf_link *link) st_map->st_ops_desc->st_ops->unreg(&st_map->kvalue.data, link); bpf_map_put(&st_map->map); } + + if (st_link->cgroup) + cgroup_bpf_detach_struct_ops(st_link->cgroup, st_link); + kfree(st_link); } @@ -1228,6 +1234,7 @@ static void bpf_struct_ops_map_link_show_fdinfo(const struct bpf_link *link, { struct bpf_struct_ops_link *st_link; struct bpf_map *map; + u64 cgrp_id = 0; st_link = container_of(link, struct bpf_struct_ops_link, link); rcu_read_lock(); @@ -1235,6 +1242,14 @@ static void bpf_struct_ops_map_link_show_fdinfo(const struct bpf_link *link, if (map) seq_printf(seq, "map_id:\t%d\n", map->id); rcu_read_unlock(); + + cgroup_lock(); + if (st_link->cgroup) + cgrp_id = cgroup_id(st_link->cgroup); + cgroup_unlock(); + + if (cgrp_id) + seq_printf(seq, "cgroup_id:\t%llu\n", cgrp_id); } static int bpf_struct_ops_map_link_fill_link_info(const struct bpf_link *link, @@ -1242,6 +1257,7 @@ static int bpf_struct_ops_map_link_fill_link_info(const struct bpf_link *link, { struct bpf_struct_ops_link *st_link; struct bpf_map *map; + u64 cgrp_id = 0; st_link = container_of(link, struct bpf_struct_ops_link, link); rcu_read_lock(); @@ -1249,6 +1265,13 @@ static int bpf_struct_ops_map_link_fill_link_info(const struct bpf_link *link, if (map) info->struct_ops.map_id = map->id; rcu_read_unlock(); + + cgroup_lock(); + if (st_link->cgroup) + cgrp_id = cgroup_id(st_link->cgroup); + cgroup_unlock(); + + info->struct_ops.cgroup_id = cgrp_id; return 0; } @@ -1327,6 +1350,9 @@ static int bpf_struct_ops_map_link_detach(struct bpf_link *link) mutex_unlock(&update_mutex); + if (st_link->cgroup) + cgroup_bpf_detach_struct_ops(st_link->cgroup, st_link); + wake_up_interruptible_poll(&st_link->wait_hup, EPOLLHUP); return 0; @@ -1339,6 +1365,9 @@ static 
__poll_t bpf_struct_ops_map_link_poll(struct file *file, poll_wait(file, &st_link->wait_hup, pts); + if (st_link->cgroup_removed) + return EPOLLHUP; + return rcu_access_pointer(st_link->map) ? 0 : EPOLLHUP; } @@ -1357,8 +1386,12 @@ int bpf_struct_ops_link_create(union bpf_attr *attr) struct bpf_link_primer link_primer; struct bpf_struct_ops_map *st_map; struct bpf_map *map; + struct cgroup *cgrp; int err; + if (attr->link_create.flags & ~BPF_F_CGROUP_FD) + return -EINVAL; + map = bpf_map_get(attr->link_create.map_fd); if (IS_ERR(map)) return PTR_ERR(map); @@ -1378,11 +1411,26 @@ int bpf_struct_ops_link_create(union bpf_attr *attr) bpf_link_init(&link->link, BPF_LINK_TYPE_STRUCT_OPS, &bpf_struct_ops_map_lops, NULL, attr->link_create.attach_type); + init_waitqueue_head(&link->wait_hup); + + if (attr->link_create.flags & BPF_F_CGROUP_FD) { + cgrp = cgroup_get_from_fd(attr->link_create.target_fd); + if (IS_ERR(cgrp)) { + err = PTR_ERR(cgrp); + goto err_out; + } + link->cgroup = cgrp; + err = cgroup_bpf_attach_struct_ops(cgrp, link); + if (err) { + cgroup_put(cgrp); + link->cgroup = NULL; + goto err_out; + } + } + err = bpf_link_prime(&link->link, &link_primer); if (err) - goto err_out; - - init_waitqueue_head(&link->wait_hup); + goto err_put_cgroup; /* Hold the update_mutex such that the subsystem cannot * do link->ops->detach() before the link is fully initialized. 
@@ -1393,13 +1441,16 @@ int bpf_struct_ops_link_create(union bpf_attr *attr) mutex_unlock(&update_mutex); bpf_link_cleanup(&link_primer); link = NULL; - goto err_out; + goto err_put_cgroup; } RCU_INIT_POINTER(link->map, map); mutex_unlock(&update_mutex); return bpf_link_settle(&link_primer); +err_put_cgroup: + if (link && link->cgroup) + cgroup_bpf_detach_struct_ops(link->cgroup, link); err_out: bpf_map_put(map); kfree(link); diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c index 69988af44b37..7b1903be6f69 100644 --- a/kernel/bpf/cgroup.c +++ b/kernel/bpf/cgroup.c @@ -16,6 +16,7 @@ #include <linux/bpf-cgroup.h> #include <linux/bpf_lsm.h> #include <linux/bpf_verifier.h> +#include <linux/poll.h> #include <net/sock.h> #include <net/bpf_sk_storage.h> @@ -307,12 +308,23 @@ static void cgroup_bpf_release(struct work_struct *work) bpf.release_work); struct bpf_prog_array *old_array; struct list_head *storages = &cgrp->bpf.storages; + struct bpf_struct_ops_link *st_link, *st_tmp; struct bpf_cgroup_storage *storage, *stmp; + LIST_HEAD(st_links); unsigned int atype; cgroup_lock(); + list_splice_init(&cgrp->bpf.struct_ops_links, &st_links); + list_for_each_entry_safe(st_link, st_tmp, &st_links, list) { + st_link->cgroup = NULL; + st_link->cgroup_removed = true; + cgroup_put(cgrp); + if (IS_ERR(bpf_link_inc_not_zero(&st_link->link))) + list_del(&st_link->list); + } + for (atype = 0; atype < ARRAY_SIZE(cgrp->bpf.progs); atype++) { struct hlist_head *progs = &cgrp->bpf.progs[atype]; struct bpf_prog_list *pl; @@ -346,6 +358,11 @@ static void cgroup_bpf_release(struct work_struct *work) cgroup_unlock(); + list_for_each_entry_safe(st_link, st_tmp, &st_links, list) { + st_link->link.ops->detach(&st_link->link); + bpf_link_put(&st_link->link); + } + for (p = cgroup_parent(cgrp); p; p = cgroup_parent(p)) cgroup_bpf_put(p); @@ -525,6 +542,7 @@ static int cgroup_bpf_inherit(struct cgroup *cgrp) INIT_HLIST_HEAD(&cgrp->bpf.progs[i]); INIT_LIST_HEAD(&cgrp->bpf.storages); + 
INIT_LIST_HEAD(&cgrp->bpf.struct_ops_links); for (i = 0; i < NR; i++) if (compute_effective_progs(cgrp, i, &arrays[i])) @@ -2759,3 +2777,31 @@ cgroup_common_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) return NULL; } } + +int cgroup_bpf_attach_struct_ops(struct cgroup *cgrp, + struct bpf_struct_ops_link *link) +{ + int ret = 0; + + cgroup_lock(); + if (percpu_ref_is_zero(&cgrp->bpf.refcnt)) { + ret = -EBUSY; + goto out; + } + list_add_tail(&link->list, &cgrp->bpf.struct_ops_links); +out: + cgroup_unlock(); + return ret; +} + +void cgroup_bpf_detach_struct_ops(struct cgroup *cgrp, + struct bpf_struct_ops_link *link) +{ + cgroup_lock(); + if (link->cgroup == cgrp) { + list_del(&link->list); + link->cgroup = NULL; + cgroup_put(cgrp); + } + cgroup_unlock(); +} diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index 3ca7d76e05f0..d5492e60744a 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -1237,6 +1237,7 @@ enum bpf_perf_event_type { #define BPF_F_AFTER (1U << 4) #define BPF_F_ID (1U << 5) #define BPF_F_PREORDER (1U << 6) +#define BPF_F_CGROUP_FD (1U << 7) #define BPF_F_LINK BPF_F_LINK /* 1 << 13 */ /* If BPF_F_STRICT_ALIGNMENT is used in BPF_PROG_LOAD command, the -- 2.52.0
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:05 -0800", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
bpf_map__attach_struct_ops() returns -EINVAL instead of -ENOMEM
on the memory allocation failure. Fix it.

Fixes: 590a00888250 ("bpf: libbpf: Add STRUCT_OPS support")
Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
---
 tools/lib/bpf/libbpf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 0c8bf0b5cce4..46d2762f5993 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -13480,7 +13480,7 @@ struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map)
 
 	link = calloc(1, sizeof(*link));
 	if (!link)
-		return libbpf_err_ptr(-EINVAL);
+		return libbpf_err_ptr(-ENOMEM);
 
 	/* kern_vdata should be prepared during the loading phase. */
 	err = bpf_map_update_elem(map->fd, &zero, map->st_ops->kern_vdata, 0);
-- 
2.52.0
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:06 -0800", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
Introduce bpf_map__attach_struct_ops_opts(), an extended version of
bpf_map__attach_struct_ops() which takes an additional struct
bpf_struct_ops_opts argument. This allows passing a target_fd and the
BPF_F_CGROUP_FD flag, and thus attaching the struct ops to a cgroup.

Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
---
 tools/lib/bpf/libbpf.c   | 20 +++++++++++++++++---
 tools/lib/bpf/libbpf.h   | 14 ++++++++++++++
 tools/lib/bpf/libbpf.map |  1 +
 3 files changed, 32 insertions(+), 3 deletions(-)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 46d2762f5993..9ba67089bf9d 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -13462,11 +13462,18 @@ static int bpf_link__detach_struct_ops(struct bpf_link *link)
 	return close(link->fd);
 }
 
-struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map)
+struct bpf_link *bpf_map__attach_struct_ops_opts(const struct bpf_map *map,
+						 const struct bpf_struct_ops_opts *opts)
 {
+	DECLARE_LIBBPF_OPTS(bpf_link_create_opts, link_opts);
 	struct bpf_link_struct_ops *link;
+	int err, fd, target_fd;
 	__u32 zero = 0;
-	int err, fd;
+
+	if (!OPTS_VALID(opts, bpf_struct_ops_opts)) {
+		pr_warn("map '%s': invalid opts\n", map->name);
+		return libbpf_err_ptr(-EINVAL);
+	}
 
 	if (!bpf_map__is_struct_ops(map)) {
 		pr_warn("map '%s': can't attach non-struct_ops map\n", map->name);
@@ -13503,7 +13510,9 @@ struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map)
 		return &link->link;
 	}
 
-	fd = bpf_link_create(map->fd, 0, BPF_STRUCT_OPS, NULL);
+	link_opts.flags = OPTS_GET(opts, flags, 0);
+	target_fd = OPTS_GET(opts, target_fd, 0);
+	fd = bpf_link_create(map->fd, target_fd, BPF_STRUCT_OPS, &link_opts);
 	if (fd < 0) {
 		free(link);
 		return libbpf_err_ptr(fd);
@@ -13515,6 +13524,11 @@ struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map)
 	return &link->link;
 }
 
+struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map)
+{
+	return bpf_map__attach_struct_ops_opts(map, NULL);
+}
+
 /*
  * Swap the back struct_ops of a link with a new struct_ops map.
  */
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index dfc37a615578..2c28cf80e7fe 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -920,6 +920,20 @@ bpf_program__attach_cgroup_opts(const struct bpf_program *prog, int cgroup_fd,
 struct bpf_map;
 
 LIBBPF_API struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map);
+
+struct bpf_struct_ops_opts {
+	/* size of this struct, for forward/backward compatibility */
+	size_t sz;
+	__u32 flags;
+	__u32 target_fd;
+	__u64 expected_revision;
+	size_t :0;
+};
+#define bpf_struct_ops_opts__last_field expected_revision
+
+LIBBPF_API struct bpf_link *
+bpf_map__attach_struct_ops_opts(const struct bpf_map *map,
+				const struct bpf_struct_ops_opts *opts);
 LIBBPF_API int bpf_link__update_map(struct bpf_link *link, const struct bpf_map *map);
 
 struct bpf_iter_attach_opts {
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index d18fbcea7578..4779190c97b6 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -454,4 +454,5 @@ LIBBPF_1.7.0 {
 	bpf_prog_assoc_struct_ops;
 	bpf_program__assoc_struct_ops;
 	btf__permute;
+	bpf_map__attach_struct_ops_opts;
 } LIBBPF_1.6.0;
-- 
2.52.0
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:07 -0800", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
Struct oom_control is used to describe the OOM context. Its memcg
field defines the scope of the OOM: it's NULL for global OOMs and a
valid memcg pointer for memcg-scoped OOMs.

Teach the bpf verifier to recognize it as a trusted or NULL pointer.
This provides a bpf OOM handler with a trusted memcg pointer, which
for example is required for iterating the memcg's subtree.

Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 kernel/bpf/verifier.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index c2f2650db9fd..cca36edb460d 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -7242,6 +7242,10 @@ BTF_TYPE_SAFE_TRUSTED_OR_NULL(struct vm_area_struct) {
 	struct file *vm_file;
 };
 
+BTF_TYPE_SAFE_TRUSTED_OR_NULL(struct oom_control) {
+	struct mem_cgroup *memcg;
+};
+
 static bool type_is_rcu(struct bpf_verifier_env *env,
 			struct bpf_reg_state *reg,
 			const char *field_name, u32 btf_id)
@@ -7284,6 +7288,7 @@ static bool type_is_trusted_or_null(struct bpf_verifier_env *env,
 	BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED_OR_NULL(struct socket));
 	BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED_OR_NULL(struct dentry));
 	BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED_OR_NULL(struct vm_area_struct));
+	BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED_OR_NULL(struct oom_control));
 
 	return btf_nested_type_is_trusted(&env->log, reg, field_name, btf_id,
 					  "__safe_trusted_or_null");
-- 
2.52.0
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:08 -0800", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
mem_cgroup_get_from_ino() can be reused by the BPF OOM implementation,
but currently depends on CONFIG_SHRINKER_DEBUG. Remove this
dependency.

Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
Acked-by: Michal Hocko <mhocko@suse.com>
---
 include/linux/memcontrol.h | 4 ++--
 mm/memcontrol.c            | 2 --
 2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 229ac9835adb..f3b8c71870d8 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -833,9 +833,9 @@ static inline unsigned long mem_cgroup_ino(struct mem_cgroup *memcg)
 {
 	return memcg ? cgroup_ino(memcg->css.cgroup) : 0;
 }
+#endif
 
 struct mem_cgroup *mem_cgroup_get_from_ino(unsigned long ino);
-#endif
 
 static inline struct mem_cgroup *mem_cgroup_from_seq(struct seq_file *m)
 {
@@ -1298,12 +1298,12 @@ static inline unsigned long mem_cgroup_ino(struct mem_cgroup *memcg)
 {
 	return 0;
 }
+#endif
 
 static inline struct mem_cgroup *mem_cgroup_get_from_ino(unsigned long ino)
 {
 	return NULL;
 }
-#endif
 
 static inline struct mem_cgroup *mem_cgroup_from_seq(struct seq_file *m)
 {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 3808845bc8cc..1f74fce27677 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3658,7 +3658,6 @@ struct mem_cgroup *mem_cgroup_from_id(unsigned short id)
 	return xa_load(&mem_cgroup_ids, id);
 }
 
-#ifdef CONFIG_SHRINKER_DEBUG
 struct mem_cgroup *mem_cgroup_get_from_ino(unsigned long ino)
 {
 	struct cgroup *cgrp;
@@ -3679,7 +3678,6 @@ struct mem_cgroup *mem_cgroup_get_from_ino(unsigned long ino)
 	return memcg;
 }
-#endif
 
 static void free_mem_cgroup_per_node_info(struct mem_cgroup_per_node *pn)
 {
-- 
2.52.0
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:09 -0800", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. 
A completely different pipeline needs to be built to monitor both types of OOM events and collect the associated logs. A userspace daemon is more restricted in terms of what data is available to it. Implementing a daemon which can work reliably under heavy memory pressure is also tricky.

This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5].

[1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/
[2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/
[3]: https://github.com/facebookincubator/oomd
[4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html
[5]: https://lkml.org/lkml/2025/10/15/1554

---
v3:
1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.)
2) Updated bpf_oom struct ops:
   - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.)
   - removed handle_cgroup_offline callback.
3) Updated kfuncs:
   - bpf_out_of_memory() dropped constraint_text argument (by Michal H.)
   - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN.
4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.)

v2:
1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov)
2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others)
3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn)
4) Many small-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi)
5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi)
6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi)
7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom

v1:
1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path for future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski)
2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event.
3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko)
4) Added bpf_task_is_oom_victim (suggested by Michal Hocko)
5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan)

RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/

JP Kobryn (1):
  bpf: selftests: add config for psi

Roman Gushchin (16):
  bpf: move bpf_struct_ops_link into bpf.h
  bpf: allow attaching struct_ops to cgroups
  libbpf: fix return value on memory allocation failure
  libbpf: introduce bpf_map__attach_struct_ops_opts()
  bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL
  mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG
  mm: introduce BPF OOM struct ops
  mm: introduce bpf_oom_kill_process() bpf kfunc
  mm: introduce bpf_out_of_memory() BPF kfunc
  mm: introduce bpf_task_is_oom_victim() kfunc
  bpf: selftests: introduce read_cgroup_file() helper
  bpf: selftests: BPF OOM struct ops test
  sched: psi: add a trace point to psi_avgs_work()
  sched: psi: add cgroup_id field to psi_group structure
  bpf: allow calling bpf_out_of_memory() from a PSI tracepoint
  bpf: selftests: PSI struct ops test

 MAINTAINERS                                   |   2 +
 include/linux/bpf-cgroup-defs.h               |   6 +
 include/linux/bpf-cgroup.h                    |  16 ++
 include/linux/bpf.h                           |  10 +
 include/linux/bpf_oom.h                       |  46 ++++
 include/linux/memcontrol.h                    |   4 +-
 include/linux/oom.h                           |  13 +
 include/linux/psi_types.h                     |   4 +
 include/trace/events/psi.h                    |  27 ++
 include/uapi/linux/bpf.h                      |   3 +
 kernel/bpf/bpf_struct_ops.c                   |  77 +++++-
 kernel/bpf/cgroup.c                           |  46 ++++
 kernel/bpf/verifier.c                         |   5 +
 kernel/sched/psi.c                            |   7 +
 mm/Makefile                                   |   2 +-
 mm/bpf_oom.c                                  | 192 +++++++++++++
 mm/memcontrol.c                               |   2 -
 mm/oom_kill.c                                 | 202 ++++++++++++++
 tools/include/uapi/linux/bpf.h                |   1 +
 tools/lib/bpf/libbpf.c                        |  22 +-
 tools/lib/bpf/libbpf.h                        |  14 +
 tools/lib/bpf/libbpf.map                      |   1 +
 tools/testing/selftests/bpf/cgroup_helpers.c  |  45 +++
 tools/testing/selftests/bpf/cgroup_helpers.h  |   3 +
 tools/testing/selftests/bpf/config            |   1 +
 .../selftests/bpf/prog_tests/test_oom.c       | 256 ++++++++++++++++++
 .../selftests/bpf/prog_tests/test_psi.c       | 225 +++++++++++++++
 tools/testing/selftests/bpf/progs/test_oom.c  | 111 ++++++++
 tools/testing/selftests/bpf/progs/test_psi.c  |  90 ++++++
 29 files changed, 1412 insertions(+), 21 deletions(-)
 create mode 100644 include/linux/bpf_oom.h
 create mode 100644 include/trace/events/psi.h
 create mode 100644 mm/bpf_oom.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c

-- 
2.52.0
Introduce a bpf struct ops for implementing custom OOM handling policies.

It's possible to load one bpf_oom_ops for the system and one bpf_oom_ops for every memory cgroup. In case of a memcg OOM, the cgroup tree is traversed from the OOM'ing memcg up to the root and the corresponding BPF OOM handlers are executed until some memory is freed. If no memory is freed, the kernel OOM killer is invoked.

The struct ops provides the bpf_handle_out_of_memory() callback, which is expected to return 1 if it was able to free some memory and 0 otherwise. If 1 is returned, the kernel also checks the bpf_memory_freed field of the oom_control structure, which is expected to be set by kfuncs suitable for releasing memory (introduced later in this patch series). If both are set, the OOM is considered handled; otherwise the next OOM handler in the chain is executed: e.g. the BPF OOM attached to the parent cgroup or the kernel OOM killer.

The bpf_handle_out_of_memory() callback program is sleepable to allow using iterators, e.g. cgroup iterators. The callback receives struct oom_control as an argument, so it can determine the scope of the OOM event: whether this is a memcg-wide or a system-wide OOM. It also receives bpf_struct_ops_link as the second argument, so it can detect the cgroup level at which this specific instance is attached.

The bpf_handle_out_of_memory() callback is executed just before the kernel victim task selection algorithm, so all heuristics and sysctls like panic on oom and sysctl_oom_kill_allocating_task are respected.

The struct ops has a name field, which allows defining a custom name for the implemented policy. It's printed in the OOM report in the oom_handler=<name> format, but only if a bpf handler was invoked.
Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev> --- MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 3 + include/linux/bpf.h | 1 + include/linux/bpf_oom.h | 46 ++++++++ include/linux/oom.h | 8 ++ kernel/bpf/bpf_struct_ops.c | 12 +- mm/Makefile | 2 +- mm/bpf_oom.c | 192 ++++++++++++++++++++++++++++++++ mm/oom_kill.c | 19 ++++ 9 files changed, 282 insertions(+), 3 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 mm/bpf_oom.c diff --git a/MAINTAINERS b/MAINTAINERS index 491d567f7dc8..53465570c1e5 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4807,7 +4807,9 @@ M: Shakeel Butt <shakeel.butt@linux.dev> L: bpf@vger.kernel.org L: linux-mm@kvack.org S: Maintained +F: include/linux/bpf_oom.h F: mm/bpf_memcontrol.c +F: mm/bpf_oom.c BPF [MISC] L: bpf@vger.kernel.org diff --git a/include/linux/bpf-cgroup-defs.h b/include/linux/bpf-cgroup-defs.h index 6c5e37190dad..52395834ce13 100644 --- a/include/linux/bpf-cgroup-defs.h +++ b/include/linux/bpf-cgroup-defs.h @@ -74,6 +74,9 @@ struct cgroup_bpf { /* list of bpf struct ops links */ struct list_head struct_ops_links; + /* BPF OOM struct ops link */ + struct bpf_struct_ops_link __rcu *bpf_oom_link; + /* reference counter used to detach bpf programs after cgroup removal */ struct percpu_ref refcnt; diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 391888eb257c..a5cee5a657b0 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -3944,6 +3944,7 @@ static inline bool bpf_is_subprog(const struct bpf_prog *prog) int bpf_prog_get_file_line(struct bpf_prog *prog, unsigned long ip, const char **filep, const char **linep, int *nump); struct bpf_prog *bpf_prog_find_from_stack(void); +void *bpf_struct_ops_data(struct bpf_map *map); int bpf_insn_array_init(struct bpf_map *map, const struct bpf_prog *prog); int bpf_insn_array_ready(struct bpf_map *map); diff --git a/include/linux/bpf_oom.h b/include/linux/bpf_oom.h new file mode 100644 index 000000000000..c81133145c50 --- /dev/null +++ 
b/include/linux/bpf_oom.h @@ -0,0 +1,46 @@ +/* SPDX-License-Identifier: GPL-2.0+ */ + +#ifndef __BPF_OOM_H +#define __BPF_OOM_H + +struct oom_control; + +#define BPF_OOM_NAME_MAX_LEN 64 + +struct bpf_oom_ops { + /** + * @handle_out_of_memory: Out of memory bpf handler, called before + * the in-kernel OOM killer. + * @oc: OOM control structure + * @st_link: struct ops link + * + * Should return 1 if some memory was freed up, otherwise + * the in-kernel OOM killer is invoked. + */ + int (*handle_out_of_memory)(struct oom_control *oc, + struct bpf_struct_ops_link *st_link); + + /** + * @name: BPF OOM policy name + */ + char name[BPF_OOM_NAME_MAX_LEN]; +}; + +#ifdef CONFIG_BPF_SYSCALL +/** + * @bpf_handle_oom: handle out of memory condition using bpf + * @oc: OOM control structure + * + * Returns true if some memory was freed. + */ +bool bpf_handle_oom(struct oom_control *oc); + +#else /* CONFIG_BPF_SYSCALL */ +static inline bool bpf_handle_oom(struct oom_control *oc) +{ + return false; +} + +#endif /* CONFIG_BPF_SYSCALL */ + +#endif /* __BPF_OOM_H */ diff --git a/include/linux/oom.h b/include/linux/oom.h index 7b02bc1d0a7e..c2dce336bcb4 100644 --- a/include/linux/oom.h +++ b/include/linux/oom.h @@ -51,6 +51,14 @@ struct oom_control { /* Used to print the constraint info. */ enum oom_constraint constraint; + +#ifdef CONFIG_BPF_SYSCALL + /* Used by the bpf oom implementation to mark the forward progress */ + bool bpf_memory_freed; + + /* Handler name */ + const char *bpf_handler_name; +#endif }; extern struct mutex oom_lock; diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c index 2e361e22cfa0..6285a6d56b98 100644 --- a/kernel/bpf/bpf_struct_ops.c +++ b/kernel/bpf/bpf_struct_ops.c @@ -1009,7 +1009,7 @@ static void bpf_struct_ops_map_free(struct bpf_map *map) * in the tramopline image to finish before releasing * the trampoline image. 
*/ - synchronize_rcu_mult(call_rcu, call_rcu_tasks); + synchronize_rcu_mult(call_rcu, call_rcu_tasks, call_rcu_tasks_trace); __bpf_struct_ops_map_free(map); } @@ -1226,7 +1226,8 @@ static void bpf_struct_ops_map_link_dealloc(struct bpf_link *link) if (st_link->cgroup) cgroup_bpf_detach_struct_ops(st_link->cgroup, st_link); - kfree(st_link); + synchronize_rcu_tasks_trace(); + kfree_rcu(st_link, link.rcu); } static void bpf_struct_ops_map_link_show_fdinfo(const struct bpf_link *link, @@ -1535,3 +1536,10 @@ void bpf_map_struct_ops_info_fill(struct bpf_map_info *info, struct bpf_map *map info->btf_vmlinux_id = btf_obj_id(st_map->btf); } + +void *bpf_struct_ops_data(struct bpf_map *map) +{ + struct bpf_struct_ops_map *st_map = (struct bpf_struct_ops_map *)map; + + return &st_map->kvalue.data; +} diff --git a/mm/Makefile b/mm/Makefile index bf46fe31dc14..e939525ba01b 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -107,7 +107,7 @@ ifdef CONFIG_SWAP obj-$(CONFIG_MEMCG) += swap_cgroup.o endif ifdef CONFIG_BPF_SYSCALL -obj-$(CONFIG_MEMCG) += bpf_memcontrol.o +obj-$(CONFIG_MEMCG) += bpf_memcontrol.o bpf_oom.o endif obj-$(CONFIG_CGROUP_HUGETLB) += hugetlb_cgroup.o obj-$(CONFIG_GUP_TEST) += gup_test.o diff --git a/mm/bpf_oom.c b/mm/bpf_oom.c new file mode 100644 index 000000000000..ea70be6e2c26 --- /dev/null +++ b/mm/bpf_oom.c @@ -0,0 +1,192 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * BPF-driven OOM killer customization + * + * Author: Roman Gushchin <roman.gushchin@linux.dev> + */ + +#include <linux/bpf.h> +#include <linux/oom.h> +#include <linux/bpf_oom.h> +#include <linux/bpf-cgroup.h> +#include <linux/cgroup.h> +#include <linux/memcontrol.h> +#include <linux/uaccess.h> + +static int bpf_ops_handle_oom(struct bpf_oom_ops *bpf_oom_ops, + struct bpf_struct_ops_link *st_link, + struct oom_control *oc) +{ + int ret; + + oc->bpf_handler_name = &bpf_oom_ops->name[0]; + oc->bpf_memory_freed = false; + pagefault_disable(); + ret = bpf_oom_ops->handle_out_of_memory(oc, 
st_link); + pagefault_enable(); + oc->bpf_handler_name = NULL; + + return ret; +} + +bool bpf_handle_oom(struct oom_control *oc) +{ + struct bpf_struct_ops_link *st_link; + struct bpf_oom_ops *bpf_oom_ops; + struct mem_cgroup *memcg; + struct bpf_map *map; + int ret = 0; + + /* + * System-wide OOMs are handled by the struct ops attached + * to the root memory cgroup + */ + memcg = oc->memcg ? oc->memcg : root_mem_cgroup; + + rcu_read_lock_trace(); + + /* Find the nearest bpf_oom_ops traversing the cgroup tree upwards */ + for (; memcg; memcg = parent_mem_cgroup(memcg)) { + st_link = rcu_dereference_check(memcg->css.cgroup->bpf.bpf_oom_link, + rcu_read_lock_trace_held()); + if (!st_link) + continue; + + map = rcu_dereference_check((st_link->map), + rcu_read_lock_trace_held()); + if (!map) + continue; + + /* Call BPF OOM handler */ + bpf_oom_ops = bpf_struct_ops_data(map); + ret = bpf_ops_handle_oom(bpf_oom_ops, st_link, oc); + if (ret && oc->bpf_memory_freed) + break; + ret = 0; + } + + rcu_read_unlock_trace(); + + return ret && oc->bpf_memory_freed; +} + +static int __handle_out_of_memory(struct oom_control *oc, + struct bpf_struct_ops_link *st_link) +{ + return 0; +} + +static struct bpf_oom_ops __bpf_oom_ops = { + .handle_out_of_memory = __handle_out_of_memory, +}; + +static const struct bpf_func_proto * +bpf_oom_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) +{ + return tracing_prog_func_proto(func_id, prog); +} + +static bool bpf_oom_ops_is_valid_access(int off, int size, + enum bpf_access_type type, + const struct bpf_prog *prog, + struct bpf_insn_access_aux *info) +{ + return bpf_tracing_btf_ctx_access(off, size, type, prog, info); +} + +static const struct bpf_verifier_ops bpf_oom_verifier_ops = { + .get_func_proto = bpf_oom_func_proto, + .is_valid_access = bpf_oom_ops_is_valid_access, +}; + +static int bpf_oom_ops_reg(void *kdata, struct bpf_link *link) +{ + struct bpf_struct_ops_link *st_link = (struct bpf_struct_ops_link *)link; + 
struct cgroup *cgrp; + + /* The link is not yet fully initialized, but cgroup should be set */ + if (!link) + return -EOPNOTSUPP; + + cgrp = st_link->cgroup; + if (!cgrp) + return -EINVAL; + + if (cmpxchg(&cgrp->bpf.bpf_oom_link, NULL, st_link)) + return -EEXIST; + + return 0; +} + +static void bpf_oom_ops_unreg(void *kdata, struct bpf_link *link) +{ + struct bpf_struct_ops_link *st_link = (struct bpf_struct_ops_link *)link; + struct cgroup *cgrp; + + if (!link) + return; + + cgrp = st_link->cgroup; + if (!cgrp) + return; + + WARN_ON(cmpxchg(&cgrp->bpf.bpf_oom_link, st_link, NULL) != st_link); +} + +static int bpf_oom_ops_check_member(const struct btf_type *t, + const struct btf_member *member, + const struct bpf_prog *prog) +{ + u32 moff = __btf_member_bit_offset(t, member) / 8; + + switch (moff) { + case offsetof(struct bpf_oom_ops, handle_out_of_memory): + if (!prog) + return -EINVAL; + break; + } + + return 0; +} + +static int bpf_oom_ops_init_member(const struct btf_type *t, + const struct btf_member *member, + void *kdata, const void *udata) +{ + const struct bpf_oom_ops *uops = udata; + struct bpf_oom_ops *ops = kdata; + u32 moff = __btf_member_bit_offset(t, member) / 8; + + switch (moff) { + case offsetof(struct bpf_oom_ops, name): + if (uops->name[0]) + strscpy_pad(ops->name, uops->name, sizeof(ops->name)); + else + strscpy_pad(ops->name, "bpf_defined_policy"); + return 1; + } + return 0; +} + +static int bpf_oom_ops_init(struct btf *btf) +{ + return 0; +} + +static struct bpf_struct_ops bpf_oom_bpf_ops = { + .verifier_ops = &bpf_oom_verifier_ops, + .reg = bpf_oom_ops_reg, + .unreg = bpf_oom_ops_unreg, + .check_member = bpf_oom_ops_check_member, + .init_member = bpf_oom_ops_init_member, + .init = bpf_oom_ops_init, + .name = "bpf_oom_ops", + .owner = THIS_MODULE, + .cfi_stubs = &__bpf_oom_ops +}; + +static int __init bpf_oom_struct_ops_init(void) +{ + return register_bpf_struct_ops(&bpf_oom_bpf_ops, bpf_oom_ops); +} +late_initcall(bpf_oom_struct_ops_init); 
diff --git a/mm/oom_kill.c b/mm/oom_kill.c index 5eb11fbba704..44bbcf033804 100644 --- a/mm/oom_kill.c +++ b/mm/oom_kill.c @@ -45,6 +45,7 @@ #include <linux/mmu_notifier.h> #include <linux/cred.h> #include <linux/nmi.h> +#include <linux/bpf_oom.h> #include <asm/tlb.h> #include "internal.h" @@ -246,6 +247,15 @@ static const char * const oom_constraint_text[] = { [CONSTRAINT_MEMCG] = "CONSTRAINT_MEMCG", }; +static const char *oom_handler_name(struct oom_control *oc) +{ +#ifdef CONFIG_BPF_SYSCALL + if (oc->bpf_handler_name) + return oc->bpf_handler_name; +#endif + return NULL; +} + /* * Determine the type of allocation constraint. */ @@ -461,6 +471,8 @@ static void dump_header(struct oom_control *oc) pr_warn("%s invoked oom-killer: gfp_mask=%#x(%pGg), order=%d, oom_score_adj=%hd\n", current->comm, oc->gfp_mask, &oc->gfp_mask, oc->order, current->signal->oom_score_adj); + if (oom_handler_name(oc)) + pr_warn("oom bpf handler: %s\n", oom_handler_name(oc)); if (!IS_ENABLED(CONFIG_COMPACTION) && oc->order) pr_warn("COMPACTION is disabled!!!\n"); @@ -1168,6 +1180,13 @@ bool out_of_memory(struct oom_control *oc) return true; } + /* + * Let bpf handle the OOM first. If it was able to free up some memory, + * bail out. Otherwise fall back to the kernel OOM killer. + */ + if (bpf_handle_oom(oc)) + return true; + select_bad_process(oc); /* Found nothing?!?! */ if (!oc->chosen) { -- 2.52.0
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:10 -0800", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
Introduce bpf_oom_kill_process() bpf kfunc, which is supposed to be used by BPF OOM programs. It allows killing a process in exactly the same way the OOM killer does: using the OOM reaper, bumping the corresponding memcg and global statistics, respecting memory.oom.group etc.

On success, it sets the oom_control's bpf_memory_freed field to true, enabling the bpf program to bypass the kernel OOM killer.

Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev> --- mm/oom_kill.c | 80 +++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 80 insertions(+) diff --git a/mm/oom_kill.c b/mm/oom_kill.c index 44bbcf033804..09897597907f 100644 --- a/mm/oom_kill.c +++ b/mm/oom_kill.c @@ -46,6 +46,7 @@ #include <linux/cred.h> #include <linux/nmi.h> #include <linux/bpf_oom.h> +#include <linux/btf.h> #include <asm/tlb.h> #include "internal.h" @@ -1290,3 +1291,82 @@ SYSCALL_DEFINE2(process_mrelease, int, pidfd, unsigned int, flags) return -ENOSYS; #endif /* CONFIG_MMU */ } + +#ifdef CONFIG_BPF_SYSCALL + +__bpf_kfunc_start_defs(); +/** + * bpf_oom_kill_process - Kill a process as OOM killer + * @oc: pointer to oom_control structure, describes OOM context + * @task: task to be killed + * @message__str: message to print in dmesg + * + * Kill a process in a way similar to the kernel OOM killer. + * This means dump the necessary information to dmesg, adjust memcg + * statistics, leverage the oom reaper, respect memory.oom.group etc. + * + * bpf_oom_kill_process() marks the forward progress by setting + * oc->bpf_memory_freed. If the progress was made, the bpf program + * is free to decide if the kernel oom killer should be invoked. + * Otherwise it's enforced, so that a bad bpf program can't + * deadlock the machine on memory.
+ */ +__bpf_kfunc int bpf_oom_kill_process(struct oom_control *oc, + struct task_struct *task, + const char *message__str) +{ + if (oom_unkillable_task(task)) + return -EPERM; + + if (task->signal->oom_score_adj == OOM_SCORE_ADJ_MIN) + return -EINVAL; + + /* paired with put_task_struct() in oom_kill_process() */ + get_task_struct(task); + + oc->chosen = task; + + oom_kill_process(oc, message__str); + + oc->chosen = NULL; + oc->bpf_memory_freed = true; + + return 0; +} + +__bpf_kfunc_end_defs(); + +BTF_KFUNCS_START(bpf_oom_kfuncs) +BTF_ID_FLAGS(func, bpf_oom_kill_process, KF_SLEEPABLE) +BTF_KFUNCS_END(bpf_oom_kfuncs) + +BTF_ID_LIST_SINGLE(bpf_oom_ops_ids, struct, bpf_oom_ops) + +static int bpf_oom_kfunc_filter(const struct bpf_prog *prog, u32 kfunc_id) +{ + if (prog->type != BPF_PROG_TYPE_STRUCT_OPS || + prog->aux->attach_btf_id != bpf_oom_ops_ids[0]) + return -EACCES; + return 0; +} + +static const struct btf_kfunc_id_set bpf_oom_kfunc_set = { + .owner = THIS_MODULE, + .set = &bpf_oom_kfuncs, + .filter = bpf_oom_kfunc_filter, +}; + +static int __init bpf_oom_init(void) +{ + int err; + + err = register_btf_kfunc_id_set(BPF_PROG_TYPE_STRUCT_OPS, + &bpf_oom_kfunc_set); + if (err) + pr_warn("error while registering bpf oom kfuncs: %d", err); + + return err; +} +late_initcall(bpf_oom_init); + +#endif -- 2.52.0
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:11 -0800", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
Introduce the bpf_out_of_memory() bpf kfunc, which allows declaring an
out-of-memory event and triggering the corresponding kernel OOM
handling mechanism.

It takes a trusted memcg pointer (or NULL for system-wide OOMs) as an
argument, as well as the page order.

If the BPF_OOM_FLAGS_WAIT_ON_OOM_LOCK flag is not set, only one OOM can
be declared and handled in the system at once, so if the function is
called in parallel to another OOM handling, it bails out with -EBUSY.
This mode is suited for global OOMs: any concurrent OOM will likely do
the job and release some memory. In the blocking mode (which is suited
for memcg OOMs) the execution will wait on the oom_lock mutex.

The function is declared as sleepable. It guarantees that it won't be
called from an atomic context. It's required by the OOM handling code,
which shouldn't be called from a non-blocking context. Handling of a
memcg OOM almost always requires taking the css_set_lock spinlock. The
fact that bpf_out_of_memory() is sleepable also guarantees that it
can't be called with css_set_lock acquired, so the kernel can't
deadlock on it.

To avoid deadlocks on the oom lock, the function is filtered out for
bpf oom struct ops programs and all tracing programs.

Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
---
 include/linux/oom.h |  5 +++
 mm/oom_kill.c       | 85 +++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 88 insertions(+), 2 deletions(-)

diff --git a/include/linux/oom.h b/include/linux/oom.h
index c2dce336bcb4..851dba9287b5 100644
--- a/include/linux/oom.h
+++ b/include/linux/oom.h
@@ -21,6 +21,11 @@ enum oom_constraint {
 	CONSTRAINT_MEMCG,
 };
 
+enum bpf_oom_flags {
+	BPF_OOM_FLAGS_WAIT_ON_OOM_LOCK = 1 << 0,
+	BPF_OOM_FLAGS_LAST = 1 << 1,
+};
+
 /*
  * Details of the page allocation that triggered the oom killer that are used to
  * determine what should be killed.
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 09897597907f..8f63a370b8f5 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -1334,6 +1334,53 @@ __bpf_kfunc int bpf_oom_kill_process(struct oom_control *oc,
 	return 0;
 }
 
+/**
+ * bpf_out_of_memory - declare Out Of Memory state and invoke OOM killer
+ * @memcg__nullable: memcg or NULL for system-wide OOMs
+ * @order: order of page which wasn't allocated
+ * @flags: flags
+ *
+ * Declares the Out Of Memory state and invokes the OOM killer.
+ *
+ * OOM handlers are synchronized using the oom_lock mutex. If
+ * BPF_OOM_FLAGS_WAIT_ON_OOM_LOCK is set, the function will wait on it.
+ * Otherwise it bails out with -EBUSY if oom_lock is contended.
+ *
+ * Generally it's advised to not set BPF_OOM_FLAGS_WAIT_ON_OOM_LOCK for
+ * global OOMs and to set it for memcg-scoped OOMs.
+ *
+ * Returns 1 if the forward progress was achieved and some memory was freed.
+ * Returns a negative value if an error occurred.
+ */
+__bpf_kfunc int bpf_out_of_memory(struct mem_cgroup *memcg__nullable,
+				  int order, u64 flags)
+{
+	struct oom_control oc = {
+		.memcg = memcg__nullable,
+		.gfp_mask = GFP_KERNEL,
+		.order = order,
+	};
+	int ret;
+
+	if (flags & ~(BPF_OOM_FLAGS_LAST - 1))
+		return -EINVAL;
+
+	if (oc.order < 0 || oc.order > MAX_PAGE_ORDER)
+		return -EINVAL;
+
+	if (flags & BPF_OOM_FLAGS_WAIT_ON_OOM_LOCK) {
+		ret = mutex_lock_killable(&oom_lock);
+		if (ret)
+			return ret;
+	} else if (!mutex_trylock(&oom_lock))
+		return -EBUSY;
+
+	ret = out_of_memory(&oc);
+
+	mutex_unlock(&oom_lock);
+	return ret;
+}
+
 __bpf_kfunc_end_defs();
 
 BTF_KFUNCS_START(bpf_oom_kfuncs)
@@ -1356,14 +1403,48 @@ static const struct btf_kfunc_id_set bpf_oom_kfunc_set = {
 	.filter = bpf_oom_kfunc_filter,
 };
 
+BTF_KFUNCS_START(bpf_declare_oom_kfuncs)
+BTF_ID_FLAGS(func, bpf_out_of_memory, KF_SLEEPABLE)
+BTF_KFUNCS_END(bpf_declare_oom_kfuncs)
+
+static int bpf_declare_oom_kfunc_filter(const struct bpf_prog *prog, u32 kfunc_id)
+{
+	if (!btf_id_set8_contains(&bpf_declare_oom_kfuncs, kfunc_id))
+		return 0;
+
+	if (prog->type == BPF_PROG_TYPE_STRUCT_OPS &&
+	    prog->aux->attach_btf_id == bpf_oom_ops_ids[0])
+		return -EACCES;
+
+	if (prog->type == BPF_PROG_TYPE_TRACING)
+		return -EACCES;
+
+	return 0;
+}
+
+static const struct btf_kfunc_id_set bpf_declare_oom_kfunc_set = {
+	.owner = THIS_MODULE,
+	.set = &bpf_declare_oom_kfuncs,
+	.filter = bpf_declare_oom_kfunc_filter,
+};
+
 static int __init bpf_oom_init(void)
 {
 	int err;
 
 	err = register_btf_kfunc_id_set(BPF_PROG_TYPE_STRUCT_OPS,
 					&bpf_oom_kfunc_set);
-	if (err)
-		pr_warn("error while registering bpf oom kfuncs: %d", err);
+	if (err) {
+		pr_warn("error while registering struct_ops bpf oom kfuncs: %d", err);
+		return err;
+	}
+
+	err = register_btf_kfunc_id_set(BPF_PROG_TYPE_UNSPEC,
+					&bpf_declare_oom_kfunc_set);
+	if (err) {
+		pr_warn("error while registering unspec bpf oom kfuncs: %d", err);
+		return err;
+	}
 
 	return err;
 }
-- 
2.52.0
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:12 -0800", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling
using bpf.

It focuses on two parts:
  1) OOM handling policy,
  2) PSI-based OOM invocation.

The idea to use bpf for customizing the OOM handling is not new, but
unlike the previous proposal [1], which augmented the existing task
ranking policy, this one tries to be as generic as possible and
leverage the full power of modern bpf. It provides a generic interface
which is called before the existing OOM killer code and allows
implementing any policy, e.g. picking a victim task or memory cgroup,
or potentially even releasing memory in other ways, e.g. deleting
tmpfs files (the last one might require some additional but relatively
simple changes).

The past attempt to implement a memory-cgroup aware policy [2] showed
that there are multiple opinions on what the best policy is. As it's
highly workload-dependent and specific to a concrete way of organizing
workloads, the structure of the cgroup tree etc, a customizable
bpf-based implementation is preferable over an in-kernel
implementation with a dozen sysctls.

The second part is related to the fundamental question of when to
declare the OOM event. It's a trade-off between the risk of
unnecessary OOM kills and associated work losses and the risk of
infinite thrashing and effective soft lockups.

In the last few years several PSI-based userspace solutions were
developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to
use userspace daemons to implement custom OOM logic as well as rely on
PSI monitoring to avoid stalls. In this scenario the userspace daemon
was supposed to handle the majority of OOMs, while the in-kernel OOM
killer worked as the last resort measure to guarantee that the system
would never deadlock on memory.

But this approach creates additional infrastructure churn: a userspace
OOM daemon is a separate entity which needs to be deployed, updated
and monitored.
Export the tsk_is_oom_victim() helper as a BPF kfunc. It's very useful
for avoiding redundant oom kills.

Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
Suggested-by: Michal Hocko <mhocko@suse.com>
---
 mm/oom_kill.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 8f63a370b8f5..53f9f9674658 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -1381,10 +1381,24 @@ __bpf_kfunc int bpf_out_of_memory(struct mem_cgroup *memcg__nullable,
 	return ret;
 }
 
+/**
+ * bpf_task_is_oom_victim - Check if the task has been marked as an OOM victim
+ * @task: task to check
+ *
+ * Returns true if the task has been previously selected by the OOM killer
+ * to be killed. It's expected that the task will be destroyed soon and some
+ * memory will be freed, so no additional actions may be required.
+ */
+__bpf_kfunc bool bpf_task_is_oom_victim(struct task_struct *task)
+{
+	return tsk_is_oom_victim(task);
+}
+
 __bpf_kfunc_end_defs();
 
 BTF_KFUNCS_START(bpf_oom_kfuncs)
 BTF_ID_FLAGS(func, bpf_oom_kill_process, KF_SLEEPABLE)
+BTF_ID_FLAGS(func, bpf_task_is_oom_victim)
 BTF_KFUNCS_END(bpf_oom_kfuncs)
 
 BTF_ID_LIST_SINGLE(bpf_oom_ops_ids, struct, bpf_oom_ops)
-- 
2.52.0
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:13 -0800", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
Implement a read_cgroup_file() helper to read from cgroup control
files, e.g. statistics.

Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
---
 tools/testing/selftests/bpf/cgroup_helpers.c | 45 ++++++++++++++++++++
 tools/testing/selftests/bpf/cgroup_helpers.h |  3 ++
 2 files changed, 48 insertions(+)

diff --git a/tools/testing/selftests/bpf/cgroup_helpers.c b/tools/testing/selftests/bpf/cgroup_helpers.c
index 20cede4db3ce..fc5f22409ce5 100644
--- a/tools/testing/selftests/bpf/cgroup_helpers.c
+++ b/tools/testing/selftests/bpf/cgroup_helpers.c
@@ -126,6 +126,51 @@ int enable_controllers(const char *relative_path, const char *controllers)
 	return __enable_controllers(cgroup_path, controllers);
 }
 
+static ssize_t __read_cgroup_file(const char *cgroup_path, const char *file,
+				  char *buf, size_t size)
+{
+	char file_path[PATH_MAX + 1];
+	ssize_t ret;
+	int fd;
+
+	snprintf(file_path, sizeof(file_path), "%s/%s", cgroup_path, file);
+	fd = open(file_path, O_RDONLY);
+	if (fd < 0) {
+		log_err("Opening %s", file_path);
+		return -1;
+	}
+
+	ret = read(fd, buf, size);
+	if (ret < 0) {
+		close(fd);
+		log_err("Reading %s", file_path);
+		return -1;
+	}
+
+	close(fd);
+	return ret;
+}
+
+/**
+ * read_cgroup_file() - Read from a cgroup file
+ * @relative_path: The cgroup path, relative to the workdir
+ * @file: The name of the file in cgroupfs to read from
+ * @buf: Buffer to read the file contents into
+ * @size: Size of the buffer
+ *
+ * Read from a file in the given cgroup's directory.
+ *
+ * If successful, the number of read bytes is returned.
+ */
+ssize_t read_cgroup_file(const char *relative_path, const char *file,
+			 char *buf, size_t size)
+{
+	char cgroup_path[PATH_MAX - 24];
+
+	format_cgroup_path(cgroup_path, relative_path);
+	return __read_cgroup_file(cgroup_path, file, buf, size);
+}
+
 static int __write_cgroup_file(const char *cgroup_path, const char *file,
 			       const char *buf)
 {
diff --git a/tools/testing/selftests/bpf/cgroup_helpers.h b/tools/testing/selftests/bpf/cgroup_helpers.h
index 3857304be874..66a08b64838b 100644
--- a/tools/testing/selftests/bpf/cgroup_helpers.h
+++ b/tools/testing/selftests/bpf/cgroup_helpers.h
@@ -4,6 +4,7 @@
 
 #include <errno.h>
 #include <string.h>
+#include <sys/types.h>
 
 #define clean_errno() (errno == 0 ? "None" : strerror(errno))
 #define log_err(MSG, ...) fprintf(stderr, "(%s:%d: errno: %s) " MSG "\n", \
@@ -11,6 +12,8 @@
 
 /* cgroupv2 related */
 int enable_controllers(const char *relative_path, const char *controllers);
+ssize_t read_cgroup_file(const char *relative_path, const char *file,
+			 char *buf, size_t size);
 int write_cgroup_file(const char *relative_path, const char *file,
 		      const char *buf);
 int write_cgroup_file_parent(const char *relative_path, const char *file,
-- 
2.52.0
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:14 -0800", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
Implement a kselftest for the OOM handling functionality. The OOM handling policy which is implemented in BPF is to kill all tasks belonging to the biggest leaf cgroup, which doesn't contain unkillable tasks (tasks with oom_score_adj set to -1000). Pagecache size is excluded from the accounting. The test creates a hierarchy of memory cgroups, causes an OOM at the top level, checks that the expected process is killed and verifies the memcg's oom statistics. The same BPF OOM policy is attached to a memory cgroup and system-wide. In the first case the program does nothing and returns false, so it's executed the second time, when it properly handles the OOM. Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev> --- .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ 2 files changed, 367 insertions(+) create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c diff --git a/tools/testing/selftests/bpf/prog_tests/test_oom.c b/tools/testing/selftests/bpf/prog_tests/test_oom.c new file mode 100644 index 000000000000..a1eadbe1ae83 --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/test_oom.c @@ -0,0 +1,256 @@ +// SPDX-License-Identifier: GPL-2.0-only +#include <test_progs.h> +#include <bpf/btf.h> +#include <bpf/bpf.h> + +#include "cgroup_helpers.h" +#include "test_oom.skel.h" + +struct cgroup_desc { + const char *path; + int fd; + unsigned long long id; + int pid; + size_t target; + size_t max; + int oom_score_adj; + bool victim; +}; + +#define MB (1024 * 1024) +#define OOM_SCORE_ADJ_MIN (-1000) +#define OOM_SCORE_ADJ_MAX 1000 + +static struct cgroup_desc cgroups[] = { + { .path = "/oom_test", .max = 80 * MB}, + { .path = "/oom_test/cg1", .target = 10 * MB, + .oom_score_adj = OOM_SCORE_ADJ_MAX }, + { .path = "/oom_test/cg2", .target = 40 * MB, + .oom_score_adj = OOM_SCORE_ADJ_MIN }, + { .path = "/oom_test/cg3" }, + { 
.path = "/oom_test/cg3/cg4", .target = 30 * MB, + .victim = true }, + { .path = "/oom_test/cg3/cg5", .target = 20 * MB }, +}; + +static int spawn_task(struct cgroup_desc *desc) +{ + char *ptr; + int pid; + + pid = fork(); + if (pid < 0) + return pid; + + if (pid > 0) { + /* parent */ + desc->pid = pid; + return 0; + } + + /* child */ + if (desc->oom_score_adj) { + char buf[64]; + int fd = open("/proc/self/oom_score_adj", O_WRONLY); + + if (fd < 0) + return -1; + + snprintf(buf, sizeof(buf), "%d", desc->oom_score_adj); + write(fd, buf, strlen(buf)); + close(fd); + } + + ptr = (char *)malloc(desc->target); + if (!ptr) + return -ENOMEM; + + memset(ptr, 'a', desc->target); + + while (1) + sleep(1000); + + return 0; +} + +static void setup_environment(void) +{ + int i, err; + + err = setup_cgroup_environment(); + if (!ASSERT_OK(err, "setup_cgroup_environment")) + goto cleanup; + + for (i = 0; i < ARRAY_SIZE(cgroups); i++) { + cgroups[i].fd = create_and_get_cgroup(cgroups[i].path); + if (!ASSERT_GE(cgroups[i].fd, 0, "create_and_get_cgroup")) + goto cleanup; + + cgroups[i].id = get_cgroup_id(cgroups[i].path); + if (!ASSERT_GT(cgroups[i].id, 0, "get_cgroup_id")) + goto cleanup; + + /* Freeze the top-level cgroup */ + if (i == 0) { + err = write_cgroup_file(cgroups[i].path, "cgroup.freeze", "1"); + if (!ASSERT_OK(err, "freeze cgroup")) + goto cleanup; + } + + /* Recursively enable the memory controller */ + if (!cgroups[i].target) { + + err = write_cgroup_file(cgroups[i].path, "cgroup.subtree_control", + "+memory"); + if (!ASSERT_OK(err, "enable memory controller")) + goto cleanup; + } + + /* Set memory.max */ + if (cgroups[i].max) { + char buf[256]; + + snprintf(buf, sizeof(buf), "%lu", cgroups[i].max); + err = write_cgroup_file(cgroups[i].path, "memory.max", buf); + if (!ASSERT_OK(err, "set memory.max")) + goto cleanup; + + snprintf(buf, sizeof(buf), "0"); + write_cgroup_file(cgroups[i].path, "memory.swap.max", buf); + + } + + /* Spawn
tasks creating memory pressure */ + if (cgroups[i].target) { + char buf[256]; + + err = spawn_task(&cgroups[i]); + if (!ASSERT_OK(err, "spawn task")) + goto cleanup; + + snprintf(buf, sizeof(buf), "%d", cgroups[i].pid); + err = write_cgroup_file(cgroups[i].path, "cgroup.procs", buf); + if (!ASSERT_OK(err, "put child into a cgroup")) + goto cleanup; + } + } + + return; + +cleanup: + cleanup_cgroup_environment(); + + // TODO return an error? +} + +static int run_and_wait_for_oom(void) +{ + int ret = -1; + bool first = true; + char buf[4096] = {}; + size_t size; + + /* Unfreeze the top-level cgroup */ + ret = write_cgroup_file(cgroups[0].path, "cgroup.freeze", "0"); + if (!ASSERT_OK(ret, "freeze cgroup")) + return -1; + + for (;;) { + int i, status; + pid_t pid = wait(&status); + + if (pid == -1) { + if (errno == EINTR) + continue; + /* ECHILD */ + break; + } + + if (!first) + continue; + + first = false; + + /* Check which process was terminated first */ + for (i = 0; i < ARRAY_SIZE(cgroups); i++) { + if (!ASSERT_OK(cgroups[i].victim != + (pid == cgroups[i].pid), + "correct process was killed")) { + ret = -1; + break; + } + + if (!cgroups[i].victim) + continue; + + /* Check the memcg oom counter */ + size = read_cgroup_file(cgroups[i].path, + "memory.events", + buf, sizeof(buf)); + if (!ASSERT_OK(size <= 0, "read memory.events")) { + ret = -1; + break; + } + + if (!ASSERT_OK(strstr(buf, "oom_kill 1") == NULL, + "oom_kill count check")) { + ret = -1; + break; + } + } + + /* Kill all remaining tasks */ + for (i = 0; i < ARRAY_SIZE(cgroups); i++) + if (cgroups[i].pid && cgroups[i].pid != pid) + kill(cgroups[i].pid, SIGKILL); + } + + return ret; +} + +void test_oom(void) +{ + DECLARE_LIBBPF_OPTS(bpf_struct_ops_opts, opts); + struct bpf_link *link1 = NULL, *link2 = NULL; + struct test_oom *skel; + int err = 0; + + setup_environment(); + + skel = test_oom__open_and_load(); + if (!skel) { + err = -errno; + CHECK_FAIL(err); + goto cleanup; + } + + opts.flags = 
BPF_F_CGROUP_FD; + opts.target_fd = cgroups[0].fd; + link1 = bpf_map__attach_struct_ops_opts(skel->maps.test_bpf_oom, &opts); + if (!link1) { + err = -errno; + CHECK_FAIL(err); + goto cleanup; + } + + opts.target_fd = get_root_cgroup(); + link2 = bpf_map__attach_struct_ops_opts(skel->maps.test_bpf_oom, &opts); + if (!link2) { + err = -errno; + CHECK_FAIL(err); + goto cleanup; + } + + /* Unfreeze all child tasks and create the memory pressure */ + err = run_and_wait_for_oom(); + CHECK_FAIL(err); + +cleanup: + bpf_link__destroy(link1); + bpf_link__destroy(link2); + write_cgroup_file(cgroups[0].path, "cgroup.kill", "1"); + write_cgroup_file(cgroups[0].path, "cgroup.freeze", "0"); + cleanup_cgroup_environment(); + test_oom__destroy(skel); +} diff --git a/tools/testing/selftests/bpf/progs/test_oom.c b/tools/testing/selftests/bpf/progs/test_oom.c new file mode 100644 index 000000000000..7ff354e416bc --- /dev/null +++ b/tools/testing/selftests/bpf/progs/test_oom.c @@ -0,0 +1,111 @@ +// SPDX-License-Identifier: GPL-2.0-only +#include "vmlinux.h" +#include <bpf/bpf_helpers.h> +#include <bpf/bpf_tracing.h> + +char _license[] SEC("license") = "GPL"; + +#define OOM_SCORE_ADJ_MIN (-1000) + +static bool mem_cgroup_killable(struct mem_cgroup *memcg) +{ + struct task_struct *task; + bool ret = true; + + bpf_for_each(css_task, task, &memcg->css, CSS_TASK_ITER_PROCS) + if (task->signal->oom_score_adj == OOM_SCORE_ADJ_MIN) + return false; + + return ret; +} + +/* + * Find the largest leaf cgroup (ignoring page cache) without unkillable tasks + * and kill all belonging tasks. 
+ */ +SEC("struct_ops.s/handle_out_of_memory") +int BPF_PROG(test_out_of_memory, struct oom_control *oc, struct bpf_struct_ops_link *link) +{ + struct task_struct *task; + struct mem_cgroup *root_memcg = oc->memcg; + struct mem_cgroup *memcg, *victim = NULL; + struct cgroup_subsys_state *css_pos, *css; + unsigned long usage, max_usage = 0; + unsigned long pagecache = 0; + int ret = 0; + + if (root_memcg) + root_memcg = bpf_get_mem_cgroup(&root_memcg->css); + else + root_memcg = bpf_get_root_mem_cgroup(); + + if (!root_memcg) + return 0; + + css = &root_memcg->css; + if (css && css->cgroup == link->cgroup) + goto exit; + + bpf_rcu_read_lock(); + bpf_for_each(css, css_pos, &root_memcg->css, BPF_CGROUP_ITER_DESCENDANTS_POST) { + if (css_pos->cgroup->nr_descendants + css_pos->cgroup->nr_dying_descendants) + continue; + + memcg = bpf_get_mem_cgroup(css_pos); + if (!memcg) + continue; + + usage = bpf_mem_cgroup_usage(memcg); + pagecache = bpf_mem_cgroup_page_state(memcg, NR_FILE_PAGES); + + if (usage > pagecache) + usage -= pagecache; + else + usage = 0; + + if ((usage > max_usage) && mem_cgroup_killable(memcg)) { + max_usage = usage; + if (victim) + bpf_put_mem_cgroup(victim); + victim = bpf_get_mem_cgroup(&memcg->css); + } + + bpf_put_mem_cgroup(memcg); + } + bpf_rcu_read_unlock(); + + if (!victim) + goto exit; + + bpf_for_each(css_task, task, &victim->css, CSS_TASK_ITER_PROCS) { + struct task_struct *t = bpf_task_acquire(task); + + if (t) { + /* + * If the task is already an OOM victim, it will + * quit soon and release some memory. + */ + if (bpf_task_is_oom_victim(task)) { + bpf_task_release(t); + ret = 1; + break; + } + + bpf_oom_kill_process(oc, task, "bpf oom test"); + bpf_task_release(t); + ret = 1; + } + } + + bpf_put_mem_cgroup(victim); +exit: + bpf_put_mem_cgroup(root_memcg); + + return ret; +} + +SEC(".struct_ops.link") +struct bpf_oom_ops test_bpf_oom = { + .name = "bpf_test_policy", + .handle_out_of_memory = (void *)test_out_of_memory, +}; -- 2.52.0
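For readers following along, the victim-selection heuristic the test encodes (pick the leaf cgroup with the largest usage after subtracting page cache, skipping any cgroup containing an unkillable task) can be restated as a small standalone model in plain C. This is only an illustrative sketch of the policy's arithmetic; the struct and function names below are invented, and this is not the kernel-side BPF code.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative model of one leaf memory cgroup as the test policy sees it. */
struct leaf_cg {
	unsigned long usage;     /* total charged memory, bytes */
	unsigned long pagecache; /* NR_FILE_PAGES share, bytes */
	bool has_unkillable;     /* any task with oom_score_adj == -1000 */
};

/* Return the index of the victim cgroup, or -1 if every candidate is
 * protected. Page cache is subtracted before comparing usages, mirroring
 * the BPF program in test_oom.c. */
static int pick_victim(const struct leaf_cg *cgs, int n)
{
	unsigned long max_usage = 0;
	int victim = -1;

	for (int i = 0; i < n; i++) {
		unsigned long usage = cgs[i].usage > cgs[i].pagecache ?
				      cgs[i].usage - cgs[i].pagecache : 0;

		if (cgs[i].has_unkillable)
			continue;
		if (usage > max_usage) {
			max_usage = usage;
			victim = i;
		}
	}
	return victim;
}
```

With the cgroup sizes used by the test (10, 40, 30 and 20 MB, where the 40 MB cgroup holds an unkillable task), this model selects the 30 MB cgroup, matching the `.victim = true` marker on cg4.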
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:15 -0800", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds the ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc., a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen sysctls. The second part is related to the fundamental question of when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite thrashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on memory. But this approach creates additional infrastructure churn: a userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored.
A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms of what data is available to it. Implementing a daemon which can work reliably under heavy memory pressure is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg.
(by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many small-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() takes u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path for future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added the ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work()
sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
Add a trace point to psi_avgs_work(). It can be used to attach a bpf handler which can monitor PSI values system-wide or for specific cgroup(s) and potentially perform some actions, e.g. declare an OOM. Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev> --- include/trace/events/psi.h | 27 +++++++++++++++++++++++++++ kernel/sched/psi.c | 6 ++++++ 2 files changed, 33 insertions(+) create mode 100644 include/trace/events/psi.h diff --git a/include/trace/events/psi.h b/include/trace/events/psi.h new file mode 100644 index 000000000000..57c46de18616 --- /dev/null +++ b/include/trace/events/psi.h @@ -0,0 +1,27 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#undef TRACE_SYSTEM +#define TRACE_SYSTEM psi + +#if !defined(_TRACE_PSI_H) || defined(TRACE_HEADER_MULTI_READ) +#define _TRACE_PSI_H + +#include <linux/tracepoint.h> + +TRACE_EVENT(psi_avgs_work, + TP_PROTO(struct psi_group *group), + TP_ARGS(group), + TP_STRUCT__entry( + __field(struct psi_group *, group) + ), + + TP_fast_assign( + __entry->group = group; + ), + + TP_printk("group=%p", __entry->group) +); + +#endif /* _TRACE_PSI_H */ + +/* This part must be outside protection */ +#include <trace/define_trace.h> diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c index 59fdb7ebbf22..72757ba2ed96 100644 --- a/kernel/sched/psi.c +++ b/kernel/sched/psi.c @@ -141,6 +141,10 @@ #include <linux/psi.h> #include "sched.h" +#define CREATE_TRACE_POINTS +#include <trace/events/psi.h> +#undef CREATE_TRACE_POINTS + static int psi_bug __read_mostly; DEFINE_STATIC_KEY_FALSE(psi_disabled); @@ -607,6 +611,8 @@ static void psi_avgs_work(struct work_struct *work) group->avg_next_update - now) + 1); } + trace_psi_avgs_work(group); + mutex_unlock(&group->avgs_lock); } -- 2.52.0
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:16 -0800", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
To allow a more efficient filtering of cgroups in the psi work tracepoint handler, let's add a u64 cgroup_id field to the psi_group structure. For system PSI, 0 will be used. Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev> --- include/linux/psi_types.h | 4 ++++ kernel/sched/psi.c | 1 + 2 files changed, 5 insertions(+) diff --git a/include/linux/psi_types.h b/include/linux/psi_types.h index dd10c22299ab..749a08d48abd 100644 --- a/include/linux/psi_types.h +++ b/include/linux/psi_types.h @@ -159,6 +159,10 @@ struct psi_trigger { struct psi_group { struct psi_group *parent; + + /* Cgroup id for cgroup PSI, 0 for system PSI */ + u64 cgroup_id; + bool enabled; /* Protects data used by the aggregator */ diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c index 72757ba2ed96..cf1ec4dc242b 100644 --- a/kernel/sched/psi.c +++ b/kernel/sched/psi.c @@ -1124,6 +1124,7 @@ int psi_cgroup_alloc(struct cgroup *cgroup) if (!cgroup->psi) return -ENOMEM; + cgroup->psi->cgroup_id = cgroup_id(cgroup); cgroup->psi->pcpu = alloc_percpu(struct psi_group_cpu); if (!cgroup->psi->pcpu) { kfree(cgroup->psi); -- 2.52.0
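As a sketch of the intended use of the new field, a tracepoint handler that cares about a single cgroup can compare the group's cgroup_id against a target id, with 0 standing for the system-wide PSI group. The tiny userspace model below only illustrates that filtering idea; the struct and function names are made up, and it is not the kernel code.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Minimal model of struct psi_group after the patch: only the new field. */
struct psi_group_model {
	uint64_t cgroup_id; /* cgroup id for cgroup PSI, 0 for system PSI */
};

/* A handler watching one cgroup acts only when the ids match; a handler
 * with target 0 watches the system-wide PSI group. */
static bool handler_interested(const struct psi_group_model *group,
			       uint64_t target_cgroup_id)
{
	return group->cgroup_id == target_cgroup_id;
}
```

A BPF handler attached to the psi_avgs_work tracepoint could perform the same comparison on group->cgroup_id without having to resolve the cgroup pointer, which is the efficiency gain the commit message refers to.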
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:17 -0800", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
Allow calling bpf_out_of_memory() from a PSI tracepoint to enable PSI-based OOM killer policies. Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev> --- mm/oom_kill.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/mm/oom_kill.c b/mm/oom_kill.c index 53f9f9674658..276cf8a34449 100644 --- a/mm/oom_kill.c +++ b/mm/oom_kill.c @@ -1421,6 +1421,13 @@ BTF_KFUNCS_START(bpf_declare_oom_kfuncs) BTF_ID_FLAGS(func, bpf_out_of_memory, KF_SLEEPABLE) BTF_KFUNCS_END(bpf_declare_oom_kfuncs) +BTF_ID_LIST(bpf_oom_trace_ids) +#ifdef CONFIG_PSI +BTF_ID(typedef, btf_trace_psi_avgs_work) +#else +BTF_ID_UNUSED +#endif + static int bpf_declare_oom_kfunc_filter(const struct bpf_prog *prog, u32 kfunc_id) { if (!btf_id_set8_contains(&bpf_declare_oom_kfuncs, kfunc_id)) @@ -1430,7 +1437,8 @@ static int bpf_declare_oom_kfunc_filter(const struct bpf_prog *prog, u32 kfunc_i prog->aux->attach_btf_id == bpf_oom_ops_ids[0]) return -EACCES; - if (prog->type == BPF_PROG_TYPE_TRACING) + if (prog->type == BPF_PROG_TYPE_TRACING && + prog->aux->attach_btf_id != bpf_oom_trace_ids[0]) return -EACCES; return 0; -- 2.52.0
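The gate implemented by bpf_declare_oom_kfunc_filter() in this patch can be restated as a small standalone model: the OOM-declaring kfuncs are rejected inside bpf_oom_ops handlers (no recursive OOM declaration), and rejected in tracing programs unless the attach point is the psi_avgs_work tracepoint. The code below is an illustrative userspace restatement with invented names, not the kernel implementation.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

enum prog_type_model { PROG_STRUCT_OPS, PROG_TRACING, PROG_OTHER };

/* Model of the filter: 0 means the call is allowed, -EACCES means the
 * verifier rejects the program. */
static int oom_kfunc_filter_model(bool is_oom_declaring_kfunc,
				  enum prog_type_model type,
				  bool attached_to_bpf_oom_ops,
				  bool attached_to_psi_tracepoint)
{
	if (!is_oom_declaring_kfunc)
		return 0; /* unrelated kfunc, no extra restriction */

	/* An OOM handler must not declare a new OOM itself. */
	if (type == PROG_STRUCT_OPS && attached_to_bpf_oom_ops)
		return -EACCES;

	/* Tracing programs may declare an OOM only from the PSI tracepoint. */
	if (type == PROG_TRACING && !attached_to_psi_tracepoint)
		return -EACCES;

	return 0;
}
```

The second clause is exactly what this patch relaxes: before it, every tracing program was refused; now the BTF id of btf_trace_psi_avgs_work is allow-listed as an attach point.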
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:18 -0800", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling
using bpf. It focuses on two parts:
1) OOM handling policy,
2) PSI-based OOM invocation.

The idea to use bpf for customizing the OOM handling is not new, but
unlike the previous proposal [1], which augmented the existing task
ranking policy, this one tries to be as generic as possible and
leverage the full power of modern bpf. It provides a generic interface
which is called before the existing OOM killer code and allows
implementing any policy, e.g. picking a victim task or memory cgroup,
or potentially even releasing memory in other ways, e.g. deleting
tmpfs files (the last one might require some additional but relatively
simple changes).

The past attempt to implement a memory-cgroup aware policy [2] showed
that there are multiple opinions on what the best policy is. As it's
highly workload-dependent and specific to a concrete way of organizing
workloads, the structure of the cgroup tree, etc., a customizable
bpf-based implementation is preferable over an in-kernel
implementation with a dozen sysctls.

The second part is related to the fundamental question of when to
declare an OOM event. It's a trade-off between the risk of unnecessary
OOM kills and associated work losses, and the risk of infinite
thrashing and effective soft lockups. In the last few years several
PSI-based userspace solutions were developed (e.g. OOMd [3] or
systemd-OOMd [4]). The common idea was to use userspace daemons to
implement custom OOM logic, as well as rely on PSI monitoring to avoid
stalls. In this scenario the userspace daemon was supposed to handle
the majority of OOMs, while the in-kernel OOM killer worked as the
last-resort measure to guarantee that the system would never deadlock
on memory. But this approach creates additional infrastructure churn:
a userspace OOM daemon is a separate entity which needs to be
deployed, updated and monitored.
A completely different pipeline needs to be built to monitor both
types of OOM events and collect associated logs. A userspace daemon is
more restricted in terms of what data is available to it. Implementing
a daemon which can work reliably under heavy memory pressure in the
system is also tricky.

This patchset includes the code, tests and many ideas from the
patchset of JP Kobryn, which implemented bpf kfuncs to provide a
faster method to access memcg data [5].

[1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/
[2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/
[3]: https://github.com/facebookincubator/oomd
[4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html
[5]: https://lkml.org/lkml/2025/10/15/1554

---

v3:
1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work()
   (Tejun H.)
2) Updated bpf_oom struct ops:
   - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.)
   - removed handle_cgroup_offline callback.
3) Updated kfuncs:
   - bpf_out_of_memory() dropped the constraint_text argument (by Michal H.)
   - bpf_oom_kill_process() added a check for OOM_SCORE_ADJ_MIN.
4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd.
   (by Alexei S.)

v2:
1) A single bpf_oom can be attached system-wide and a single bpf_oom
   per memcg.
   (by Alexei Starovoitov)
2) Initial support for attaching struct ops to cgroups (Martin KaFai
   Lau, Andrii Nakryiko and others)
3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP
   Kobryn)
4) Many small-ish fixes and cleanups (suggested by Andrew Morton,
   Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi)
5) bpf_out_of_memory() is taking u64 flags instead of bool
   wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi)
6) bpf_get_mem_cgroup() got the KF_RCU flag (suggested by Kumar
   Kartikeya Dwivedi)
7) cgroup online and offline callbacks for bpf_psi, cgroup offline
   for bpf_oom

v1:
1) Both OOM and PSI parts are now implemented using bpf struct ops,
   providing a path to future extensions (suggested by Kumar Kartikeya
   Dwivedi, Song Liu and Matt Bobrowski)
2) It's possible to create PSI triggers from BPF, no need for an
   additional userspace agent. (suggested by Suren Baghdasaryan)
   Also there is now a callback for the cgroup release event.
3) Added an ability to block on oom_lock instead of bailing out
   (suggested by Michal Hocko)
4) Added bpf_task_is_oom_victim (suggested by Michal Hocko)
5) PSI callbacks are scheduled using a separate workqueue (suggested
   by Suren Baghdasaryan)

RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/

JP Kobryn (1):
  bpf: selftests: add config for psi

Roman Gushchin (16):
  bpf: move bpf_struct_ops_link into bpf.h
  bpf: allow attaching struct_ops to cgroups
  libbpf: fix return value on memory allocation failure
  libbpf: introduce bpf_map__attach_struct_ops_opts()
  bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL
  mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG
  mm: introduce BPF OOM struct ops
  mm: introduce bpf_oom_kill_process() bpf kfunc
  mm: introduce bpf_out_of_memory() BPF kfunc
  mm: introduce bpf_task_is_oom_victim() kfunc
  bpf: selftests: introduce read_cgroup_file() helper
  bpf: selftests: BPF OOM struct ops test
  sched: psi: add a trace point to psi_avgs_work()
  sched: psi: add cgroup_id field to psi_group structure
  bpf: allow calling bpf_out_of_memory() from a PSI tracepoint
  bpf: selftests: PSI struct ops test

 MAINTAINERS                                  |   2 +
 include/linux/bpf-cgroup-defs.h              |   6 +
 include/linux/bpf-cgroup.h                   |  16 ++
 include/linux/bpf.h                          |  10 +
 include/linux/bpf_oom.h                      |  46 ++++
 include/linux/memcontrol.h                   |   4 +-
 include/linux/oom.h                          |  13 +
 include/linux/psi_types.h                    |   4 +
 include/trace/events/psi.h                   |  27 ++
 include/uapi/linux/bpf.h                     |   3 +
 kernel/bpf/bpf_struct_ops.c                  |  77 +++++-
 kernel/bpf/cgroup.c                          |  46 ++++
 kernel/bpf/verifier.c                        |   5 +
 kernel/sched/psi.c                           |   7 +
 mm/Makefile                                  |   2 +-
 mm/bpf_oom.c                                 | 192 +++++++++++++
 mm/memcontrol.c                              |   2 -
 mm/oom_kill.c                                | 202 ++++++++++++++
 tools/include/uapi/linux/bpf.h               |   1 +
 tools/lib/bpf/libbpf.c                       |  22 +-
 tools/lib/bpf/libbpf.h                       |  14 +
 tools/lib/bpf/libbpf.map                     |   1 +
 tools/testing/selftests/bpf/cgroup_helpers.c |  45 +++
 tools/testing/selftests/bpf/cgroup_helpers.h |   3 +
 tools/testing/selftests/bpf/config           |   1 +
 .../selftests/bpf/prog_tests/test_oom.c      | 256 ++++++++++++++++++
 .../selftests/bpf/prog_tests/test_psi.c      | 225 +++++++++++++++
 tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++
 tools/testing/selftests/bpf/progs/test_psi.c |  90 ++++++
 29 files changed, 1412 insertions(+), 21 deletions(-)
 create mode 100644 include/linux/bpf_oom.h
 create mode 100644 include/trace/events/psi.h
 create mode 100644 mm/bpf_oom.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c

-- 
2.52.0
[ ... ]

^^^^^^^^^^^^^^^^^^^^^^^^

This isn't a bug, but should this say "Buffer to store data read from
the file" or "Buffer to read into"? The current wording suggests data
is read from the buffer, but the code reads from the file into the
buffer.

Compare with write_cgroup_file() which says "Buffer to write to the
file" - that phrasing works for writes but the inverse doesn't quite
work for reads.

[ ... ]

---

AI reviewed your patch. Please fix the bug or email reply why it's not
a bug. See:
https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary:
https://github.com/kernel-patches/bpf/actions/runs/21382620878

AI-authorship-score: low
AI-authorship-explanation: The code closely follows existing patterns
in the file with a minor documentation error typical of copy-paste
adaptation, suggesting human authorship.
issues-found: 1
issue-severity-score: low
issue-severity-explanation: Minor documentation error in test code
where @buf description says the opposite of what the code does.
{ "author": "bot+bpf-ci@kernel.org", "date": "Tue, 27 Jan 2026 03:08:46 +0000 (UTC)", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
[ ... ]

Can this code access st_link after it has been freed? Once
st_link->cgroup is set to NULL, if another CPU concurrently closes the
link FD, bpf_struct_ops_map_link_dealloc() will see st_link->cgroup as
NULL, skip the cgroup_bpf_detach_struct_ops() call, and proceed to
kfree(st_link). Then the bpf_link_inc_not_zero() call here would
access freed memory.

The race window is between setting st_link->cgroup = NULL and calling
bpf_link_inc_not_zero(). During this window:

  CPU 0 (cgroup_bpf_release)   CPU 1 (close link FD)
  ---------------------------  ---------------------
  st_link->cgroup = NULL
                               bpf_link_put_direct()
                                 bpf_link_free()
                                   bpf_struct_ops_map_link_dealloc()
                                     if (st_link->cgroup)  // false
                                     kfree(st_link)        // freed!
  bpf_link_inc_not_zero()  // use-after-free

Would it be safer to call bpf_link_inc_not_zero() before modifying
st_link fields? That way, if the link is already at refcount zero, the
loop would skip it; otherwise, the held reference would prevent the
concurrent free.

---

AI reviewed your patch. Please fix the bug or email reply why it's not
a bug. See:
https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary:
https://github.com/kernel-patches/bpf/actions/runs/21382620878

AI-authorship-score: low
AI-authorship-explanation: The commit is authored by a known kernel
maintainer with a consistent, human-like technical writing style and
appropriate kernel development patterns.
issues-found: 1
issue-severity-score: medium
issue-severity-explanation: Use-after-free race condition in
cgroup_bpf_release() that can cause kernel crash when link FD is
closed concurrently with cgroup deletion.
{ "author": "bot+bpf-ci@kernel.org", "date": "Tue, 27 Jan 2026 03:08:48 +0000 (UTC)", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
[ ... ]

The code reads flags and target_fd from opts, but the
expected_revision field declared in struct bpf_struct_ops_opts is
never read. Looking at similar opts structures (bpf_cgroup_opts,
bpf_tcx_opts, bpf_netkit_opts), they all pass expected_revision to
bpf_link_create_opts. For example:

  link_create_opts.cgroup.expected_revision =
          OPTS_GET(opts, expected_revision, 0);

However, bpf_link_create_opts has no struct_ops sub-struct to receive
expected_revision, and the kernel's bpf_attr link_create union has no
struct_ops.expected_revision field either.

Is expected_revision in bpf_struct_ops_opts intentionally unused? If
so, should the field be removed from the struct to avoid user
confusion? If users set expected_revision expecting it to be honored,
the value is silently ignored.

[ ... ]

^^^^^^^^^^^^^^^^^

The expected_revision field is declared here, matching the pattern
used in bpf_cgroup_opts, bpf_tcx_opts, and bpf_netkit_opts. However,
unlike those structures where expected_revision is read and passed
through, this field has no corresponding code path in
bpf_map__attach_struct_ops_opts().

[ ... ]

---

AI reviewed your patch. Please fix the bug or email reply why it's not
a bug. See:
https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary:
https://github.com/kernel-patches/bpf/actions/runs/21382620878

AI-authorship-score: low
AI-authorship-explanation: The commit follows kernel conventions and
existing code patterns, with an oversight (unused expected_revision
field) more typical of human development than AI-generated code.
issues-found: 1
issue-severity-score: low
issue-severity-explanation: The expected_revision field in
bpf_struct_ops_opts is declared but never used, causing silent data
loss if users set this field expecting it to be honored.
{ "author": "bot+bpf-ci@kernel.org", "date": "Tue, 27 Jan 2026 03:08:49 +0000 (UTC)", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
On Tue, Jan 27, 2026 at 10:47 AM Roman Gushchin <roman.gushchin@linux.dev> wrote:

Since both fdinfo and link_info show the cgroup ID, why not use BPF_F_CGROUP_ID for better alignment?

We may need to support other structs in the future. Could we implement a more generic solution, such as:

    int type;       // cgroup, task, etc
    void *private;  // ptr to type-specific data

-- 
Regards
Yafang
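The generic shape being suggested above can be sketched in plain C. This is a hedged illustration only: the enum, struct and helper names here are hypothetical and not taken from the patchset.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical target kinds a struct_ops link could attach to. */
enum bpf_attach_target_type {
	BPF_ATTACH_TARGET_CGROUP,
	BPF_ATTACH_TARGET_TASK,
};

/* The generic shape suggested in the reply: a type tag plus a
 * pointer to type-specific data. */
struct bpf_attach_target {
	int type;      /* cgroup, task, etc */
	void *private; /* ptr to type-specific data */
};

/* Example type-specific payload for a cgroup target. */
struct cgroup_target_data {
	uint64_t cgroup_id;
};

/* Dispatch on the target type; non-cgroup targets yield 0. */
static uint64_t target_cgroup_id(const struct bpf_attach_target *t)
{
	if (t->type != BPF_ATTACH_TARGET_CGROUP)
		return 0;
	return ((const struct cgroup_target_data *)t->private)->cgroup_id;
}
```

With this shape, adding a new attachable object later only means adding an enum value and a payload struct, rather than a new flag per object type.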
{ "author": "Yafang Shao <laoar.shao@gmail.com>", "date": "Tue, 27 Jan 2026 13:49:18 +0800", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation.

The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup, or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes).

The past attempt to implement a memory-cgroup-aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree, etc., a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen sysctls.

The second part is related to the fundamental question of when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills with their associated work losses, and the risk of infinite thrashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-oomd [4]). The common idea was to use userspace daemons to implement custom OOM logic, as well as to rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last-resort measure to guarantee that the system would never deadlock on memory.

But this approach creates additional infrastructure churn: a userspace OOM daemon is a separate entity which needs to be deployed, updated and monitored.
A completely different pipeline needs to be built to monitor both types of OOM events and collect the associated logs. A userspace daemon is more restricted in terms of what data is available to it. Implementing a daemon which can work reliably under heavy memory pressure is also tricky.

This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5].

[1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/
[2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/
[3]: https://github.com/facebookincubator/oomd
[4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html
[5]: https://lkml.org/lkml/2025/10/15/1554

---
v3:
1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.)
2) Updated bpf_oom struct ops:
   - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.)
   - removed handle_cgroup_offline callback.
3) Updated kfuncs:
   - bpf_out_of_memory() dropped the constraint_text argument (by Michal H.)
   - bpf_oom_kill_process() added a check for OOM_SCORE_ADJ_MIN.
4) Libbpf: updated bpf_map__attach_struct_ops_opts() to use target_fd (by Alexei S.)

v2:
1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg
   (by Alexei Starovoitov)
2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others)
3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn)
4) Many small-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi)
5) bpf_out_of_memory() takes u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi)
6) bpf_get_mem_cgroup() got the KF_RCU flag (suggested by Kumar Kartikeya Dwivedi)
7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom

v1:
1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path for future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski)
2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent (suggested by Suren Baghdasaryan). Also there is now a callback for the cgroup release event.
3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko)
4) Added bpf_task_is_oom_victim (suggested by Michal Hocko)
5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan)

RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/

JP Kobryn (1):
  bpf: selftests: add config for psi

Roman Gushchin (16):
  bpf: move bpf_struct_ops_link into bpf.h
  bpf: allow attaching struct_ops to cgroups
  libbpf: fix return value on memory allocation failure
  libbpf: introduce bpf_map__attach_struct_ops_opts()
  bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL
  mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG
  mm: introduce BPF OOM struct ops
  mm: introduce bpf_oom_kill_process() bpf kfunc
  mm: introduce bpf_out_of_memory() BPF kfunc
  mm: introduce bpf_task_is_oom_victim() kfunc
  bpf: selftests: introduce read_cgroup_file() helper
  bpf: selftests: BPF OOM struct ops test
  sched: psi: add a trace point to psi_avgs_work()
  sched: psi: add cgroup_id field to psi_group structure
  bpf: allow calling bpf_out_of_memory() from a PSI tracepoint
  bpf: selftests: PSI struct ops test

 MAINTAINERS                                  |   2 +
 include/linux/bpf-cgroup-defs.h              |   6 +
 include/linux/bpf-cgroup.h                   |  16 ++
 include/linux/bpf.h                          |  10 +
 include/linux/bpf_oom.h                      |  46 ++++
 include/linux/memcontrol.h                   |   4 +-
 include/linux/oom.h                          |  13 +
 include/linux/psi_types.h                    |   4 +
 include/trace/events/psi.h                   |  27 ++
 include/uapi/linux/bpf.h                     |   3 +
 kernel/bpf/bpf_struct_ops.c                  |  77 +++++-
 kernel/bpf/cgroup.c                          |  46 ++++
 kernel/bpf/verifier.c                        |   5 +
 kernel/sched/psi.c                           |   7 +
 mm/Makefile                                  |   2 +-
 mm/bpf_oom.c                                 | 192 +++++++++++++
 mm/memcontrol.c                              |   2 -
 mm/oom_kill.c                                | 202 ++++++++++++++
 tools/include/uapi/linux/bpf.h               |   1 +
 tools/lib/bpf/libbpf.c                       |  22 +-
 tools/lib/bpf/libbpf.h                       |  14 +
 tools/lib/bpf/libbpf.map                     |   1 +
 tools/testing/selftests/bpf/cgroup_helpers.c |  45 +++
 tools/testing/selftests/bpf/cgroup_helpers.h |   3 +
 tools/testing/selftests/bpf/config           |   1 +
 .../selftests/bpf/prog_tests/test_oom.c      | 256 ++++++++++++++++++
 .../selftests/bpf/prog_tests/test_psi.c      | 225 +++++++++++++++
 tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++
 tools/testing/selftests/bpf/progs/test_psi.c |  90 ++++++
 29 files changed, 1412 insertions(+), 21 deletions(-)
 create mode 100644 include/linux/bpf_oom.h
 create mode 100644 include/trace/events/psi.h
 create mode 100644 mm/bpf_oom.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c

-- 
2.52.0
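The kind of victim-selection policy a bpf_oom handler could implement can be illustrated with a self-contained userspace C sketch. This is illustrative only: the struct and helper names are made up for the example and are not the patchset's kernel API. It picks the task with the largest resident set while skipping tasks marked unkillable via OOM_SCORE_ADJ_MIN, the same guard the v3 changelog notes was added to bpf_oom_kill_process().

```c
#include <stddef.h>

#define OOM_SCORE_ADJ_MIN (-1000) /* same value as the kernel's constant */

/* Illustrative stand-in for a task considered by an OOM policy. */
struct task_info {
	const char *comm;
	long rss_pages;    /* resident memory, in pages */
	int oom_score_adj; /* OOM_SCORE_ADJ_MIN means "never kill" */
};

/* Pick the task with the largest footprint, skipping unkillable
 * tasks -- the shape of decision a bpf_oom handler would make
 * before asking the kernel to kill the chosen victim. */
static const struct task_info *pick_victim(const struct task_info *tasks,
					   size_t n)
{
	const struct task_info *victim = NULL;
	size_t i;

	for (i = 0; i < n; i++) {
		if (tasks[i].oom_score_adj == OOM_SCORE_ADJ_MIN)
			continue; /* protected task, never a candidate */
		if (!victim || tasks[i].rss_pages > victim->rss_pages)
			victim = &tasks[i];
	}
	return victim;
}
```

A real handler would of course walk kernel task or memcg state via kfuncs rather than an array, but the policy core -- rank candidates, honor the unkillable marker, return one victim -- is the same.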
On Tue, Jan 27, 2026 at 10:46 AM Roman Gushchin <roman.gushchin@linux.dev> wrote: Feel free to add: Acked-by: Yafang Shao <laoar.shao@gmail.com> -- Regards Yafang
{ "author": "Yafang Shao <laoar.shao@gmail.com>", "date": "Tue, 27 Jan 2026 13:50:31 +0800", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
On Tue, Jan 27, 2026 at 10:53 AM Roman Gushchin <roman.gushchin@linux.dev> wrote: Acked-by: Yafang Shao <laoar.shao@gmail.com> -- Regards Yafang
{ "author": "Yafang Shao <laoar.shao@gmail.com>", "date": "Tue, 27 Jan 2026 13:52:41 +0800", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
On Tue, Jan 27, 2026 at 10:49 AM Roman Gushchin <roman.gushchin@linux.dev> wrote: Acked-by: Yafang Shao <laoar.shao@gmail.com> -- Regards Yafang
{ "author": "Yafang Shao <laoar.shao@gmail.com>", "date": "Tue, 27 Jan 2026 14:06:20 +0800", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
On Tue, Jan 27, 2026 at 10:49 AM Roman Gushchin <roman.gushchin@linux.dev> wrote: Given that mem_cgroup_ino() pairs with mem_cgroup_get_from_ino(), should we also define mem_cgroup_ino() outside CONFIG_SHRINKER_DEBUG? -- Regards Yafang
{ "author": "Yafang Shao <laoar.shao@gmail.com>", "date": "Tue, 27 Jan 2026 14:12:17 +0800", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. 
A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. 
   (by Alexei Starovoitov)
2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others)
3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn)
4) Many small-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi)
5) bpf_out_of_memory() takes u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi)
6) bpf_get_mem_cgroup() got the KF_RCU flag (suggested by Kumar Kartikeya Dwivedi)
7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom

v1:
1) Both the OOM and PSI parts are now implemented using bpf struct ops, providing a path to future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski)
2) It's possible to create PSI triggers from BPF, so there is no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event.
3) Added the ability to block on oom_lock instead of bailing out (suggested by Michal Hocko)
4) Added bpf_task_is_oom_victim (suggested by Michal Hocko)
5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan)

RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/

JP Kobryn (1):
  bpf: selftests: add config for psi

Roman Gushchin (16):
  bpf: move bpf_struct_ops_link into bpf.h
  bpf: allow attaching struct_ops to cgroups
  libbpf: fix return value on memory allocation failure
  libbpf: introduce bpf_map__attach_struct_ops_opts()
  bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL
  mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG
  mm: introduce BPF OOM struct ops
  mm: introduce bpf_oom_kill_process() bpf kfunc
  mm: introduce bpf_out_of_memory() BPF kfunc
  mm: introduce bpf_task_is_oom_victim() kfunc
  bpf: selftests: introduce read_cgroup_file() helper
  bpf: selftests: BPF OOM struct ops test
  sched: psi: add a trace point to psi_avgs_work()
  sched: psi: add cgroup_id field to psi_group structure
  bpf: allow calling bpf_out_of_memory() from a PSI tracepoint
  bpf: selftests: PSI struct ops test

 MAINTAINERS                                  |   2 +
 include/linux/bpf-cgroup-defs.h              |   6 +
 include/linux/bpf-cgroup.h                   |  16 ++
 include/linux/bpf.h                          |  10 +
 include/linux/bpf_oom.h                      |  46 ++++
 include/linux/memcontrol.h                   |   4 +-
 include/linux/oom.h                          |  13 +
 include/linux/psi_types.h                    |   4 +
 include/trace/events/psi.h                   |  27 ++
 include/uapi/linux/bpf.h                     |   3 +
 kernel/bpf/bpf_struct_ops.c                  |  77 +++++-
 kernel/bpf/cgroup.c                          |  46 ++++
 kernel/bpf/verifier.c                        |   5 +
 kernel/sched/psi.c                           |   7 +
 mm/Makefile                                  |   2 +-
 mm/bpf_oom.c                                 | 192 +++++++++++++
 mm/memcontrol.c                              |   2 -
 mm/oom_kill.c                                | 202 ++++++++++++++
 tools/include/uapi/linux/bpf.h               |   1 +
 tools/lib/bpf/libbpf.c                       |  22 +-
 tools/lib/bpf/libbpf.h                       |  14 +
 tools/lib/bpf/libbpf.map                     |   1 +
 tools/testing/selftests/bpf/cgroup_helpers.c |  45 +++
 tools/testing/selftests/bpf/cgroup_helpers.h |   3 +
 tools/testing/selftests/bpf/config           |   1 +
 .../selftests/bpf/prog_tests/test_oom.c      | 256 ++++++++++++++++++
 .../selftests/bpf/prog_tests/test_psi.c      | 225 +++++++++++++++
 tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++
 tools/testing/selftests/bpf/progs/test_psi.c |  90 ++++++
 29 files changed, 1412 insertions(+), 21 deletions(-)
 create mode 100644 include/linux/bpf_oom.h
 create mode 100644 include/trace/events/psi.h
 create mode 100644 mm/bpf_oom.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c

-- 
2.52.0
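[Illustration, not part of the series: based on the patch titles above ("mm: introduce BPF OOM struct ops", the bpf_oom_kill_process() kfunc, and the handle_out_of_memory/return-value semantics discussed in the thread), a minimal bpf_oom handler might look roughly like the sketch below. All callback, field and helper names here are guesses inferred from the cover letter, not the actual UAPI; see the selftest in patch 12/17 for the real interface.]

	/* Sketch only -- names are hypothetical. */
	SEC("struct_ops.s/handle_out_of_memory")
	int BPF_PROG(handle_oom, struct oom_control *oc)
	{
		struct task_struct *victim;

		/* pick_victim_by_custom_policy() is a placeholder for
		 * whatever policy the program implements.
		 */
		victim = pick_victim_by_custom_policy(oc);
		if (!victim)
			return 0;	/* defer to the in-kernel OOM killer */

		/* kfunc introduced by this series; exact signature may differ */
		bpf_oom_kill_process(oc, victim, "bpf policy kill");
		return 1;		/* report the OOM as handled */
	}

	SEC(".struct_ops.link")
	struct bpf_oom_ops custom_oom = {
		.handle_out_of_memory = (void *)handle_oom,
	};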
On Mon 26-01-26 18:44:03, Roman Gushchin wrote:

Are you planning to write any high-level documentation on how to use the existing infrastructure to implement proper/correct OOM handlers with these generic interfaces?
-- 
Michal Hocko
SUSE Labs
{ "author": "Michal Hocko <mhocko@suse.com>", "date": "Tue, 27 Jan 2026 10:02:38 +0100", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
On Mon 26-01-26 18:44:10, Roman Gushchin wrote:

I still find this dual reporting a bit confusing. I can see your intention in having pre-defined "releasers" of the memory to trust BPF handlers more, but they do have access to oc->bpf_memory_freed, so they can manipulate it. Therefore this additional level of protection is rather weak.

It is also not really clear to me how this works while there is an OOM victim on the way out (i.e. the tsk_is_oom_victim() -> abort case). This will result in no killing and therefore no bpf_memory_freed, right? The handler itself should consider its work done. How exactly is this handled?

Also, is there any way to handle the oom by increasing the memcg limit? I do not see a callback for that.
-- 
Michal Hocko
SUSE Labs
{ "author": "Michal Hocko <mhocko@suse.com>", "date": "Tue, 27 Jan 2026 10:38:42 +0100", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }
On 1/26/26 6:44 PM, Roman Gushchin wrote:

The filter callback is registered for BPF_PROG_TYPE_STRUCT_OPS. It is also checking whether a kfunc_id is allowed for other struct_ops progs, e.g. the bpf-tcp-cc struct_ops progs. The 'return -EACCES' should be the cause of the "calling kernel function XXX is not allowed" error reported by the CI. Take a look at btf_kfunc_is_allowed().

Take a look at bpf_qdisc_kfunc_filter(). I suspect it should be something like this, untested:

	if (btf_id_set8_contains(&bpf_oom_kfuncs, kfunc_id) &&
	    prog->aux->st_ops != &bpf_oom_bpf_ops)
		return -EACCES;

	return 0;
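[For context: the filter Martin refers to hangs off the kfunc id set registration. A sketch of that pattern follows, using the bpf_oom_* names from his snippet above; the surrounding plumbing is the generic kfunc-filter mechanism used by e.g. bpf_qdisc, and the exact set/ops names in this series may differ.]

	static int bpf_oom_kfunc_filter(const struct bpf_prog *prog,
					u32 kfunc_id)
	{
		/* Reject the OOM-specific kfuncs only for struct_ops
		 * programs that are not bpf_oom programs; everything
		 * else stays allowed.
		 */
		if (btf_id_set8_contains(&bpf_oom_kfuncs, kfunc_id) &&
		    prog->aux->st_ops != &bpf_oom_bpf_ops)
			return -EACCES;

		return 0;
	}

	static const struct btf_kfunc_id_set bpf_oom_kfunc_set = {
		.owner	= THIS_MODULE,
		.set	= &bpf_oom_kfuncs,
		.filter	= bpf_oom_kfunc_filter,
	};

	/* registered once at init time: */
	err = register_btf_kfunc_id_set(BPF_PROG_TYPE_STRUCT_OPS,
					&bpf_oom_kfunc_set);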
{ "author": "Martin KaFai Lau <martin.lau@linux.dev>", "date": "Tue, 27 Jan 2026 12:21:03 -0800", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }
Martin KaFai Lau <martin.lau@linux.dev> writes:

Oh, I see... It's a bit surprising that these .filter() functions have non-local effects... Will fix in v4.

Thank you, Martin!
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Tue, 27 Jan 2026 20:47:11 +0000", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. 
A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. 
(by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() 
  sched: psi: add cgroup_id field to psi_group structure
  bpf: allow calling bpf_out_of_memory() from a PSI tracepoint
  bpf: selftests: PSI struct ops test

 MAINTAINERS                                  |   2 +
 include/linux/bpf-cgroup-defs.h              |   6 +
 include/linux/bpf-cgroup.h                   |  16 ++
 include/linux/bpf.h                          |  10 +
 include/linux/bpf_oom.h                      |  46 ++++
 include/linux/memcontrol.h                   |   4 +-
 include/linux/oom.h                          |  13 +
 include/linux/psi_types.h                    |   4 +
 include/trace/events/psi.h                   |  27 ++
 include/uapi/linux/bpf.h                     |   3 +
 kernel/bpf/bpf_struct_ops.c                  |  77 +++++-
 kernel/bpf/cgroup.c                          |  46 ++++
 kernel/bpf/verifier.c                        |   5 +
 kernel/sched/psi.c                           |   7 +
 mm/Makefile                                  |   2 +-
 mm/bpf_oom.c                                 | 192 +++++++++++++
 mm/memcontrol.c                              |   2 -
 mm/oom_kill.c                                | 202 ++++++++++++++
 tools/include/uapi/linux/bpf.h               |   1 +
 tools/lib/bpf/libbpf.c                       |  22 +-
 tools/lib/bpf/libbpf.h                       |  14 +
 tools/lib/bpf/libbpf.map                     |   1 +
 tools/testing/selftests/bpf/cgroup_helpers.c |  45 +++
 tools/testing/selftests/bpf/cgroup_helpers.h |   3 +
 tools/testing/selftests/bpf/config           |   1 +
 .../selftests/bpf/prog_tests/test_oom.c      | 256 ++++++++++++++++++
 .../selftests/bpf/prog_tests/test_psi.c      | 225 +++++++++++++++
 tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++
 tools/testing/selftests/bpf/progs/test_psi.c |  90 ++++++

 29 files changed, 1412 insertions(+), 21 deletions(-)
 create mode 100644 include/linux/bpf_oom.h
 create mode 100644 include/trace/events/psi.h
 create mode 100644 mm/bpf_oom.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c

--
2.52.0
Michal Hocko <mhocko@suse.com> writes:

What do you expect from such a document? Can you, please, elaborate? I'm asking because the main promise of bpf is to provide some sort of a safe playground, so anyone can experiment with writing their own bpf implementations (like sched_ext schedulers or bpf oom policies) with minimum risk. Yes, it might work sub-optimally and kill too many tasks, but it won't crash or deadlock the system. So in a way I don't want to prescribe the "right way" of writing an oom handler, but it totally makes sense to provide an example. As of now the best way to get an example of a bpf handler is to look into the commit "[PATCH bpf-next v3 12/17] bpf: selftests: BPF OOM struct ops test".

Another viable idea (also suggested by Andrew Morton) is to develop a production-ready memcg-aware OOM killer in BPF, put the source code into the kernel tree and make it loadable by default (obviously under a config option). I or one of my colleagues will try to explore it a bit later: the tricky part is this by-default loading, because there are no existing precedents.

Thanks!
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Tue, 27 Jan 2026 21:01:48 +0000", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
Michal Hocko <mhocko@suse.com> writes:

No, they can't. They have only read-only access.

It's a good question, I see your point... Basically we want to give a handler an option to exit with "I promise, some memory will be freed soon" without doing anything destructive, while keeping it safe at the same time. I don't have a perfect answer off the top of my head; maybe some sort of a rate-limiter/counter might work? E.g. a handler can promise this N times before the kernel kicks in? Any ideas?

There is no such kfunc yet, but it's a good idea (which we happened to discuss a few days ago). I'll implement it.

Thank you!
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Tue, 27 Jan 2026 21:12:56 +0000", "thread_id": "CAADnVQL5g8imKNGbHGQ4HA8_qNT4MYwM8P3aCTFUG7uwiuTeuw@mail.gmail.com.mbox.gz" }