lkml
[PATCH v4 0/3] targeted TLB sync IPIs for lockless page table walkers
When freeing or unsharing page tables we send an IPI to synchronize with
concurrent lockless page table walkers (e.g. GUP-fast). Today we broadcast
that IPI to all CPUs, which is costly on large machines and hurts RT
workloads[1].

This series makes those IPIs targeted. We track which CPUs are currently
doing a lockless page table walk for a given mm (per-CPU
active_lockless_pt_walk_mm). When we need to sync, we only IPI those CPUs.
GUP-fast and perf_get_page_size() set/clear the tracker around their walk;
tlb_remove_table_sync_mm() uses it and replaces the previous broadcast in
the free/unshare paths.

On x86, when the TLB flush path already sends IPIs (native without INVLPGB,
or KVM), the extra sync IPI is redundant. We add a property on pv_mmu_ops
so each backend can declare whether its flush_tlb_multi sends real IPIs;
if so, tlb_remove_table_sync_mm() is a no-op. We also have tlb_flush()
pass both freed_tables and unshared_tables so lazy-TLB CPUs get IPIs
during hugetlb unshare.

David Hildenbrand did the initial implementation. I built on his work and
relied on off-list discussions to push it further - thanks a lot David!

[1] https://lore.kernel.org/linux-mm/1b27a3fa-359a-43d0-bdeb-c31341749367@kernel.org/

v3 -> v4:
- Rework based on David's two-step direction and per-CPU idea:
  1) Targeted IPIs: per-CPU variable when entering/leaving lockless page
     table walk; tlb_remove_table_sync_mm() IPIs only those CPUs.
  2) On x86, pv_mmu_ops property set at init to skip the extra sync when
     flush_tlb_multi() already sends IPIs.
  https://lore.kernel.org/linux-mm/bbfdf226-4660-4949-b17b-0d209ee4ef8c@kernel.org/
- https://lore.kernel.org/linux-mm/20260106120303.38124-1-lance.yang@linux.dev/

v2 -> v3:
- Complete rewrite: use dynamic IPI tracking instead of static checks
  (per Dave Hansen, thanks!)
- Track IPIs via mmu_gather: native_flush_tlb_multi() sets flag when
  actually sending IPIs
- Motivation for skipping redundant IPIs explained by David:
  https://lore.kernel.org/linux-mm/1b27a3fa-359a-43d0-bdeb-c31341749367@kernel.org/
- https://lore.kernel.org/linux-mm/20251229145245.85452-1-lance.yang@linux.dev/

v1 -> v2:
- Fix cover letter encoding to resolve send-email issues. Apologies for
  any email flood caused by the failed send attempts :(

RFC -> v1:
- Use a callback function in pv_mmu_ops instead of comparing function
  pointers (per David)
- Embed the check directly in tlb_remove_table_sync_one() instead of
  requiring every caller to check explicitly (per David)
- Move tlb_table_flush_implies_ipi_broadcast() outside of
  CONFIG_MMU_GATHER_RCU_TABLE_FREE to fix build error on architectures
  that don't enable this config.
  https://lore.kernel.org/oe-kbuild-all/202512142156.cShiu6PU-lkp@intel.com/
- https://lore.kernel.org/linux-mm/20251213080038.10917-1-lance.yang@linux.dev/

Lance Yang (3):
  mm: use targeted IPIs for TLB sync with lockless page table walkers
  mm: switch callers to tlb_remove_table_sync_mm()
  x86/tlb: add architecture-specific TLB IPI optimization support

 arch/x86/hyperv/mmu.c                 |  5 ++
 arch/x86/include/asm/paravirt.h       |  5 ++
 arch/x86/include/asm/paravirt_types.h |  6 +++
 arch/x86/include/asm/tlb.h            | 20 +++++++-
 arch/x86/kernel/kvm.c                 |  6 +++
 arch/x86/kernel/paravirt.c            | 18 +++++++
 arch/x86/kernel/smpboot.c             |  1 +
 arch/x86/xen/mmu_pv.c                 |  2 +
 include/asm-generic/tlb.h             | 28 +++++++++--
 include/linux/mm.h                    | 34 +++++++++++++
 kernel/events/core.c                  |  2 +
 mm/gup.c                              |  2 +
 mm/khugepaged.c                       |  2 +-
 mm/mmu_gather.c                       | 69 ++++++++++++++++++++++++---
 14 files changed, 187 insertions(+), 13 deletions(-)

--
2.49.0
On 2/2/26 04:14, Lance Yang wrote:

I thought the big databases were really sensitive to GUP-fast latency.
They like big systems, too. Won't they howl when this finally hits their
testing?

Also, two of the "write" side here are:

 * collapse_huge_page() (khugepaged)
 * tlb_remove_table() (in an "-ENOMEM" path)

Those are quite slow paths, right? Shouldn't the design here favor keeping
gup-fast as fast as possible as opposed to impacting those?
{ "author": "Dave Hansen <dave.hansen@intel.com>", "date": "Mon, 2 Feb 2026 08:20:13 -0800", "thread_id": "be38af98-e344-4552-a77b-b5345135e382@intel.com.mbox.gz" }
lkml
[PATCH 0/2] Correct MT8365's infracfg-nao DT node description as a pure syscon
Introduce a new compatible to the binding and use it in the infracfg-nao
node in the mt8365.dtsi to correctly describe the node and prevent probe
errors.

Signed-off-by: Nícolas F. R. A. Prado <nfraprado@collabora.com>
---
Nícolas F. R. A. Prado (2):
      dt-bindings: mfd: syscon: Add mediatek,mt8365-infracfg-nao
      arm64: dts: mediatek: mt8365: Describe infracfg-nao as a pure syscon

 Documentation/devicetree/bindings/mfd/syscon.yaml | 1 +
 arch/arm64/boot/dts/mediatek/mt8365.dtsi          | 5 ++---
 2 files changed, 3 insertions(+), 3 deletions(-)
---
base-commit: 37ff6e9a2ce321b7932d3987701757fb4d87b0e6
change-id: 20250502-mt8365-infracfg-nao-compatible-46d4db7f54f7

Best regards,
--
Nícolas F. R. A. Prado <nfraprado@collabora.com>
The register space described by DT node of compatible
mediatek,mt8365-infracfg-nao exposes a variety of unrelated registers,
including registers for controlling bus protection on the MT8365 SoC,
which is used by the power domain controller through a syscon.

Add this compatible to the syscon binding.

Signed-off-by: Nícolas F. R. A. Prado <nfraprado@collabora.com>
---
 Documentation/devicetree/bindings/mfd/syscon.yaml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/Documentation/devicetree/bindings/mfd/syscon.yaml b/Documentation/devicetree/bindings/mfd/syscon.yaml
index c6bbb19c3e3e2245b4a823df06e7f361da311000..f655ec18cc2d96028d17e19d704b62f6d898fea4 100644
--- a/Documentation/devicetree/bindings/mfd/syscon.yaml
+++ b/Documentation/devicetree/bindings/mfd/syscon.yaml
@@ -190,6 +190,7 @@ properties:
           - mediatek,mt8135-pctl-a-syscfg
           - mediatek,mt8135-pctl-b-syscfg
           - mediatek,mt8173-pctl-a-syscfg
+          - mediatek,mt8365-infracfg-nao
           - mediatek,mt8365-syscfg
           - microchip,lan966x-cpu-syscon
           - microchip,mpfs-sysreg-scb
--
2.49.0
{ "author": "Nícolas F. R. A. Prado <nfraprado@collabora.com>", "date": "Fri, 02 May 2025 12:43:21 -0400", "thread_id": "25bc9ae2-5c27-407a-aae4-6c619367664a@baylibre.com.mbox.gz" }
lkml
[PATCH 0/2] Correct MT8365's infracfg-nao DT node description as a pure syscon
The infracfg-nao register space at 0x1020e000 has different registers than
the infracfg space at 0x10001000, and most importantly, doesn't contain
any clock controls. Therefore it shouldn't use the same compatible used
for the mt8365 infracfg clocks driver: mediatek,mt8365-infracfg. Since it
currently does, probe errors are reported in the kernel logs:

[    0.245959] Failed to register clk ifr_pmic_tmr: -EEXIST
[    0.245998] clk-mt8365 1020e000.infracfg: probe with driver clk-mt8365 failed with error -17

This register space is used only as a syscon for bus control by the power
domain controller, so in order to properly describe it and fix the errors,
set its compatible to a distinct compatible used exclusively as a syscon,
drop the clock-cells, and while at it rename the node to 'syscon'
following the naming convention.

Fixes: 6ff945376556 ("arm64: dts: mediatek: Initial mt8365-evk support")
Signed-off-by: Nícolas F. R. A. Prado <nfraprado@collabora.com>
---
 arch/arm64/boot/dts/mediatek/mt8365.dtsi | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/boot/dts/mediatek/mt8365.dtsi b/arch/arm64/boot/dts/mediatek/mt8365.dtsi
index e6d2b3221a3b7a855129258b379ae4bc2fd05449..49ad4dee9c4cf563743dc55d5e0b055cfb69986a 100644
--- a/arch/arm64/boot/dts/mediatek/mt8365.dtsi
+++ b/arch/arm64/boot/dts/mediatek/mt8365.dtsi
@@ -495,10 +495,9 @@ iommu: iommu@10205000 {
 			#iommu-cells = <1>;
 		};

-		infracfg_nao: infracfg@1020e000 {
-			compatible = "mediatek,mt8365-infracfg", "syscon";
+		infracfg_nao: syscon@1020e000 {
+			compatible = "mediatek,mt8365-infracfg-nao", "syscon";
 			reg = <0 0x1020e000 0 0x1000>;
-			#clock-cells = <1>;
 		};

 		rng: rng@1020f000 {
--
2.49.0
{ "author": "Nícolas F. R. A. Prado <nfraprado@collabora.com>", "date": "Fri, 02 May 2025 12:43:22 -0400", "thread_id": "25bc9ae2-5c27-407a-aae4-6c619367664a@baylibre.com.mbox.gz" }
lkml
[PATCH 0/2] Correct MT8365's infracfg-nao DT node description as a pure syscon
On 02/05/25 18:43, Nícolas F. R. A. Prado wrote:

Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
{ "author": "AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>", "date": "Tue, 6 May 2025 10:26:48 +0200", "thread_id": "25bc9ae2-5c27-407a-aae4-6c619367664a@baylibre.com.mbox.gz" }
lkml
[PATCH 0/2] Correct MT8365's infracfg-nao DT node description as a pure syscon
On 02/05/25 18:43, Nícolas F. R. A. Prado wrote:

Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
{ "author": "AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>", "date": "Tue, 6 May 2025 10:26:49 +0200", "thread_id": "25bc9ae2-5c27-407a-aae4-6c619367664a@baylibre.com.mbox.gz" }
lkml
[PATCH 0/2] Correct MT8365's infracfg-nao DT node description as a pure syscon
On Fri, May 02, 2025 at 12:43:21PM -0400, Nícolas F. R. A. Prado wrote:

Acked-by: Conor Dooley <conor.dooley@microchip.com>
{ "author": "Conor Dooley <conor@kernel.org>", "date": "Tue, 6 May 2025 17:30:22 +0100", "thread_id": "25bc9ae2-5c27-407a-aae4-6c619367664a@baylibre.com.mbox.gz" }
lkml
[PATCH 0/2] Correct MT8365's infracfg-nao DT node description as a pure syscon
On Fri, 02 May 2025 12:43:21 -0400, Nícolas F. R. A. Prado wrote:

Applied, thanks!

[1/2] dt-bindings: mfd: syscon: Add mediatek,mt8365-infracfg-nao
      commit: cbb005b91726ea1024b6261bc1062bac19f6d059

--
Lee Jones [李琼斯]
{ "author": "Lee Jones <lee@kernel.org>", "date": "Tue, 13 May 2025 10:48:51 +0100", "thread_id": "25bc9ae2-5c27-407a-aae4-6c619367664a@baylibre.com.mbox.gz" }
lkml
[PATCH 0/2] Correct MT8365's infracfg-nao DT node description as a pure syscon
On 5/2/25 11:43 AM, Nícolas F. R. A. Prado wrote:

Reviewed-by: David Lechner <dlechner@baylibre.com>

It looks like this never got picked up. I noticed this was a problem in
U-Boot because it was registering this as a clock provider. And I sent a
similar patch [1] recently that has also not been acted on yet. I prefer
this patch since it also fixes the node name to use a standard name.

Who should be responsible for actually picking up the patch?

[1]: https://lore.kernel.org/linux-mediatek/20251216-mtk-fix-infracfg_nao-compatibile-v1-1-d339b151ac81@baylibre.com/
{ "author": "David Lechner <dlechner@baylibre.com>", "date": "Mon, 2 Feb 2026 11:20:30 -0600", "thread_id": "25bc9ae2-5c27-407a-aae4-6c619367664a@baylibre.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
Currently, CXL regions that create DAX devices have no mechanism to select
the hotplug online policy for kmem regions at region creation time. Users
must either rely on a build-time default or manually configure each memory
block after hotplug occurs. Additionally, there is no explicit way to
choose between device_dax and dax_kmem modes at region creation time -
regions default to kmem.

This series addresses both issues by:

1. Plumbing an online_type parameter through the memory hotplug path, from
   mm/memory_hotplug through the DAX layer, enabling drivers to specify
   the desired policy (offline, online, online_movable).

2. Adding infrastructure for explicit dax driver selection (kmem vs
   device) when creating CXL DAX regions.

3. Introducing new CXL region drivers that provide a two-stage binding
   process with user-configurable policy between region creation and
   memory hotplug.

The new drivers are:
- cxl_devdax_region: Creates dax_regions that bind to device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions

The sysram_region device exposes an 'online_type' sysfs attribute
allowing users to configure the memory online type before hotplug:

  echo region0 > cxl_sysram_region/bind
  echo online_movable > sysram_region0/online_type
  echo sysram_region0 > cxl_dax_kmem_region/bind

This enables explicit control over both the dax driver mode and the memory
hotplug policy for CXL memory regions.

In the future, with DCD regions, this will also provide a policy step
which dictates how extents will be surfaced and managed (e.g. if the dc
region is bound to the sysram driver, it will surface as system memory,
while the devdax driver will surface extents as new devdax).

Gregory Price (9):
  mm/memory_hotplug: pass online_type to online_memory_block() via arg
  mm/memory_hotplug: add __add_memory_driver_managed() with online_type arg
  dax: plumb online_type from dax_kmem creators to hotplug
  drivers/cxl,dax: add dax driver mode selection for dax regions
  cxl/core/region: move pmem region driver logic into pmem_region
  cxl/core/region: move dax region device logic into dax_region.c
  cxl/core: add cxl_devdax_region driver for explicit userland region binding
  cxl/core: Add dax_kmem_region and sysram_region drivers
  Documentation/driver-api/cxl: add dax and sysram driver documentation

 Documentation/ABI/testing/sysfs-bus-cxl            |  21 ++
 .../driver-api/cxl/linux/cxl-driver.rst            |  43 +++
 .../driver-api/cxl/linux/dax-driver.rst            |  29 ++
 drivers/cxl/core/Makefile                          |   3 +
 drivers/cxl/core/core.h                            |  11 +
 drivers/cxl/core/dax_region.c                      | 179 ++++++++++
 drivers/cxl/core/pmem_region.c                     | 191 +++++++++++
 drivers/cxl/core/port.c                            |   2 +
 drivers/cxl/core/region.c                          | 321 ++----------------
 drivers/cxl/core/sysram_region.c                   | 180 ++++++++++
 drivers/cxl/cxl.h                                  |  29 ++
 drivers/dax/bus.c                                  |   3 +
 drivers/dax/bus.h                                  |   7 +-
 drivers/dax/cxl.c                                  |   7 +-
 drivers/dax/dax-private.h                          |   2 +
 drivers/dax/hmem/hmem.c                            |   2 +
 drivers/dax/kmem.c                                 |  13 +-
 drivers/dax/pmem.c                                 |   2 +
 include/linux/dax.h                                |   5 +
 include/linux/memory_hotplug.h                     |   3 +
 mm/memory_hotplug.c                                |  95 ++++--
 21 files changed, 826 insertions(+), 322 deletions(-)
 create mode 100644 drivers/cxl/core/dax_region.c
 create mode 100644 drivers/cxl/core/pmem_region.c
 create mode 100644 drivers/cxl/core/sysram_region.c

--
2.52.0
Modify online_memory_block() to accept the online type through its arg
parameter rather than calling mhp_get_default_online_type() internally.
This prepares for allowing callers to specify explicit online types.

Update the caller in add_memory_resource() to pass the default online
type via a local variable. No functional change.

Cc: Oscar Salvador <osalvador@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
Signed-off-by: Gregory Price <gourry@gourry.net>
---
 mm/memory_hotplug.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index bc805029da51..87796b617d9e 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1337,7 +1337,9 @@ static int check_hotplug_memory_range(u64 start, u64 size)

 static int online_memory_block(struct memory_block *mem, void *arg)
 {
-	mem->online_type = mhp_get_default_online_type();
+	int *online_type = arg;
+
+	mem->online_type = *online_type;
 	return device_online(&mem->dev);
 }

@@ -1578,8 +1580,12 @@ int add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags)
 	merge_system_ram_resource(res);

 	/* online pages if requested */
-	if (mhp_get_default_online_type() != MMOP_OFFLINE)
-		walk_memory_blocks(start, size, NULL, online_memory_block);
+	if (mhp_get_default_online_type() != MMOP_OFFLINE) {
+		int online_type = mhp_get_default_online_type();
+
+		walk_memory_blocks(start, size, &online_type,
+				   online_memory_block);
+	}

 	return ret;
 error:
--
2.49.0
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:04:34 -0500", "thread_id": "20260202175711.000021d4@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
Enable dax kmem driver to select how to online the memory rather than
implicitly depending on the system default. This will allow users of dax
to plumb through a preferred auto-online policy for their region.

Refactor and new interface: Add __add_memory_driver_managed() which
accepts an explicit online_type and export mhp_get_default_online_type()
so callers can pass it when they want the default behavior.

Refactor: Extract __add_memory_resource() to take an explicit online_type
parameter, and update add_memory_resource() to pass the system default.

No functional change for existing users.

Cc: David Hildenbrand <david@kernel.org>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Gregory Price <gourry@gourry.net>
---
 include/linux/memory_hotplug.h |  3 ++
 mm/memory_hotplug.c            | 91 ++++++++++++++++++++++++----------
 2 files changed, 67 insertions(+), 27 deletions(-)

diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index f2f16cdd73ee..1eb63d1a247d 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -293,6 +293,9 @@ extern int __add_memory(int nid, u64 start, u64 size, mhp_t mhp_flags);
 extern int add_memory(int nid, u64 start, u64 size, mhp_t mhp_flags);
 extern int add_memory_resource(int nid, struct resource *resource,
 			       mhp_t mhp_flags);
+int __add_memory_driver_managed(int nid, u64 start, u64 size,
+				const char *resource_name, mhp_t mhp_flags,
+				int online_type);
 extern int add_memory_driver_managed(int nid, u64 start, u64 size,
 				     const char *resource_name,
 				     mhp_t mhp_flags);
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 87796b617d9e..d3ca95b872bd 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -239,6 +239,7 @@ int mhp_get_default_online_type(void)
 	return mhp_default_online_type;
 }
+EXPORT_SYMBOL_GPL(mhp_get_default_online_type);

 void mhp_set_default_online_type(int online_type)
 {
@@ -1490,7 +1491,8 @@ static int create_altmaps_and_memory_blocks(int nid, struct memory_group *group,
  *
  * we are OK calling __meminit stuff here - we have CONFIG_MEMORY_HOTPLUG
  */
-int add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags)
+static int __add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags,
+				 int online_type)
 {
 	struct mhp_params params = { .pgprot = pgprot_mhp(PAGE_KERNEL) };
 	enum memblock_flags memblock_flags = MEMBLOCK_NONE;
@@ -1580,12 +1582,9 @@ int add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags)
 	merge_system_ram_resource(res);

 	/* online pages if requested */
-	if (mhp_get_default_online_type() != MMOP_OFFLINE) {
-		int online_type = mhp_get_default_online_type();
-
+	if (online_type != MMOP_OFFLINE)
 		walk_memory_blocks(start, size, &online_type,
 				   online_memory_block);
-	}

 	return ret;
 error:
@@ -1601,7 +1600,13 @@ int add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags)
 	return ret;
 }

-/* requires device_hotplug_lock, see add_memory_resource() */
+int add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags)
+{
+	return __add_memory_resource(nid, res, mhp_flags,
+				     mhp_get_default_online_type());
+}
+
+/* requires device_hotplug_lock, see __add_memory_resource() */
 int __add_memory(int nid, u64 start, u64 size, mhp_t mhp_flags)
 {
 	struct resource *res;
@@ -1629,29 +1634,24 @@ int add_memory(int nid, u64 start, u64 size, mhp_t mhp_flags)
 }
 EXPORT_SYMBOL_GPL(add_memory);

-/*
- * Add special, driver-managed memory to the system as system RAM. Such
- * memory is not exposed via the raw firmware-provided memmap as system
- * RAM, instead, it is detected and added by a driver - during cold boot,
- * after a reboot, and after kexec.
- *
- * Reasons why this memory should not be used for the initial memmap of a
- * kexec kernel or for placing kexec images:
- * - The booting kernel is in charge of determining how this memory will be
- *   used (e.g., use persistent memory as system RAM)
- * - Coordination with a hypervisor is required before this memory
- *   can be used (e.g., inaccessible parts).
+/**
+ * __add_memory_driver_managed - add driver-managed memory with explicit online_type
+ * @nid: NUMA node ID where the memory will be added
+ * @start: Start physical address of the memory range
+ * @size: Size of the memory range in bytes
+ * @resource_name: Resource name in format "System RAM ($DRIVER)"
+ * @mhp_flags: Memory hotplug flags
+ * @online_type: Online behavior (MMOP_ONLINE, MMOP_ONLINE_KERNEL,
+ *               MMOP_ONLINE_MOVABLE, or MMOP_OFFLINE)
  *
- * For this memory, no entries in /sys/firmware/memmap ("raw firmware-provided
- * memory map") are created. Also, the created memory resource is flagged
- * with IORESOURCE_SYSRAM_DRIVER_MANAGED, so in-kernel users can special-case
- * this memory as well (esp., not place kexec images onto it).
+ * Add driver-managed memory with explicit online_type specification.
+ * The resource_name must have the format "System RAM ($DRIVER)".
  *
- * The resource_name (visible via /proc/iomem) has to have the format
- * "System RAM ($DRIVER)".
+ * Return: 0 on success, negative error code on failure.
  */
-int add_memory_driver_managed(int nid, u64 start, u64 size,
-			      const char *resource_name, mhp_t mhp_flags)
+int __add_memory_driver_managed(int nid, u64 start, u64 size,
+				const char *resource_name, mhp_t mhp_flags,
+				int online_type)
 {
 	struct resource *res;
 	int rc;
@@ -1661,6 +1661,9 @@ int add_memory_driver_managed(int nid, u64 start, u64 size,
 	    resource_name[strlen(resource_name) - 1] != ')')
 		return -EINVAL;

+	if (online_type < 0 || online_type > MMOP_ONLINE_MOVABLE)
+		return -EINVAL;
+
 	lock_device_hotplug();

 	res = register_memory_resource(start, size, resource_name);
@@ -1669,7 +1672,7 @@ int add_memory_driver_managed(int nid, u64 start, u64 size,
 		goto out_unlock;
 	}

-	rc = add_memory_resource(nid, res, mhp_flags);
+	rc = __add_memory_resource(nid, res, mhp_flags, online_type);
 	if (rc < 0)
 		release_memory_resource(res);

@@ -1677,6 +1680,40 @@ int add_memory_driver_managed(int nid, u64 start, u64 size,
 	unlock_device_hotplug();
 	return rc;
 }
+EXPORT_SYMBOL_FOR_MODULES(__add_memory_driver_managed, "kmem");
+
+/*
+ * Add special, driver-managed memory to the system as system RAM. Such
+ * memory is not exposed via the raw firmware-provided memmap as system
+ * RAM, instead, it is detected and added by a driver - during cold boot,
+ * after a reboot, and after kexec.
+ *
+ * Reasons why this memory should not be used for the initial memmap of a
+ * kexec kernel or for placing kexec images:
+ * - The booting kernel is in charge of determining how this memory will be
+ *   used (e.g., use persistent memory as system RAM)
+ * - Coordination with a hypervisor is required before this memory
+ *   can be used (e.g., inaccessible parts).
+ *
+ * For this memory, no entries in /sys/firmware/memmap ("raw firmware-provided
+ * memory map") are created. Also, the created memory resource is flagged
+ * with IORESOURCE_SYSRAM_DRIVER_MANAGED, so in-kernel users can special-case
+ * this memory as well (esp., not place kexec images onto it).
+ *
+ * The resource_name (visible via /proc/iomem) has to have the format
+ * "System RAM ($DRIVER)".
+ *
+ * Memory will be onlined using the system default online type.
+ *
+ * Returns 0 on success, negative error code on failure.
+ */
+int add_memory_driver_managed(int nid, u64 start, u64 size,
+			      const char *resource_name, mhp_t mhp_flags)
+{
+	return __add_memory_driver_managed(nid, start, size, resource_name,
+					   mhp_flags,
+					   mhp_get_default_online_type());
+}
 EXPORT_SYMBOL_GPL(add_memory_driver_managed);

 /*
--
2.52.0
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:04:35 -0500", "thread_id": "20260202175711.000021d4@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
Currently, CXL regions that create DAX devices have no mechanism to control select the hotplug online policy for kmem regions at region creation time. Users must either rely on a build-time default or manually configure each memory block after hotplug occurs. Additionally, there is no explicit way to choose between device_dax and dax_kmem modes at region creation time - regions default to kmem. This series addresses both issues by: 1. Plumbing an online_type parameter through the memory hotplug path, from mm/memory_hotplug through the DAX layer, enabling drivers to specify the desired policy (offline, online, online_movable). 2. Adding infrastructure for explicit dax driver selection (kmem vs device) when creating CXL DAX regions. 3. Introducing new CXL region drivers that provide a two-stage binding process with user-configurable policy between region creation and memory hotplug. The new drivers are: - cxl_devdax_region: Creates dax_regions that bind to device_dax driver - cxl_sysram_region: Creates sysram_region devices with hotplug policy - cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions The sysram_region device exposes an 'online_type' sysfs attribute allowing users to configure the memory online type before hotplug: echo region0 > cxl_sysram_region/bind echo online_movable > sysram_region0/online_type echo sysram_region0 > cxl_dax_kmem_region/bind This enables explicit control over both the dax driver mode and the memory hotplug policy for CXL memory regions. In the future, with DCD regions, this will also provide a policy step which dictates how extents will be surfaces and managed (e.g. if the dc region is bound to the sysram driver, it will surface as system memory, while the devdax driver will surface extents as new devdax). 
Gregory Price (9):
  mm/memory_hotplug: pass online_type to online_memory_block() via arg
  mm/memory_hotplug: add __add_memory_driver_managed() with online_type arg
  dax: plumb online_type from dax_kmem creators to hotplug
  drivers/cxl,dax: add dax driver mode selection for dax regions
  cxl/core/region: move pmem region driver logic into pmem_region
  cxl/core/region: move dax region device logic into dax_region.c
  cxl/core: add cxl_devdax_region driver for explicit userland region binding
  cxl/core: Add dax_kmem_region and sysram_region drivers
  Documentation/driver-api/cxl: add dax and sysram driver documentation

 Documentation/ABI/testing/sysfs-bus-cxl            |  21 ++
 .../driver-api/cxl/linux/cxl-driver.rst            |  43 +++
 .../driver-api/cxl/linux/dax-driver.rst            |  29 ++
 drivers/cxl/core/Makefile                          |   3 +
 drivers/cxl/core/core.h                            |  11 +
 drivers/cxl/core/dax_region.c                      | 179 ++++++++++
 drivers/cxl/core/pmem_region.c                     | 191 +++++++++++
 drivers/cxl/core/port.c                            |   2 +
 drivers/cxl/core/region.c                          | 321 ++----------------
 drivers/cxl/core/sysram_region.c                   | 180 ++++++++++
 drivers/cxl/cxl.h                                  |  29 ++
 drivers/dax/bus.c                                  |   3 +
 drivers/dax/bus.h                                  |   7 +-
 drivers/dax/cxl.c                                  |   7 +-
 drivers/dax/dax-private.h                          |   2 +
 drivers/dax/hmem/hmem.c                            |   2 +
 drivers/dax/kmem.c                                 |  13 +-
 drivers/dax/pmem.c                                 |   2 +
 include/linux/dax.h                                |   5 +
 include/linux/memory_hotplug.h                     |   3 +
 mm/memory_hotplug.c                                |  95 ++++--
 21 files changed, 826 insertions(+), 322 deletions(-)
 create mode 100644 drivers/cxl/core/dax_region.c
 create mode 100644 drivers/cxl/core/pmem_region.c
 create mode 100644 drivers/cxl/core/sysram_region.c

--
2.52.0
There is no way for drivers leveraging dax_kmem to plumb through a preferred auto-online policy - the system default policy is forced. Add an online_type field to the DAX device creation path to allow drivers to specify an auto-online policy when using the kmem driver. Current callers initialize online_type to mhp_get_default_online_type(), which resolves to the system default (memhp_default_online_type). No functional change to existing drivers. Cc: David Hildenbrand <david@kernel.org> Signed-off-by: Gregory Price <gourry@gourry.net> --- drivers/cxl/core/region.c | 2 ++ drivers/cxl/cxl.h | 1 + drivers/dax/bus.c | 3 +++ drivers/dax/bus.h | 1 + drivers/dax/cxl.c | 1 + drivers/dax/dax-private.h | 2 ++ drivers/dax/hmem/hmem.c | 2 ++ drivers/dax/kmem.c | 13 +++++++++++-- drivers/dax/pmem.c | 2 ++ 9 files changed, 25 insertions(+), 2 deletions(-) diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c index 5bd1213737fa..eef5d5fe3f95 100644 --- a/drivers/cxl/core/region.c +++ b/drivers/cxl/core/region.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0-only /* Copyright(c) 2022 Intel Corporation. All rights reserved. 
*/ #include <linux/memregion.h> +#include <linux/memory_hotplug.h> #include <linux/genalloc.h> #include <linux/debugfs.h> #include <linux/device.h> @@ -3459,6 +3460,7 @@ static int devm_cxl_add_dax_region(struct cxl_region *cxlr) if (IS_ERR(cxlr_dax)) return PTR_ERR(cxlr_dax); + cxlr_dax->online_type = mhp_get_default_online_type(); dev = &cxlr_dax->dev; rc = dev_set_name(dev, "dax_region%d", cxlr->id); if (rc) diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index ba17fa86d249..07d57d13f4c7 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -591,6 +591,7 @@ struct cxl_dax_region { struct device dev; struct cxl_region *cxlr; struct range hpa_range; + int online_type; /* MMOP_ value for kmem driver */ }; /** diff --git a/drivers/dax/bus.c b/drivers/dax/bus.c index fde29e0ad68b..121a6dd0afe7 100644 --- a/drivers/dax/bus.c +++ b/drivers/dax/bus.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright(c) 2017-2018 Intel Corporation. All rights reserved. */ #include <linux/memremap.h> +#include <linux/memory_hotplug.h> #include <linux/device.h> #include <linux/mutex.h> #include <linux/list.h> @@ -395,6 +396,7 @@ static ssize_t create_store(struct device *dev, struct device_attribute *attr, .size = 0, .id = -1, .memmap_on_memory = false, + .online_type = mhp_get_default_online_type(), }; struct dev_dax *dev_dax = __devm_create_dev_dax(&data); @@ -1494,6 +1496,7 @@ static struct dev_dax *__devm_create_dev_dax(struct dev_dax_data *data) ida_init(&dev_dax->ida); dev_dax->memmap_on_memory = data->memmap_on_memory; + dev_dax->online_type = data->online_type; inode = dax_inode(dax_dev); dev->devt = inode->i_rdev; diff --git a/drivers/dax/bus.h b/drivers/dax/bus.h index cbbf64443098..4ac92a4edfe7 100644 --- a/drivers/dax/bus.h +++ b/drivers/dax/bus.h @@ -24,6 +24,7 @@ struct dev_dax_data { resource_size_t size; int id; bool memmap_on_memory; + int online_type; /* MMOP_ value for kmem driver */ }; struct dev_dax *devm_create_dev_dax(struct dev_dax_data *data); 
diff --git a/drivers/dax/cxl.c b/drivers/dax/cxl.c index 13cd94d32ff7..856a0cd24f3b 100644 --- a/drivers/dax/cxl.c +++ b/drivers/dax/cxl.c @@ -27,6 +27,7 @@ static int cxl_dax_region_probe(struct device *dev) .id = -1, .size = range_len(&cxlr_dax->hpa_range), .memmap_on_memory = true, + .online_type = cxlr_dax->online_type, }; return PTR_ERR_OR_ZERO(devm_create_dev_dax(&data)); diff --git a/drivers/dax/dax-private.h b/drivers/dax/dax-private.h index c6ae27c982f4..9559718cc988 100644 --- a/drivers/dax/dax-private.h +++ b/drivers/dax/dax-private.h @@ -77,6 +77,7 @@ struct dev_dax_range { * @dev: device core * @pgmap: pgmap for memmap setup / lifetime (driver owned) * @memmap_on_memory: allow kmem to put the memmap in the memory + * @online_type: MMOP_* online type for memory hotplug * @nr_range: size of @ranges * @ranges: range tuples of memory used */ @@ -91,6 +92,7 @@ struct dev_dax { struct device dev; struct dev_pagemap *pgmap; bool memmap_on_memory; + int online_type; int nr_range; struct dev_dax_range *ranges; }; diff --git a/drivers/dax/hmem/hmem.c b/drivers/dax/hmem/hmem.c index c18451a37e4f..119914b08fd9 100644 --- a/drivers/dax/hmem/hmem.c +++ b/drivers/dax/hmem/hmem.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 #include <linux/platform_device.h> +#include <linux/memory_hotplug.h> #include <linux/memregion.h> #include <linux/module.h> #include <linux/dax.h> @@ -36,6 +37,7 @@ static int dax_hmem_probe(struct platform_device *pdev) .id = -1, .size = region_idle ? 
0 : range_len(&mri->range), .memmap_on_memory = false, + .online_type = mhp_get_default_online_type(), }; return PTR_ERR_OR_ZERO(devm_create_dev_dax(&data)); diff --git a/drivers/dax/kmem.c b/drivers/dax/kmem.c index c036e4d0b610..550dc605229e 100644 --- a/drivers/dax/kmem.c +++ b/drivers/dax/kmem.c @@ -16,6 +16,11 @@ #include "dax-private.h" #include "bus.h" +/* Internal function exported only to kmem module */ +extern int __add_memory_driver_managed(int nid, u64 start, u64 size, + const char *resource_name, + mhp_t mhp_flags, int online_type); + /* * Default abstract distance assigned to the NUMA node onlined * by DAX/kmem if the low level platform driver didn't initialize @@ -72,6 +77,7 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax) struct dax_kmem_data *data; struct memory_dev_type *mtype; int i, rc, mapped = 0; + int online_type; mhp_t mhp_flags; int numa_node; int adist = MEMTIER_DEFAULT_DAX_ADISTANCE; @@ -134,6 +140,8 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax) goto err_reg_mgid; data->mgid = rc; + online_type = dev_dax->online_type; + for (i = 0; i < dev_dax->nr_range; i++) { struct resource *res; struct range range; @@ -174,8 +182,9 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax) * Ensure that future kexec'd kernels will not treat * this as RAM automatically. */ - rc = add_memory_driver_managed(data->mgid, range.start, - range_len(&range), kmem_name, mhp_flags); + rc = __add_memory_driver_managed(data->mgid, range.start, + range_len(&range), kmem_name, mhp_flags, + online_type); if (rc) { dev_warn(dev, "mapping%d: %#llx-%#llx memory add failed\n", diff --git a/drivers/dax/pmem.c b/drivers/dax/pmem.c index bee93066a849..a5925146b09f 100644 --- a/drivers/dax/pmem.c +++ b/drivers/dax/pmem.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright(c) 2016 - 2018 Intel Corporation. All rights reserved. 
*/ +#include <linux/memory_hotplug.h> #include <linux/memremap.h> #include <linux/module.h> #include "../nvdimm/pfn.h" @@ -63,6 +64,7 @@ static struct dev_dax *__dax_pmem_probe(struct device *dev) .pgmap = &pgmap, .size = range_len(&range), .memmap_on_memory = false, + .online_type = mhp_get_default_online_type(), }; return devm_create_dev_dax(&data); -- 2.52.0
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:04:36 -0500", "thread_id": "20260202175711.000021d4@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
Move the pmem region driver logic from region.c into pmem_region.c. No functional changes. Signed-off-by: Gregory Price <gourry@gourry.net> --- drivers/cxl/core/Makefile | 1 + drivers/cxl/core/core.h | 1 + drivers/cxl/core/pmem_region.c | 191 +++++++++++++++++++++++++++++++++ drivers/cxl/core/region.c | 184 ------------------------------- 4 files changed, 193 insertions(+), 184 deletions(-) create mode 100644 drivers/cxl/core/pmem_region.c diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile index 5ad8fef210b5..23269c81fd44 100644 --- a/drivers/cxl/core/Makefile +++ b/drivers/cxl/core/Makefile @@ -17,6 +17,7 @@ cxl_core-y += cdat.o cxl_core-y += ras.o cxl_core-$(CONFIG_TRACING) += trace.o cxl_core-$(CONFIG_CXL_REGION) += region.o +cxl_core-$(CONFIG_CXL_REGION) += pmem_region.o cxl_core-$(CONFIG_CXL_MCE) += mce.o cxl_core-$(CONFIG_CXL_FEATURES) += features.o cxl_core-$(CONFIG_CXL_EDAC_MEM_FEATURES) += edac.o diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h index dd987ef2def5..26991de12d76 100644 --- a/drivers/cxl/core/core.h +++ b/drivers/cxl/core/core.h @@ -43,6 +43,7 @@ int cxl_get_poison_by_endpoint(struct cxl_port *port); struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa); u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, const struct cxl_memdev *cxlmd, u64 dpa); +int devm_cxl_add_pmem_region(struct cxl_region *cxlr); #else static inline u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, diff --git a/drivers/cxl/core/pmem_region.c b/drivers/cxl/core/pmem_region.c new file mode 100644 index 000000000000..81b66e548bb5 --- /dev/null +++ b/drivers/cxl/core/pmem_region.c @@ -0,0 +1,191 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright(c) 2022 Intel Corporation. All rights reserved. 
*/ +#include <linux/device.h> +#include <linux/slab.h> +#include <cxlmem.h> +#include <cxl.h> +#include "core.h" + +static void cxl_pmem_region_release(struct device *dev) +{ + struct cxl_pmem_region *cxlr_pmem = to_cxl_pmem_region(dev); + int i; + + for (i = 0; i < cxlr_pmem->nr_mappings; i++) { + struct cxl_memdev *cxlmd = cxlr_pmem->mapping[i].cxlmd; + + put_device(&cxlmd->dev); + } + + kfree(cxlr_pmem); +} + +static const struct attribute_group *cxl_pmem_region_attribute_groups[] = { + &cxl_base_attribute_group, + NULL, +}; + +const struct device_type cxl_pmem_region_type = { + .name = "cxl_pmem_region", + .release = cxl_pmem_region_release, + .groups = cxl_pmem_region_attribute_groups, +}; +bool is_cxl_pmem_region(struct device *dev) +{ + return dev->type == &cxl_pmem_region_type; +} +EXPORT_SYMBOL_NS_GPL(is_cxl_pmem_region, "CXL"); + +struct cxl_pmem_region *to_cxl_pmem_region(struct device *dev) +{ + if (dev_WARN_ONCE(dev, !is_cxl_pmem_region(dev), + "not a cxl_pmem_region device\n")) + return NULL; + return container_of(dev, struct cxl_pmem_region, dev); +} +EXPORT_SYMBOL_NS_GPL(to_cxl_pmem_region, "CXL"); +static struct lock_class_key cxl_pmem_region_key; + +static int cxl_pmem_region_alloc(struct cxl_region *cxlr) +{ + struct cxl_region_params *p = &cxlr->params; + struct cxl_nvdimm_bridge *cxl_nvb; + struct device *dev; + int i; + + guard(rwsem_read)(&cxl_rwsem.region); + if (p->state != CXL_CONFIG_COMMIT) + return -ENXIO; + + struct cxl_pmem_region *cxlr_pmem __free(kfree) = + kzalloc(struct_size(cxlr_pmem, mapping, p->nr_targets), GFP_KERNEL); + if (!cxlr_pmem) + return -ENOMEM; + + cxlr_pmem->hpa_range.start = p->res->start; + cxlr_pmem->hpa_range.end = p->res->end; + + /* Snapshot the region configuration underneath the cxl_rwsem.region */ + cxlr_pmem->nr_mappings = p->nr_targets; + for (i = 0; i < p->nr_targets; i++) { + struct cxl_endpoint_decoder *cxled = p->targets[i]; + struct cxl_memdev *cxlmd = cxled_to_memdev(cxled); + struct 
cxl_pmem_region_mapping *m = &cxlr_pmem->mapping[i]; + + /* + * Regions never span CXL root devices, so by definition the + * bridge for one device is the same for all. + */ + if (i == 0) { + cxl_nvb = cxl_find_nvdimm_bridge(cxlmd->endpoint); + if (!cxl_nvb) + return -ENODEV; + cxlr->cxl_nvb = cxl_nvb; + } + m->cxlmd = cxlmd; + get_device(&cxlmd->dev); + m->start = cxled->dpa_res->start; + m->size = resource_size(cxled->dpa_res); + m->position = i; + } + + dev = &cxlr_pmem->dev; + device_initialize(dev); + lockdep_set_class(&dev->mutex, &cxl_pmem_region_key); + device_set_pm_not_required(dev); + dev->parent = &cxlr->dev; + dev->bus = &cxl_bus_type; + dev->type = &cxl_pmem_region_type; + cxlr_pmem->cxlr = cxlr; + cxlr->cxlr_pmem = no_free_ptr(cxlr_pmem); + + return 0; +} + +static void cxlr_pmem_unregister(void *_cxlr_pmem) +{ + struct cxl_pmem_region *cxlr_pmem = _cxlr_pmem; + struct cxl_region *cxlr = cxlr_pmem->cxlr; + struct cxl_nvdimm_bridge *cxl_nvb = cxlr->cxl_nvb; + + /* + * Either the bridge is in ->remove() context under the device_lock(), + * or cxlr_release_nvdimm() is cancelling the bridge's release action + * for @cxlr_pmem and doing it itself (while manually holding the bridge + * lock). + */ + device_lock_assert(&cxl_nvb->dev); + cxlr->cxlr_pmem = NULL; + cxlr_pmem->cxlr = NULL; + device_unregister(&cxlr_pmem->dev); +} + +static void cxlr_release_nvdimm(void *_cxlr) +{ + struct cxl_region *cxlr = _cxlr; + struct cxl_nvdimm_bridge *cxl_nvb = cxlr->cxl_nvb; + + scoped_guard(device, &cxl_nvb->dev) { + if (cxlr->cxlr_pmem) + devm_release_action(&cxl_nvb->dev, cxlr_pmem_unregister, + cxlr->cxlr_pmem); + } + cxlr->cxl_nvb = NULL; + put_device(&cxl_nvb->dev); +} + +/** + * devm_cxl_add_pmem_region() - add a cxl_region-to-nd_region bridge + * @cxlr: parent CXL region for this pmem region bridge device + * + * Return: 0 on success negative error code on failure. 
+ */ +int devm_cxl_add_pmem_region(struct cxl_region *cxlr) +{ + struct cxl_pmem_region *cxlr_pmem; + struct cxl_nvdimm_bridge *cxl_nvb; + struct device *dev; + int rc; + + rc = cxl_pmem_region_alloc(cxlr); + if (rc) + return rc; + cxlr_pmem = cxlr->cxlr_pmem; + cxl_nvb = cxlr->cxl_nvb; + + dev = &cxlr_pmem->dev; + rc = dev_set_name(dev, "pmem_region%d", cxlr->id); + if (rc) + goto err; + + rc = device_add(dev); + if (rc) + goto err; + + dev_dbg(&cxlr->dev, "%s: register %s\n", dev_name(dev->parent), + dev_name(dev)); + + scoped_guard(device, &cxl_nvb->dev) { + if (cxl_nvb->dev.driver) + rc = devm_add_action_or_reset(&cxl_nvb->dev, + cxlr_pmem_unregister, + cxlr_pmem); + else + rc = -ENXIO; + } + + if (rc) + goto err_bridge; + + /* @cxlr carries a reference on @cxl_nvb until cxlr_release_nvdimm */ + return devm_add_action_or_reset(&cxlr->dev, cxlr_release_nvdimm, cxlr); + +err: + put_device(dev); +err_bridge: + put_device(&cxl_nvb->dev); + cxlr->cxl_nvb = NULL; + return rc; +} + + diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c index e4097c464ed3..fc56f8f03805 100644 --- a/drivers/cxl/core/region.c +++ b/drivers/cxl/core/region.c @@ -2747,46 +2747,6 @@ static ssize_t delete_region_store(struct device *dev, } DEVICE_ATTR_WO(delete_region); -static void cxl_pmem_region_release(struct device *dev) -{ - struct cxl_pmem_region *cxlr_pmem = to_cxl_pmem_region(dev); - int i; - - for (i = 0; i < cxlr_pmem->nr_mappings; i++) { - struct cxl_memdev *cxlmd = cxlr_pmem->mapping[i].cxlmd; - - put_device(&cxlmd->dev); - } - - kfree(cxlr_pmem); -} - -static const struct attribute_group *cxl_pmem_region_attribute_groups[] = { - &cxl_base_attribute_group, - NULL, -}; - -const struct device_type cxl_pmem_region_type = { - .name = "cxl_pmem_region", - .release = cxl_pmem_region_release, - .groups = cxl_pmem_region_attribute_groups, -}; - -bool is_cxl_pmem_region(struct device *dev) -{ - return dev->type == &cxl_pmem_region_type; -} 
-EXPORT_SYMBOL_NS_GPL(is_cxl_pmem_region, "CXL"); - -struct cxl_pmem_region *to_cxl_pmem_region(struct device *dev) -{ - if (dev_WARN_ONCE(dev, !is_cxl_pmem_region(dev), - "not a cxl_pmem_region device\n")) - return NULL; - return container_of(dev, struct cxl_pmem_region, dev); -} -EXPORT_SYMBOL_NS_GPL(to_cxl_pmem_region, "CXL"); - struct cxl_poison_context { struct cxl_port *port; int part; @@ -3236,64 +3196,6 @@ static int region_offset_to_dpa_result(struct cxl_region *cxlr, u64 offset, return -ENXIO; } -static struct lock_class_key cxl_pmem_region_key; - -static int cxl_pmem_region_alloc(struct cxl_region *cxlr) -{ - struct cxl_region_params *p = &cxlr->params; - struct cxl_nvdimm_bridge *cxl_nvb; - struct device *dev; - int i; - - guard(rwsem_read)(&cxl_rwsem.region); - if (p->state != CXL_CONFIG_COMMIT) - return -ENXIO; - - struct cxl_pmem_region *cxlr_pmem __free(kfree) = - kzalloc(struct_size(cxlr_pmem, mapping, p->nr_targets), GFP_KERNEL); - if (!cxlr_pmem) - return -ENOMEM; - - cxlr_pmem->hpa_range.start = p->res->start; - cxlr_pmem->hpa_range.end = p->res->end; - - /* Snapshot the region configuration underneath the cxl_rwsem.region */ - cxlr_pmem->nr_mappings = p->nr_targets; - for (i = 0; i < p->nr_targets; i++) { - struct cxl_endpoint_decoder *cxled = p->targets[i]; - struct cxl_memdev *cxlmd = cxled_to_memdev(cxled); - struct cxl_pmem_region_mapping *m = &cxlr_pmem->mapping[i]; - - /* - * Regions never span CXL root devices, so by definition the - * bridge for one device is the same for all. 
- */ - if (i == 0) { - cxl_nvb = cxl_find_nvdimm_bridge(cxlmd->endpoint); - if (!cxl_nvb) - return -ENODEV; - cxlr->cxl_nvb = cxl_nvb; - } - m->cxlmd = cxlmd; - get_device(&cxlmd->dev); - m->start = cxled->dpa_res->start; - m->size = resource_size(cxled->dpa_res); - m->position = i; - } - - dev = &cxlr_pmem->dev; - device_initialize(dev); - lockdep_set_class(&dev->mutex, &cxl_pmem_region_key); - device_set_pm_not_required(dev); - dev->parent = &cxlr->dev; - dev->bus = &cxl_bus_type; - dev->type = &cxl_pmem_region_type; - cxlr_pmem->cxlr = cxlr; - cxlr->cxlr_pmem = no_free_ptr(cxlr_pmem); - - return 0; -} - static void cxl_dax_region_release(struct device *dev) { struct cxl_dax_region *cxlr_dax = to_cxl_dax_region(dev); @@ -3357,92 +3259,6 @@ static struct cxl_dax_region *cxl_dax_region_alloc(struct cxl_region *cxlr) return cxlr_dax; } -static void cxlr_pmem_unregister(void *_cxlr_pmem) -{ - struct cxl_pmem_region *cxlr_pmem = _cxlr_pmem; - struct cxl_region *cxlr = cxlr_pmem->cxlr; - struct cxl_nvdimm_bridge *cxl_nvb = cxlr->cxl_nvb; - - /* - * Either the bridge is in ->remove() context under the device_lock(), - * or cxlr_release_nvdimm() is cancelling the bridge's release action - * for @cxlr_pmem and doing it itself (while manually holding the bridge - * lock). - */ - device_lock_assert(&cxl_nvb->dev); - cxlr->cxlr_pmem = NULL; - cxlr_pmem->cxlr = NULL; - device_unregister(&cxlr_pmem->dev); -} - -static void cxlr_release_nvdimm(void *_cxlr) -{ - struct cxl_region *cxlr = _cxlr; - struct cxl_nvdimm_bridge *cxl_nvb = cxlr->cxl_nvb; - - scoped_guard(device, &cxl_nvb->dev) { - if (cxlr->cxlr_pmem) - devm_release_action(&cxl_nvb->dev, cxlr_pmem_unregister, - cxlr->cxlr_pmem); - } - cxlr->cxl_nvb = NULL; - put_device(&cxl_nvb->dev); -} - -/** - * devm_cxl_add_pmem_region() - add a cxl_region-to-nd_region bridge - * @cxlr: parent CXL region for this pmem region bridge device - * - * Return: 0 on success negative error code on failure. 
- */ -static int devm_cxl_add_pmem_region(struct cxl_region *cxlr) -{ - struct cxl_pmem_region *cxlr_pmem; - struct cxl_nvdimm_bridge *cxl_nvb; - struct device *dev; - int rc; - - rc = cxl_pmem_region_alloc(cxlr); - if (rc) - return rc; - cxlr_pmem = cxlr->cxlr_pmem; - cxl_nvb = cxlr->cxl_nvb; - - dev = &cxlr_pmem->dev; - rc = dev_set_name(dev, "pmem_region%d", cxlr->id); - if (rc) - goto err; - - rc = device_add(dev); - if (rc) - goto err; - - dev_dbg(&cxlr->dev, "%s: register %s\n", dev_name(dev->parent), - dev_name(dev)); - - scoped_guard(device, &cxl_nvb->dev) { - if (cxl_nvb->dev.driver) - rc = devm_add_action_or_reset(&cxl_nvb->dev, - cxlr_pmem_unregister, - cxlr_pmem); - else - rc = -ENXIO; - } - - if (rc) - goto err_bridge; - - /* @cxlr carries a reference on @cxl_nvb until cxlr_release_nvdimm */ - return devm_add_action_or_reset(&cxlr->dev, cxlr_release_nvdimm, cxlr); - -err: - put_device(dev); -err_bridge: - put_device(&cxl_nvb->dev); - cxlr->cxl_nvb = NULL; - return rc; -} - static void cxlr_dax_unregister(void *_cxlr_dax) { struct cxl_dax_region *cxlr_dax = _cxlr_dax; -- 2.52.0
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:04:38 -0500", "thread_id": "20260202175711.000021d4@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
Move the CXL DAX region device infrastructure from region.c into a new dax_region.c file. No functional changes. Signed-off-by: Gregory Price <gourry@gourry.net> --- drivers/cxl/core/Makefile | 1 + drivers/cxl/core/core.h | 1 + drivers/cxl/core/dax_region.c | 113 ++++++++++++++++++++++++++++++++++ drivers/cxl/core/region.c | 102 ------------------------------ 4 files changed, 115 insertions(+), 102 deletions(-) create mode 100644 drivers/cxl/core/dax_region.c diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile index 23269c81fd44..36f284d7c500 100644 --- a/drivers/cxl/core/Makefile +++ b/drivers/cxl/core/Makefile @@ -17,6 +17,7 @@ cxl_core-y += cdat.o cxl_core-y += ras.o cxl_core-$(CONFIG_TRACING) += trace.o cxl_core-$(CONFIG_CXL_REGION) += region.o +cxl_core-$(CONFIG_CXL_REGION) += dax_region.o cxl_core-$(CONFIG_CXL_REGION) += pmem_region.o cxl_core-$(CONFIG_CXL_MCE) += mce.o cxl_core-$(CONFIG_CXL_FEATURES) += features.o diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h index 26991de12d76..217dd708a2a6 100644 --- a/drivers/cxl/core/core.h +++ b/drivers/cxl/core/core.h @@ -43,6 +43,7 @@ int cxl_get_poison_by_endpoint(struct cxl_port *port); struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa); u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, const struct cxl_memdev *cxlmd, u64 dpa); +int devm_cxl_add_dax_region(struct cxl_region *cxlr, enum dax_driver_type); int devm_cxl_add_pmem_region(struct cxl_region *cxlr); #else diff --git a/drivers/cxl/core/dax_region.c b/drivers/cxl/core/dax_region.c new file mode 100644 index 000000000000..0602db5f7248 --- /dev/null +++ b/drivers/cxl/core/dax_region.c @@ -0,0 +1,113 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright(c) 2022 Intel Corporation. All rights reserved. + * Copyright(c) 2026 Meta Technologies Inc. All rights reserved. 
 + */
+#include <linux/memory_hotplug.h>
+#include <linux/device.h>
+#include <linux/slab.h>
+#include <cxlmem.h>
+#include <cxl.h>
+#include "core.h"
+
+static void cxl_dax_region_release(struct device *dev)
+{
+	struct cxl_dax_region *cxlr_dax = to_cxl_dax_region(dev);
+
+	kfree(cxlr_dax);
+}
+
+static const struct attribute_group *cxl_dax_region_attribute_groups[] = {
+	&cxl_base_attribute_group,
+	NULL,
+};
+
+const struct device_type cxl_dax_region_type = {
+	.name = "cxl_dax_region",
+	.release = cxl_dax_region_release,
+	.groups = cxl_dax_region_attribute_groups,
+};
+
+static bool is_cxl_dax_region(struct device *dev)
+{
+	return dev->type == &cxl_dax_region_type;
+}
+
+struct cxl_dax_region *to_cxl_dax_region(struct device *dev)
+{
+	if (dev_WARN_ONCE(dev, !is_cxl_dax_region(dev),
+			  "not a cxl_dax_region device\n"))
+		return NULL;
+	return container_of(dev, struct cxl_dax_region, dev);
+}
+EXPORT_SYMBOL_NS_GPL(to_cxl_dax_region, "CXL");
+
+static struct lock_class_key cxl_dax_region_key;
+
+static struct cxl_dax_region *cxl_dax_region_alloc(struct cxl_region *cxlr)
+{
+	struct cxl_region_params *p = &cxlr->params;
+	struct cxl_dax_region *cxlr_dax;
+	struct device *dev;
+
+	guard(rwsem_read)(&cxl_rwsem.region);
+	if (p->state != CXL_CONFIG_COMMIT)
+		return ERR_PTR(-ENXIO);
+
+	cxlr_dax = kzalloc(sizeof(*cxlr_dax), GFP_KERNEL);
+	if (!cxlr_dax)
+		return ERR_PTR(-ENOMEM);
+
+	cxlr_dax->hpa_range.start = p->res->start;
+	cxlr_dax->hpa_range.end = p->res->end;
+
+	dev = &cxlr_dax->dev;
+	cxlr_dax->cxlr = cxlr;
+	device_initialize(dev);
+	lockdep_set_class(&dev->mutex, &cxl_dax_region_key);
+	device_set_pm_not_required(dev);
+	dev->parent = &cxlr->dev;
+	dev->bus = &cxl_bus_type;
+	dev->type = &cxl_dax_region_type;
+
+	return cxlr_dax;
+}
+
+static void cxlr_dax_unregister(void *_cxlr_dax)
+{
+	struct cxl_dax_region *cxlr_dax = _cxlr_dax;
+
+	device_unregister(&cxlr_dax->dev);
+}
+
+int devm_cxl_add_dax_region(struct cxl_region *cxlr,
+			    enum dax_driver_type dax_driver)
+{
+	struct cxl_dax_region *cxlr_dax;
+	struct device *dev;
+	int rc;
+
+	cxlr_dax = cxl_dax_region_alloc(cxlr);
+	if (IS_ERR(cxlr_dax))
+		return PTR_ERR(cxlr_dax);
+
+	cxlr_dax->online_type = mhp_get_default_online_type();
+	cxlr_dax->dax_driver = dax_driver;
+	dev = &cxlr_dax->dev;
+	rc = dev_set_name(dev, "dax_region%d", cxlr->id);
+	if (rc)
+		goto err;
+
+	rc = device_add(dev);
+	if (rc)
+		goto err;
+
+	dev_dbg(&cxlr->dev, "%s: register %s\n", dev_name(dev->parent),
+		dev_name(dev));
+
+	return devm_add_action_or_reset(&cxlr->dev, cxlr_dax_unregister,
+					cxlr_dax);
+err:
+	put_device(dev);
+	return rc;
+}
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index fc56f8f03805..61ec939c1462 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -3196,108 +3196,6 @@ static int region_offset_to_dpa_result(struct cxl_region *cxlr, u64 offset,
 	return -ENXIO;
 }
 
-static void cxl_dax_region_release(struct device *dev)
-{
-	struct cxl_dax_region *cxlr_dax = to_cxl_dax_region(dev);
-
-	kfree(cxlr_dax);
-}
-
-static const struct attribute_group *cxl_dax_region_attribute_groups[] = {
-	&cxl_base_attribute_group,
-	NULL,
-};
-
-const struct device_type cxl_dax_region_type = {
-	.name = "cxl_dax_region",
-	.release = cxl_dax_region_release,
-	.groups = cxl_dax_region_attribute_groups,
-};
-
-static bool is_cxl_dax_region(struct device *dev)
-{
-	return dev->type == &cxl_dax_region_type;
-}
-
-struct cxl_dax_region *to_cxl_dax_region(struct device *dev)
-{
-	if (dev_WARN_ONCE(dev, !is_cxl_dax_region(dev),
-			  "not a cxl_dax_region device\n"))
-		return NULL;
-	return container_of(dev, struct cxl_dax_region, dev);
-}
-EXPORT_SYMBOL_NS_GPL(to_cxl_dax_region, "CXL");
-
-static struct lock_class_key cxl_dax_region_key;
-
-static struct cxl_dax_region *cxl_dax_region_alloc(struct cxl_region *cxlr)
-{
-	struct cxl_region_params *p = &cxlr->params;
-	struct cxl_dax_region *cxlr_dax;
-	struct device *dev;
-
-	guard(rwsem_read)(&cxl_rwsem.region);
-	if (p->state != CXL_CONFIG_COMMIT)
-		return ERR_PTR(-ENXIO);
-
-	cxlr_dax = kzalloc(sizeof(*cxlr_dax), GFP_KERNEL);
-	if (!cxlr_dax)
-		return ERR_PTR(-ENOMEM);
-
-	cxlr_dax->hpa_range.start = p->res->start;
-	cxlr_dax->hpa_range.end = p->res->end;
-
-	dev = &cxlr_dax->dev;
-	cxlr_dax->cxlr = cxlr;
-	device_initialize(dev);
-	lockdep_set_class(&dev->mutex, &cxl_dax_region_key);
-	device_set_pm_not_required(dev);
-	dev->parent = &cxlr->dev;
-	dev->bus = &cxl_bus_type;
-	dev->type = &cxl_dax_region_type;
-
-	return cxlr_dax;
-}
-
-static void cxlr_dax_unregister(void *_cxlr_dax)
-{
-	struct cxl_dax_region *cxlr_dax = _cxlr_dax;
-
-	device_unregister(&cxlr_dax->dev);
-}
-
-static int devm_cxl_add_dax_region(struct cxl_region *cxlr,
-				   enum dax_driver_type dax_driver)
-{
-	struct cxl_dax_region *cxlr_dax;
-	struct device *dev;
-	int rc;
-
-	cxlr_dax = cxl_dax_region_alloc(cxlr);
-	if (IS_ERR(cxlr_dax))
-		return PTR_ERR(cxlr_dax);
-
-	cxlr_dax->online_type = mhp_get_default_online_type();
-	cxlr_dax->dax_driver = dax_driver;
-	dev = &cxlr_dax->dev;
-	rc = dev_set_name(dev, "dax_region%d", cxlr->id);
-	if (rc)
-		goto err;
-
-	rc = device_add(dev);
-	if (rc)
-		goto err;
-
-	dev_dbg(&cxlr->dev, "%s: register %s\n", dev_name(dev->parent),
-		dev_name(dev));
-
-	return devm_add_action_or_reset(&cxlr->dev, cxlr_dax_unregister,
-					cxlr_dax);
-err:
-	put_device(dev);
-	return rc;
-}
-
 static int match_decoder_by_range(struct device *dev, const void *data)
 {
 	const struct range *r1, *r2 = data;
--
2.52.0
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:04:39 -0500", "thread_id": "20260202175711.000021d4@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
Currently, CXL regions that create DAX devices have no mechanism to select the hotplug online policy for kmem regions at region creation time. Users must either rely on a build-time default or manually configure each memory block after hotplug occurs. Additionally, there is no explicit way to choose between device_dax and dax_kmem modes at region creation time - regions default to kmem.

This series addresses both issues by:

1. Plumbing an online_type parameter through the memory hotplug path, from mm/memory_hotplug through the DAX layer, enabling drivers to specify the desired policy (offline, online, online_movable).

2. Adding infrastructure for explicit dax driver selection (kmem vs device) when creating CXL DAX regions.

3. Introducing new CXL region drivers that provide a two-stage binding process with user-configurable policy between region creation and memory hotplug.

The new drivers are:

- cxl_devdax_region: Creates dax_regions that bind to the device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions

The sysram_region device exposes an 'online_type' sysfs attribute allowing users to configure the memory online type before hotplug:

  echo region0 > cxl_sysram_region/bind
  echo online_movable > sysram_region0/online_type
  echo sysram_region0 > cxl_dax_kmem_region/bind

This enables explicit control over both the dax driver mode and the memory hotplug policy for CXL memory regions.

In the future, with DCD regions, this will also provide a policy step which dictates how extents will be surfaced and managed (e.g. if the DC region is bound to the sysram driver, extents will surface as system memory, while the devdax driver will surface extents as new devdax devices).
Gregory Price (9):
  mm/memory_hotplug: pass online_type to online_memory_block() via arg
  mm/memory_hotplug: add __add_memory_driver_managed() with online_type arg
  dax: plumb online_type from dax_kmem creators to hotplug
  drivers/cxl,dax: add dax driver mode selection for dax regions
  cxl/core/region: move pmem region driver logic into pmem_region
  cxl/core/region: move dax region device logic into dax_region.c
  cxl/core: add cxl_devdax_region driver for explicit userland region binding
  cxl/core: Add dax_kmem_region and sysram_region drivers
  Documentation/driver-api/cxl: add dax and sysram driver documentation

 Documentation/ABI/testing/sysfs-bus-cxl       |  21 ++
 .../driver-api/cxl/linux/cxl-driver.rst       |  43 +++
 .../driver-api/cxl/linux/dax-driver.rst       |  29 ++
 drivers/cxl/core/Makefile                     |   3 +
 drivers/cxl/core/core.h                       |  11 +
 drivers/cxl/core/dax_region.c                 | 179 ++++++++++
 drivers/cxl/core/pmem_region.c                | 191 +++++++++++
 drivers/cxl/core/port.c                       |   2 +
 drivers/cxl/core/region.c                     | 321 ++----------------
 drivers/cxl/core/sysram_region.c              | 180 ++++++++++
 drivers/cxl/cxl.h                             |  29 ++
 drivers/dax/bus.c                             |   3 +
 drivers/dax/bus.h                             |   7 +-
 drivers/dax/cxl.c                             |   7 +-
 drivers/dax/dax-private.h                     |   2 +
 drivers/dax/hmem/hmem.c                       |   2 +
 drivers/dax/kmem.c                            |  13 +-
 drivers/dax/pmem.c                            |   2 +
 include/linux/dax.h                           |   5 +
 include/linux/memory_hotplug.h                |   3 +
 mm/memory_hotplug.c                           |  95 ++++--
 21 files changed, 826 insertions(+), 322 deletions(-)
 create mode 100644 drivers/cxl/core/dax_region.c
 create mode 100644 drivers/cxl/core/pmem_region.c
 create mode 100644 drivers/cxl/core/sysram_region.c
--
2.52.0
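The three-step binding flow above can be wrapped in a small helper script. This is a hypothetical convenience wrapper, not part of the series: the sysfs paths follow the layout described in the cover letter, and a DRY_RUN mode only prints the writes so the flow can be inspected on a machine without CXL hardware.

```shell
# Hypothetical helper: stage a CXL region as hotpluggable system RAM via
# the two-stage sysram_region -> dax_kmem_region binding described above.
# Set DRY_RUN=1 to print the sysfs writes instead of performing them.
stage_sysram() {
    region="$1" online_type="$2"
    drv=/sys/bus/cxl/drivers
    dev=/sys/bus/cxl/devices

    do_write() {
        if [ "${DRY_RUN:-0}" = 1 ]; then
            printf 'echo %s > %s\n' "$1" "$2"
        else
            echo "$1" > "$2"
        fi
    }

    # 1) Bind the region to the sysram_region driver
    do_write "$region" "$drv/cxl_sysram_region/bind"
    # 2) Configure the memory online policy
    do_write "$online_type" "$dev/sysram_${region}/online_type"
    # 3) Bind the sysram_region to the dax_kmem_region driver
    do_write "sysram_${region}" "$drv/cxl_dax_kmem_region/bind"
}

# Prints the three sysfs writes without touching the system:
DRY_RUN=1 stage_sysram region0 online_movable
```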
Add a new cxl_devdax_region driver that probes CXL regions in device dax
mode and creates dax_region devices. This allows explicit binding to the
device_dax dax driver instead of the kmem driver.

Export to_cxl_region() so it can be used by the driver.

Signed-off-by: Gregory Price <gourry@gourry.net>
---
 drivers/cxl/core/core.h       |  2 ++
 drivers/cxl/core/dax_region.c | 16 ++++++++++++++++
 drivers/cxl/core/region.c     | 21 +++++++++++++++++----
 drivers/cxl/cxl.h             |  1 +
 4 files changed, 36 insertions(+), 4 deletions(-)

diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
index 217dd708a2a6..ea4df8abc2ad 100644
--- a/drivers/cxl/core/core.h
+++ b/drivers/cxl/core/core.h
@@ -46,6 +46,8 @@ u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, const struct cxl_memdev *cxlmd,
 int devm_cxl_add_dax_region(struct cxl_region *cxlr, enum dax_driver_type);
 int devm_cxl_add_pmem_region(struct cxl_region *cxlr);
 
+extern struct cxl_driver cxl_devdax_region_driver;
+
 #else
 static inline u64 cxl_dpa_to_hpa(struct cxl_region *cxlr,
 				 const struct cxl_memdev *cxlmd, u64 dpa)
diff --git a/drivers/cxl/core/dax_region.c b/drivers/cxl/core/dax_region.c
index 0602db5f7248..391d51e5ec37 100644
--- a/drivers/cxl/core/dax_region.c
+++ b/drivers/cxl/core/dax_region.c
@@ -111,3 +111,19 @@ int devm_cxl_add_dax_region(struct cxl_region *cxlr,
 	put_device(dev);
 	return rc;
 }
+
+static int cxl_devdax_region_driver_probe(struct device *dev)
+{
+	struct cxl_region *cxlr = to_cxl_region(dev);
+
+	if (cxlr->mode != CXL_PARTMODE_RAM)
+		return -ENODEV;
+
+	return devm_cxl_add_dax_region(cxlr, DAXDRV_DEVICE_TYPE);
+}
+
+struct cxl_driver cxl_devdax_region_driver = {
+	.name = "cxl_devdax_region",
+	.probe = cxl_devdax_region_driver_probe,
+	.id = CXL_DEVICE_REGION,
+};
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index 61ec939c1462..6200ca1cc2dd 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -39,8 +39,6 @@
  */
 static nodemask_t nodemask_region_seen = NODE_MASK_NONE;
 
-static struct cxl_region *to_cxl_region(struct device *dev);
-
 #define __ACCESS_ATTR_RO(_level, _name) { \
 	.attr = { .name = __stringify(_name), .mode = 0444 }, \
 	.show = _name##_access##_level##_show, \
@@ -2430,7 +2428,7 @@ bool is_cxl_region(struct device *dev)
 }
 EXPORT_SYMBOL_NS_GPL(is_cxl_region, "CXL");
 
-static struct cxl_region *to_cxl_region(struct device *dev)
+struct cxl_region *to_cxl_region(struct device *dev)
 {
 	if (dev_WARN_ONCE(dev, dev->type != &cxl_region_type,
 			  "not a cxl_region device\n"))
@@ -3726,11 +3724,26 @@ static struct cxl_driver cxl_region_driver = {
 
 int cxl_region_init(void)
 {
-	return cxl_driver_register(&cxl_region_driver);
+	int rc;
+
+	rc = cxl_driver_register(&cxl_region_driver);
+	if (rc)
+		return rc;
+
+	rc = cxl_driver_register(&cxl_devdax_region_driver);
+	if (rc)
+		goto err_dax;
+
+	return 0;
+
+err_dax:
+	cxl_driver_unregister(&cxl_region_driver);
+	return rc;
 }
 
 void cxl_region_exit(void)
 {
+	cxl_driver_unregister(&cxl_devdax_region_driver);
 	cxl_driver_unregister(&cxl_region_driver);
 }
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index c06a239c0008..674d5f870c70 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -859,6 +859,7 @@ int cxl_dvsec_rr_decode(struct cxl_dev_state *cxlds,
 			struct cxl_endpoint_dvsec_info *info);
 
 bool is_cxl_region(struct device *dev);
+struct cxl_region *to_cxl_region(struct device *dev);
 
 extern const struct bus_type cxl_bus_type;
--
2.52.0
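The error handling in cxl_region_init() above is the usual register-in-order, unwind-in-reverse pattern. A stand-alone sketch of that shape, with toy stub drivers (register_a()/register_b() are illustrative stand-ins, not the CXL API), is:

```c
#include <stdbool.h>

/* Toy stand-ins for cxl_driver_register()/cxl_driver_unregister() to
 * illustrate the LIFO unwind pattern used in cxl_region_init().
 * fail_second lets a caller simulate the second registration failing. */
static bool reg_a, reg_b, fail_second;

static int register_a(void) { reg_a = true; return 0; }
static void unregister_a(void) { reg_a = false; }
static int register_b(void) { if (fail_second) return -1; reg_b = true; return 0; }
static void unregister_b(void) { reg_b = false; }

static void exit_drivers(void)
{
	/* teardown runs in reverse order of registration */
	unregister_b();
	unregister_a();
}

static int init_drivers(void)
{
	int rc;

	rc = register_a();
	if (rc)
		return rc;

	rc = register_b();
	if (rc)
		goto err_b;

	return 0;

err_b:
	/* unwind only what was registered before the failure */
	unregister_a();
	return rc;
}
```

The key property is that a failed init leaves nothing registered, and exit_drivers() mirrors init_drivers() in reverse, exactly as cxl_region_exit() mirrors cxl_region_init().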
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:04:40 -0500", "thread_id": "20260202175711.000021d4@huawei.com.mbox.gz" }
CXL regions may wish not to auto-configure their memory as dax kmem, but
the current plumbing defaults all cxl-created dax devices to the kmem
driver. This exposes them to hotplug policy, even if the user intends to
use the memory as a dax device.

Add plumbing to allow CXL drivers to select whether a DAX region should
default to kmem (DAXDRV_KMEM_TYPE) or device (DAXDRV_DEVICE_TYPE). Add a
'dax_driver' field to struct cxl_dax_region and update
devm_cxl_add_dax_region() to take a dax_driver_type parameter.

In drivers/dax/cxl.c, the IORESOURCE_DAX_KMEM flag used by dax driver
matching code is now set conditionally based on dax_region->dax_driver.

Export `enum dax_driver_type` in linux/dax.h for use in the cxl driver.

All current callers pass DAXDRV_KMEM_TYPE for backward compatibility.

Cc: John Groves <john@jagalactic.com>
Signed-off-by: Gregory Price <gourry@gourry.net>
---
 drivers/cxl/core/core.h   | 1 +
 drivers/cxl/core/region.c | 6 ++++--
 drivers/cxl/cxl.h         | 2 ++
 drivers/dax/bus.h         | 6 +-----
 drivers/dax/cxl.c         | 6 +++++-
 include/linux/dax.h       | 5 +++++
 6 files changed, 18 insertions(+), 8 deletions(-)

diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
index 1fb66132b777..dd987ef2def5 100644
--- a/drivers/cxl/core/core.h
+++ b/drivers/cxl/core/core.h
@@ -6,6 +6,7 @@
 
 #include <cxl/mailbox.h>
 #include <linux/rwsem.h>
+#include <linux/dax.h>
 
 extern const struct device_type cxl_nvdimm_bridge_type;
 extern const struct device_type cxl_nvdimm_type;
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index eef5d5fe3f95..e4097c464ed3 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -3450,7 +3450,8 @@ static void cxlr_dax_unregister(void *_cxlr_dax)
 	device_unregister(&cxlr_dax->dev);
 }
 
-static int devm_cxl_add_dax_region(struct cxl_region *cxlr)
+static int devm_cxl_add_dax_region(struct cxl_region *cxlr,
+				   enum dax_driver_type dax_driver)
 {
 	struct cxl_dax_region *cxlr_dax;
 	struct device *dev;
@@ -3461,6 +3462,7 @@ static int devm_cxl_add_dax_region(struct cxl_region *cxlr)
 		return PTR_ERR(cxlr_dax);
 
 	cxlr_dax->online_type = mhp_get_default_online_type();
+	cxlr_dax->dax_driver = dax_driver;
 	dev = &cxlr_dax->dev;
 	rc = dev_set_name(dev, "dax_region%d", cxlr->id);
 	if (rc)
@@ -3994,7 +3996,7 @@ static int cxl_region_probe(struct device *dev)
 				      p->res->start, p->res->end, cxlr,
 				      is_system_ram) > 0)
 			return 0;
-		return devm_cxl_add_dax_region(cxlr);
+		return devm_cxl_add_dax_region(cxlr, DAXDRV_KMEM_TYPE);
 	default:
 		dev_dbg(&cxlr->dev, "unsupported region mode: %d\n",
 			cxlr->mode);
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index 07d57d13f4c7..c06a239c0008 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -12,6 +12,7 @@
 #include <linux/node.h>
 #include <linux/io.h>
 #include <linux/range.h>
+#include <linux/dax.h>
 
 extern const struct nvdimm_security_ops *cxl_security_ops;
 
@@ -592,6 +593,7 @@ struct cxl_dax_region {
 	struct cxl_region *cxlr;
 	struct range hpa_range;
 	int online_type; /* MMOP_ value for kmem driver */
+	enum dax_driver_type dax_driver;
 };
 
 /**
diff --git a/drivers/dax/bus.h b/drivers/dax/bus.h
index 4ac92a4edfe7..9144593b4029 100644
--- a/drivers/dax/bus.h
+++ b/drivers/dax/bus.h
@@ -2,6 +2,7 @@
 /* Copyright(c) 2016 - 2018 Intel Corporation. All rights reserved. */
 #ifndef __DAX_BUS_H__
 #define __DAX_BUS_H__
+#include <linux/dax.h>
 #include <linux/device.h>
 #include <linux/range.h>
 
@@ -29,11 +30,6 @@ struct dev_dax_data {
 
 struct dev_dax *devm_create_dev_dax(struct dev_dax_data *data);
 
-enum dax_driver_type {
-	DAXDRV_KMEM_TYPE,
-	DAXDRV_DEVICE_TYPE,
-};
-
 struct dax_device_driver {
 	struct device_driver drv;
 	struct list_head ids;
diff --git a/drivers/dax/cxl.c b/drivers/dax/cxl.c
index 856a0cd24f3b..b13ecc2f9806 100644
--- a/drivers/dax/cxl.c
+++ b/drivers/dax/cxl.c
@@ -11,14 +11,18 @@ static int cxl_dax_region_probe(struct device *dev)
 	struct cxl_dax_region *cxlr_dax = to_cxl_dax_region(dev);
 	int nid = phys_to_target_node(cxlr_dax->hpa_range.start);
 	struct cxl_region *cxlr = cxlr_dax->cxlr;
+	unsigned long flags = 0;
 	struct dax_region *dax_region;
 	struct dev_dax_data data;
 
+	if (cxlr_dax->dax_driver == DAXDRV_KMEM_TYPE)
+		flags |= IORESOURCE_DAX_KMEM;
+
 	if (nid == NUMA_NO_NODE)
 		nid = memory_add_physaddr_to_nid(cxlr_dax->hpa_range.start);
 
 	dax_region = alloc_dax_region(dev, cxlr->id, &cxlr_dax->hpa_range, nid,
-				      PMD_SIZE, IORESOURCE_DAX_KMEM);
+				      PMD_SIZE, flags);
 	if (!dax_region)
 		return -ENOMEM;
diff --git a/include/linux/dax.h b/include/linux/dax.h
index bf103f317cac..e62f92d0ace1 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -19,6 +19,11 @@ enum dax_access_mode {
 	DAX_RECOVERY_WRITE,
 };
 
+enum dax_driver_type {
+	DAXDRV_KMEM_TYPE,
+	DAXDRV_DEVICE_TYPE,
+};
+
 struct dax_operations {
 	/*
 	 * direct_access: translate a device-relative
--
2.52.0
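The flag selection this patch adds to cxl_dax_region_probe() is small enough to exercise in isolation. The sketch below copies the conditional into a standalone function with stand-in values (the IORESOURCE_DAX_KMEM value here is illustrative, not the kernel's actual resource flag):

```c
/* Mirrors the enum this patch moves into include/linux/dax.h. */
enum dax_driver_type {
	DAXDRV_KMEM_TYPE,
	DAXDRV_DEVICE_TYPE,
};

/* Stand-in value for illustration; the kernel defines its own bit. */
#define IORESOURCE_DAX_KMEM (1UL << 0)

/* Standalone copy of the flag selection done in cxl_dax_region_probe():
 * only kmem-mode regions get the IORESOURCE_DAX_KMEM flag that the dax
 * driver matching code keys off of. */
static unsigned long dax_region_flags(enum dax_driver_type drv)
{
	unsigned long flags = 0;

	if (drv == DAXDRV_KMEM_TYPE)
		flags |= IORESOURCE_DAX_KMEM;
	return flags;
}
```

With DAXDRV_DEVICE_TYPE the flags stay zero, so the region falls through to device_dax matching instead of kmem.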
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:04:37 -0500", "thread_id": "20260202175711.000021d4@huawei.com.mbox.gz" }
Explain the binding process for sysram and daxdev regions which are
explicit about which dax driver to use during region creation.

Cc: Jonathan Corbet <corbet@lwn.net>
Signed-off-by: Gregory Price <gourry@gourry.net>
---
 .../driver-api/cxl/linux/cxl-driver.rst       | 43 +++++++++++++++++++
 .../driver-api/cxl/linux/dax-driver.rst       | 29 +++++++++++++
 2 files changed, 72 insertions(+)

diff --git a/Documentation/driver-api/cxl/linux/cxl-driver.rst b/Documentation/driver-api/cxl/linux/cxl-driver.rst
index dd6dd17dc536..1f857345e896 100644
--- a/Documentation/driver-api/cxl/linux/cxl-driver.rst
+++ b/Documentation/driver-api/cxl/linux/cxl-driver.rst
@@ -445,6 +445,49 @@ for more details. ::
 
   dax0.0  devtype  modalias  uevent
   dax_region  driver  subsystem
 
+DAX regions are created when a CXL RAM region is bound to one of the
+following drivers:
+
+* :code:`cxl_devdax_region` - Creates a dax_region for device_dax mode.
+  The resulting DAX device provides direct userspace access via
+  :code:`/dev/daxN.Y`.
+
+* :code:`cxl_dax_kmem_region` - Creates a dax_region for kmem mode via a
+  sysram_region intermediate device. See `Sysram Region`_ below.
+
+Sysram Region
+~~~~~~~~~~~~~
+A `Sysram Region` is an intermediate device between a CXL `Memory Region`
+and a `DAX Region` for kmem mode. It is created when a CXL RAM region is
+bound to the :code:`cxl_sysram_region` driver.
+
+The sysram_region device provides an interposition point where users can
+configure memory hotplug policy before the underlying dax_region is
+created and memory is hotplugged to the system.
+
+The device hierarchy for kmem mode is::
+
+  regionX -> sysram_regionX -> dax_regionX -> daxX.Y
+
+The sysram_region exposes an :code:`online_type` attribute that controls
+how memory will be onlined when the dax_kmem driver binds:
+
+* :code:`invalid` - Not configured (default). Blocks driver binding.
+* :code:`offline` - Memory will not be onlined automatically.
+* :code:`online` - Memory will be onlined in ZONE_NORMAL.
+* :code:`online_movable` - Memory will be onlined in ZONE_MOVABLE.
+
+Example two-stage binding process::
+
+  # Bind region to sysram_region driver
+  echo region0 > /sys/bus/cxl/drivers/cxl_sysram_region/bind
+
+  # Configure memory online type
+  echo online_movable > /sys/bus/cxl/devices/sysram_region0/online_type
+
+  # Bind sysram_region to dax_kmem_region driver
+  echo sysram_region0 > /sys/bus/cxl/drivers/cxl_dax_kmem_region/bind
+
 Mailbox Interfaces
 ------------------
 A mailbox command interface for each device is exposed in ::
diff --git a/Documentation/driver-api/cxl/linux/dax-driver.rst b/Documentation/driver-api/cxl/linux/dax-driver.rst
index 10d953a2167b..2b8e21736292 100644
--- a/Documentation/driver-api/cxl/linux/dax-driver.rst
+++ b/Documentation/driver-api/cxl/linux/dax-driver.rst
@@ -17,6 +17,35 @@ The DAX subsystem exposes this ability through the `cxl_dax_region`
 driver. A `dax_region` provides the translation between a CXL
 `memory_region` and a `DAX Device`.
 
+CXL DAX Region Drivers
+======================
+CXL provides multiple drivers for creating DAX regions, each suited for
+different use cases:
+
+cxl_devdax_region
+-----------------
+The :code:`cxl_devdax_region` driver creates a dax_region configured for
+device_dax mode. When a CXL RAM region is bound to this driver, the
+resulting DAX device provides direct userspace access via
+:code:`/dev/daxN.Y`.
+
+Device hierarchy::
+
+  regionX -> dax_regionX -> daxX.Y
+
+This is the simplest path for applications that want to manage CXL memory
+directly from userspace.
+
+cxl_dax_kmem_region
+-------------------
+For kmem mode, CXL provides a two-stage binding process that allows users
+to configure memory hotplug policy before memory is added to the system.
+
+The :code:`cxl_dax_kmem_region` driver then binds a sysram_region
+device and creates a dax_region configured for kmem mode.
+
+The :code:`online_type` policy will be passed from sysram_region to
+the dax kmem driver for use when hotplugging the memory.
+
 DAX Device
 ==========
 A `DAX Device` is a file-like interface exposed in :code:`/dev/daxN.Y`. A
--
2.52.0
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:04:42 -0500", "thread_id": "20260202175711.000021d4@huawei.com.mbox.gz" }
In the current kmem driver binding process, the only way for users to
define hotplug policy is via a build-time option, or by not onlining
memory by default and setting each individual memory block online after
hotplug occurs.

We can solve this with a configuration step between region-probe and
dax-probe.

Add the infrastructure for a two-stage driver binding for kmem-mode dax
regions. The cxl_dax_kmem_region driver probes cxl_sysram_region devices
and creates cxl_dax_region with dax_driver=kmem. This creates an
interposition step where users can configure policy.

Device hierarchy:

  region0 -> sysram_region0 -> dax_region0 -> dax0.0

The sysram_region device exposes a sysfs 'online_type' attribute that
allows users to configure the memory online type before the underlying
dax_region is created and memory is hotplugged.

sysram_region0/online_type:
  invalid:        not configured, blocks probe
  offline:        memory will not be onlined automatically
  online:         memory will be onlined in ZONE_NORMAL
  online_movable: memory will be onlined in ZONE_MOVABLE

The device initializes with online_type=invalid which prevents the
cxl_dax_kmem_region driver from binding until the user explicitly
configures a valid online_type.

This enables a two-step binding process:

  echo region0 > cxl_sysram_region/bind
  echo online_movable > sysram_region0/online_type
  echo sysram_region0 > cxl_dax_kmem_region/bind

Signed-off-by: Gregory Price <gourry@gourry.net>
---
 Documentation/ABI/testing/sysfs-bus-cxl |  21 +++
 drivers/cxl/core/Makefile               |   1 +
 drivers/cxl/core/core.h                 |   6 +
 drivers/cxl/core/dax_region.c           |  50 +++++++
 drivers/cxl/core/port.c                 |   2 +
 drivers/cxl/core/region.c               |  14 ++
 drivers/cxl/core/sysram_region.c        | 180 ++++++++++++++++++++++++
 drivers/cxl/cxl.h                       |  25 ++++
 8 files changed, 299 insertions(+)
 create mode 100644 drivers/cxl/core/sysram_region.c

diff --git a/Documentation/ABI/testing/sysfs-bus-cxl b/Documentation/ABI/testing/sysfs-bus-cxl
index c80a1b5a03db..a051cb86bdfc 100644
--- a/Documentation/ABI/testing/sysfs-bus-cxl
+++ b/Documentation/ABI/testing/sysfs-bus-cxl
@@ -624,3 +624,24 @@ Description:
 		The count is persistent across power loss and wraps back to 0
 		upon overflow. If this file is not present, the device does
 		not have the necessary support for dirty tracking.
+
+
+What:		/sys/bus/cxl/devices/sysram_regionZ/online_type
+Date:		January, 2026
+KernelVersion:	v7.1
+Contact:	linux-cxl@vger.kernel.org
+Description:
+		(RW) This attribute allows users to configure the memory online
+		type before the underlying dax_region engages in hotplug.
+
+		Valid values:
+		'invalid': Not configured (default). Blocks probe.
+		'offline': Memory will not be onlined automatically.
+		'online' : Memory will be onlined in ZONE_NORMAL.
+		'online_movable': Memory will be onlined in ZONE_MOVABLE.
+
+		The device initializes with online_type='invalid' which
+		prevents the cxl_dax_kmem_region driver from binding until the
+		user explicitly configures a valid online_type. This enables a
+		two-step binding process that gives users control over memory
+		hotplug policy before memory is added to the system.
diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile
index 36f284d7c500..faf662c7d88b 100644
--- a/drivers/cxl/core/Makefile
+++ b/drivers/cxl/core/Makefile
@@ -18,6 +18,7 @@ cxl_core-y += ras.o
 cxl_core-$(CONFIG_TRACING) += trace.o
 cxl_core-$(CONFIG_CXL_REGION) += region.o
 cxl_core-$(CONFIG_CXL_REGION) += dax_region.o
+cxl_core-$(CONFIG_CXL_REGION) += sysram_region.o
 cxl_core-$(CONFIG_CXL_REGION) += pmem_region.o
 cxl_core-$(CONFIG_CXL_MCE) += mce.o
 cxl_core-$(CONFIG_CXL_FEATURES) += features.o
diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
index ea4df8abc2ad..04b32015e9b1 100644
--- a/drivers/cxl/core/core.h
+++ b/drivers/cxl/core/core.h
@@ -26,6 +26,7 @@ extern struct device_attribute dev_attr_delete_region;
 extern struct device_attribute dev_attr_region;
 extern const struct device_type cxl_pmem_region_type;
 extern const struct device_type cxl_dax_region_type;
+extern const struct device_type cxl_sysram_region_type;
 extern const struct device_type cxl_region_type;
 
 int cxl_decoder_detach(struct cxl_region *cxlr,
@@ -37,6 +38,7 @@ int cxl_decoder_detach(struct cxl_region *cxlr,
 #define SET_CXL_REGION_ATTR(x) (&dev_attr_##x.attr),
 #define CXL_PMEM_REGION_TYPE(x) (&cxl_pmem_region_type)
 #define CXL_DAX_REGION_TYPE(x) (&cxl_dax_region_type)
+#define CXL_SYSRAM_REGION_TYPE(x) (&cxl_sysram_region_type)
 int cxl_region_init(void);
 void cxl_region_exit(void);
 int cxl_get_poison_by_endpoint(struct cxl_port *port);
@@ -44,9 +46,12 @@ struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa);
 u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, const struct cxl_memdev *cxlmd,
 		   u64 dpa);
 int devm_cxl_add_dax_region(struct cxl_region *cxlr, enum dax_driver_type);
+int devm_cxl_add_sysram_region(struct cxl_region *cxlr);
 int devm_cxl_add_pmem_region(struct cxl_region *cxlr);
 
 extern struct cxl_driver cxl_devdax_region_driver;
+extern struct cxl_driver cxl_dax_kmem_region_driver;
+extern struct cxl_driver cxl_sysram_region_driver;
 
 #else
 static inline u64 cxl_dpa_to_hpa(struct cxl_region *cxlr,
@@ -81,6 +86,7 @@ static inline void cxl_region_exit(void)
 #define SET_CXL_REGION_ATTR(x)
 #define CXL_PMEM_REGION_TYPE(x) NULL
 #define CXL_DAX_REGION_TYPE(x) NULL
+#define CXL_SYSRAM_REGION_TYPE(x) NULL
 #endif
 
 struct cxl_send_command;
diff --git a/drivers/cxl/core/dax_region.c b/drivers/cxl/core/dax_region.c
index 391d51e5ec37..a379f5b85e3d 100644
--- a/drivers/cxl/core/dax_region.c
+++ b/drivers/cxl/core/dax_region.c
@@ -127,3 +127,53 @@ struct cxl_driver cxl_devdax_region_driver = {
 	.probe = cxl_devdax_region_driver_probe,
 	.id = CXL_DEVICE_REGION,
 };
+
+static int cxl_dax_kmem_region_driver_probe(struct device *dev)
+{
+	struct cxl_sysram_region *cxlr_sysram = to_cxl_sysram_region(dev);
+	struct cxl_dax_region *cxlr_dax;
+	struct cxl_region *cxlr;
+	int rc;
+
+	if (!cxlr_sysram)
+		return -ENODEV;
+
+	/* Require explicit online_type configuration before binding */
+	if (cxlr_sysram->online_type == -1)
+		return -ENODEV;
+
+	cxlr = cxlr_sysram->cxlr;
+
+	cxlr_dax = cxl_dax_region_alloc(cxlr);
+	if (IS_ERR(cxlr_dax))
+		return PTR_ERR(cxlr_dax);
+
+	/* Inherit online_type from parent sysram_region */
+	cxlr_dax->online_type = cxlr_sysram->online_type;
+	cxlr_dax->dax_driver = DAXDRV_KMEM_TYPE;
+
+	/* Parent is the sysram_region device */
+	cxlr_dax->dev.parent = dev;
+
+	rc = dev_set_name(&cxlr_dax->dev, "dax_region%d", cxlr->id);
+	if (rc)
+		goto err;
+
+	rc = device_add(&cxlr_dax->dev);
+	if (rc)
+		goto err;
+
+	dev_dbg(dev, "%s: register %s\n", dev_name(dev),
+		dev_name(&cxlr_dax->dev));
+
+	return devm_add_action_or_reset(dev, cxlr_dax_unregister, cxlr_dax);
+err:
+	put_device(&cxlr_dax->dev);
+	return rc;
+}
+
+struct cxl_driver cxl_dax_kmem_region_driver = {
+	.name = "cxl_dax_kmem_region",
+	.probe = cxl_dax_kmem_region_driver_probe,
+	.id = CXL_DEVICE_SYSRAM_REGION,
+};
diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c
index 3310dbfae9d6..dc7262a5efd6 100644
--- a/drivers/cxl/core/port.c
+++ b/drivers/cxl/core/port.c
@@ -66,6 +66,8 @@ static int cxl_device_id(const struct device *dev)
 		return CXL_DEVICE_PMEM_REGION;
 	if (dev->type == CXL_DAX_REGION_TYPE())
 		return CXL_DEVICE_DAX_REGION;
+	if (dev->type == CXL_SYSRAM_REGION_TYPE())
+		return CXL_DEVICE_SYSRAM_REGION;
 	if (is_cxl_port(dev)) {
 		if (is_cxl_root(to_cxl_port(dev)))
 			return CXL_DEVICE_ROOT;
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index 6200ca1cc2dd..8bef91dc726c 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -3734,8 +3734,20 @@ int cxl_region_init(void)
 	if (rc)
 		goto err_dax;
 
+	rc = cxl_driver_register(&cxl_sysram_region_driver);
+	if (rc)
+		goto err_sysram;
+
+	rc = cxl_driver_register(&cxl_dax_kmem_region_driver);
+	if (rc)
+		goto err_dax_kmem;
+
 	return 0;
 
+err_dax_kmem:
+	cxl_driver_unregister(&cxl_sysram_region_driver);
+err_sysram:
+	cxl_driver_unregister(&cxl_devdax_region_driver);
 err_dax:
 	cxl_driver_unregister(&cxl_region_driver);
 	return rc;
@@ -3743,6 +3755,8 @@ int cxl_region_init(void)
 
 void cxl_region_exit(void)
 {
+	cxl_driver_unregister(&cxl_dax_kmem_region_driver);
+	cxl_driver_unregister(&cxl_sysram_region_driver);
 	cxl_driver_unregister(&cxl_devdax_region_driver);
 	cxl_driver_unregister(&cxl_region_driver);
 }
diff --git a/drivers/cxl/core/sysram_region.c b/drivers/cxl/core/sysram_region.c
new file mode 100644
index 000000000000..5665db238d0f
--- /dev/null
+++ b/drivers/cxl/core/sysram_region.c
@@ -0,0 +1,180 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2026 Meta Platforms, Inc. All rights reserved. */
+/*
+ * CXL Sysram Region - Intermediate device for kmem hotplug configuration
+ *
+ * This provides an intermediate device between cxl_region and cxl_dax_region
+ * that allows users to configure memory hotplug parameters (like online_type)
+ * before the underlying dax_region is created and memory is hotplugged.
+ */ + +#include <linux/memory_hotplug.h> +#include <linux/device.h> +#include <linux/slab.h> +#include <cxlmem.h> +#include <cxl.h> +#include "core.h" + +static void cxl_sysram_region_release(struct device *dev) +{ + struct cxl_sysram_region *cxlr_sysram = to_cxl_sysram_region(dev); + + kfree(cxlr_sysram); +} + +static ssize_t online_type_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct cxl_sysram_region *cxlr_sysram = to_cxl_sysram_region(dev); + + switch (cxlr_sysram->online_type) { + case MMOP_OFFLINE: + return sysfs_emit(buf, "offline\n"); + case MMOP_ONLINE: + return sysfs_emit(buf, "online\n"); + case MMOP_ONLINE_MOVABLE: + return sysfs_emit(buf, "online_movable\n"); + default: + return sysfs_emit(buf, "invalid\n"); + } +} + +static ssize_t online_type_store(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t len) +{ + struct cxl_sysram_region *cxlr_sysram = to_cxl_sysram_region(dev); + + if (sysfs_streq(buf, "offline")) + cxlr_sysram->online_type = MMOP_OFFLINE; + else if (sysfs_streq(buf, "online")) + cxlr_sysram->online_type = MMOP_ONLINE; + else if (sysfs_streq(buf, "online_movable")) + cxlr_sysram->online_type = MMOP_ONLINE_MOVABLE; + else + return -EINVAL; + + return len; +} + +static DEVICE_ATTR_RW(online_type); + +static struct attribute *cxl_sysram_region_attrs[] = { + &dev_attr_online_type.attr, + NULL, +}; + +static const struct attribute_group cxl_sysram_region_attribute_group = { + .attrs = cxl_sysram_region_attrs, +}; + +static const struct attribute_group *cxl_sysram_region_attribute_groups[] = { + &cxl_base_attribute_group, + &cxl_sysram_region_attribute_group, + NULL, +}; + +const struct device_type cxl_sysram_region_type = { + .name = "cxl_sysram_region", + .release = cxl_sysram_region_release, + .groups = cxl_sysram_region_attribute_groups, +}; + +static bool is_cxl_sysram_region(struct device *dev) +{ + return dev->type == &cxl_sysram_region_type; +} + +struct cxl_sysram_region 
*to_cxl_sysram_region(struct device *dev) +{ + if (dev_WARN_ONCE(dev, !is_cxl_sysram_region(dev), + "not a cxl_sysram_region device\n")) + return NULL; + return container_of(dev, struct cxl_sysram_region, dev); +} +EXPORT_SYMBOL_NS_GPL(to_cxl_sysram_region, "CXL"); + +static struct lock_class_key cxl_sysram_region_key; + +static struct cxl_sysram_region *cxl_sysram_region_alloc(struct cxl_region *cxlr) +{ + struct cxl_region_params *p = &cxlr->params; + struct cxl_sysram_region *cxlr_sysram; + struct device *dev; + + guard(rwsem_read)(&cxl_rwsem.region); + if (p->state != CXL_CONFIG_COMMIT) + return ERR_PTR(-ENXIO); + + cxlr_sysram = kzalloc(sizeof(*cxlr_sysram), GFP_KERNEL); + if (!cxlr_sysram) + return ERR_PTR(-ENOMEM); + + cxlr_sysram->hpa_range.start = p->res->start; + cxlr_sysram->hpa_range.end = p->res->end; + cxlr_sysram->online_type = -1; /* Require explicit configuration */ + + dev = &cxlr_sysram->dev; + cxlr_sysram->cxlr = cxlr; + device_initialize(dev); + lockdep_set_class(&dev->mutex, &cxl_sysram_region_key); + device_set_pm_not_required(dev); + dev->parent = &cxlr->dev; + dev->bus = &cxl_bus_type; + dev->type = &cxl_sysram_region_type; + + return cxlr_sysram; +} + +static void cxlr_sysram_unregister(void *_cxlr_sysram) +{ + struct cxl_sysram_region *cxlr_sysram = _cxlr_sysram; + + device_unregister(&cxlr_sysram->dev); +} + +int devm_cxl_add_sysram_region(struct cxl_region *cxlr) +{ + struct cxl_sysram_region *cxlr_sysram; + struct device *dev; + int rc; + + cxlr_sysram = cxl_sysram_region_alloc(cxlr); + if (IS_ERR(cxlr_sysram)) + return PTR_ERR(cxlr_sysram); + + dev = &cxlr_sysram->dev; + rc = dev_set_name(dev, "sysram_region%d", cxlr->id); + if (rc) + goto err; + + rc = device_add(dev); + if (rc) + goto err; + + dev_dbg(&cxlr->dev, "%s: register %s\n", dev_name(dev->parent), + dev_name(dev)); + + return devm_add_action_or_reset(&cxlr->dev, cxlr_sysram_unregister, + cxlr_sysram); +err: + put_device(dev); + return rc; +} + +static int 
cxl_sysram_region_driver_probe(struct device *dev) +{ + struct cxl_region *cxlr = to_cxl_region(dev); + + /* Only handle RAM regions */ + if (cxlr->mode != CXL_PARTMODE_RAM) + return -ENODEV; + + return devm_cxl_add_sysram_region(cxlr); +} + +struct cxl_driver cxl_sysram_region_driver = { + .name = "cxl_sysram_region", + .probe = cxl_sysram_region_driver_probe, + .id = CXL_DEVICE_REGION, +}; diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index 674d5f870c70..1544c27e9c89 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -596,6 +596,25 @@ struct cxl_dax_region { enum dax_driver_type dax_driver; }; +/** + * struct cxl_sysram_region - CXL RAM region for system memory hotplug + * @dev: device for this sysram_region + * @cxlr: parent cxl_region + * @hpa_range: Host physical address range for the region + * @online_type: Memory online type (MMOP_* 0-3, or -1 if not configured) + * + * Intermediate device that allows configuration of memory hotplug + * parameters before the underlying dax_region is created. The device + * starts with online_type=-1 which prevents the cxl_dax_kmem_region + * driver from binding until the user explicitly sets online_type. 
+ */ +struct cxl_sysram_region { + struct device dev; + struct cxl_region *cxlr; + struct range hpa_range; + int online_type; +}; + /** * struct cxl_port - logical collection of upstream port devices and * downstream port devices to construct a CXL memory @@ -890,6 +909,7 @@ void cxl_driver_unregister(struct cxl_driver *cxl_drv); #define CXL_DEVICE_PMEM_REGION 7 #define CXL_DEVICE_DAX_REGION 8 #define CXL_DEVICE_PMU 9 +#define CXL_DEVICE_SYSRAM_REGION 10 #define MODULE_ALIAS_CXL(type) MODULE_ALIAS("cxl:t" __stringify(type) "*") #define CXL_MODALIAS_FMT "cxl:t%d" @@ -907,6 +927,7 @@ bool is_cxl_pmem_region(struct device *dev); struct cxl_pmem_region *to_cxl_pmem_region(struct device *dev); int cxl_add_to_region(struct cxl_endpoint_decoder *cxled); struct cxl_dax_region *to_cxl_dax_region(struct device *dev); +struct cxl_sysram_region *to_cxl_sysram_region(struct device *dev); u64 cxl_port_get_spa_cache_alias(struct cxl_port *endpoint, u64 spa); #else static inline bool is_cxl_pmem_region(struct device *dev) @@ -925,6 +946,10 @@ static inline struct cxl_dax_region *to_cxl_dax_region(struct device *dev) { return NULL; } +static inline struct cxl_sysram_region *to_cxl_sysram_region(struct device *dev) +{ + return NULL; +} static inline u64 cxl_port_get_spa_cache_alias(struct cxl_port *endpoint, u64 spa) { -- 2.52.0
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:04:41 -0500", "thread_id": "20260202175711.000021d4@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
Currently, CXL regions that create DAX devices have no mechanism to
select the hotplug online policy for kmem regions at region creation
time. Users must either rely on a build-time default or manually
configure each memory block after hotplug occurs. Additionally, there is
no explicit way to choose between device_dax and dax_kmem modes at
region creation time - regions default to kmem.

This series addresses both issues by:

1. Plumbing an online_type parameter through the memory hotplug path,
   from mm/memory_hotplug through the DAX layer, enabling drivers to
   specify the desired policy (offline, online, online_movable).

2. Adding infrastructure for explicit dax driver selection (kmem vs
   device) when creating CXL DAX regions.

3. Introducing new CXL region drivers that provide a two-stage binding
   process with user-configurable policy between region creation and
   memory hotplug.

The new drivers are:
- cxl_devdax_region: Creates dax_regions that bind to device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions

The sysram_region device exposes an 'online_type' sysfs attribute
allowing users to configure the memory online type before hotplug:

	echo region0 > cxl_sysram_region/bind
	echo online_movable > sysram_region0/online_type
	echo sysram_region0 > cxl_dax_kmem_region/bind

This enables explicit control over both the dax driver mode and the
memory hotplug policy for CXL memory regions.

In the future, with DCD regions, this will also provide a policy step
which dictates how extents will be surfaced and managed (e.g. if the dc
region is bound to the sysram driver, it will surface as system memory,
while the devdax driver will surface extents as new devdax).
Gregory Price (9):
  mm/memory_hotplug: pass online_type to online_memory_block() via arg
  mm/memory_hotplug: add __add_memory_driver_managed() with online_type arg
  dax: plumb online_type from dax_kmem creators to hotplug
  drivers/cxl,dax: add dax driver mode selection for dax regions
  cxl/core/region: move pmem region driver logic into pmem_region
  cxl/core/region: move dax region device logic into dax_region.c
  cxl/core: add cxl_devdax_region driver for explicit userland region binding
  cxl/core: Add dax_kmem_region and sysram_region drivers
  Documentation/driver-api/cxl: add dax and sysram driver documentation

 Documentation/ABI/testing/sysfs-bus-cxl       |  21 ++
 .../driver-api/cxl/linux/cxl-driver.rst       |  43 +++
 .../driver-api/cxl/linux/dax-driver.rst       |  29 ++
 drivers/cxl/core/Makefile                     |   3 +
 drivers/cxl/core/core.h                       |  11 +
 drivers/cxl/core/dax_region.c                 | 179 ++++++++++
 drivers/cxl/core/pmem_region.c                | 191 +++++++++++
 drivers/cxl/core/port.c                       |   2 +
 drivers/cxl/core/region.c                     | 321 ++----------------
 drivers/cxl/core/sysram_region.c              | 180 ++++++++++
 drivers/cxl/cxl.h                             |  29 ++
 drivers/dax/bus.c                             |   3 +
 drivers/dax/bus.h                             |   7 +-
 drivers/dax/cxl.c                             |   7 +-
 drivers/dax/dax-private.h                     |   2 +
 drivers/dax/hmem/hmem.c                       |   2 +
 drivers/dax/kmem.c                            |  13 +-
 drivers/dax/pmem.c                            |   2 +
 include/linux/dax.h                           |   5 +
 include/linux/memory_hotplug.h                |   3 +
 mm/memory_hotplug.c                           |  95 ++++--
 21 files changed, 826 insertions(+), 322 deletions(-)
 create mode 100644 drivers/cxl/core/dax_region.c
 create mode 100644 drivers/cxl/core/pmem_region.c
 create mode 100644 drivers/cxl/core/sysram_region.c

-- 
2.52.0
Annoyingly, my email client has been truncating my titles:

  cxl: explicit DAX driver selection and hotplug policy for CXL regions

~Gregory
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:17:55 -0500", "thread_id": "20260202175711.000021d4@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
On Thu, Jan 29, 2026 at 04:04:33PM -0500, Gregory Price wrote:

Looks like build regression on configs without hotplug MMOP_ defines
and mhp_get_default_online_type() undefined.

Will let this version sit for a bit before spinning a v2.

~Gregory
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Fri, 30 Jan 2026 12:34:33 -0500", "thread_id": "20260202175711.000021d4@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
On 1/29/2026 3:04 PM, Gregory Price wrote:

This technically comes up in the devdax_region driver patch first, but I
noticed it here so this is where I'm putting it:

I like the idea here, but the implementation is all off.

Firstly, devm_cxl_add_sysram_region() is never called outside of
sysram_region_driver::probe(), so I'm not sure how they ever get added
to the system (same with devdax regions).

Second, there's this weird pattern of sub-region (sysram, devdax, etc.)
devices being added inside of the sub-region driver probe. I would
expect the devices are added and then the probe function is called.

What I think should be going on here (and correct me if I'm wrong) is:

1. a cxl_region device is added to the system
2. cxl_region::probe() is called on said device (the one in cxl/core/region.c)
3. Said probe function figures out the device is a dax_region or
   whatever else and creates that type of region device
   (i.e. cxl_region::probe() -> device_add(&cxl_sysram_device))
4. if the device's dax driver type is DAXDRV_DEVICE_TYPE it gets sent
   to the daxdev_region driver
5a. if the device's dax driver type is DAXDRV_KMEM_TYPE it gets sent to
    the sysram_region driver, which holds it until the online_type is set
5b. Once the online_type is set, the device is forwarded to the
    dax_kmem_region driver? Not sure on this part

What seems to be happening is that the cxl_region is added, all of these
region drivers try to bind to it since they all use the same device id
(CXL_DEVICE_REGION), and the correct one is figured out by magic? I'm
somewhat confused at this point :/.

This should be removed from the valid values section since it's not a
valid value to write to the attribute. The mention of the default in the
paragraph below should be enough.

You can use cleanup.h here to remove the goto's (I think).
Following should work:

DEFINE_FREE(cxlr_dax_region_put, struct cxl_dax_region *,
	    if (!IS_ERR_OR_NULL(_T)) put_device(&_T->dev))

static int cxl_dax_kmem_region_driver_probe(struct device *dev)
{
	...
	struct cxl_dax_region *cxlr_dax __free(cxlr_dax_region_put) =
		cxl_dax_region_alloc(cxlr);
	if (IS_ERR(cxlr_dax))
		return PTR_ERR(cxlr_dax);

	...

	rc = dev_set_name(&cxlr_dax->dev, "dax_region%d", cxlr->id);
	if (rc)
		return rc;

	rc = device_add(&cxlr_dax->dev);
	if (rc)
		return rc;

	dev_dbg(dev, "%s: register %s\n", dev_name(dev),
		dev_name(&cxlr_dax->dev));

	return devm_add_action_or_reset(dev, cxlr_dax_unregister,
					no_free_ptr(cxlr_dax));
}

Same thing as above

Thanks,
Ben
{ "author": "\"Cheatham, Benjamin\" <benjamin.cheatham@amd.com>", "date": "Fri, 30 Jan 2026 15:27:12 -0600", "thread_id": "20260202175711.000021d4@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
On Fri, Jan 30, 2026 at 03:27:12PM -0600, Cheatham, Benjamin wrote:

I originally tried doing this with region0/region_driver, but that
design pattern is also confusing - and it creates differently bad
patterns.

	echo region0 > decoder0.0/create_ram_region  -> creates region0

	# Current pattern
	echo region > driver/region/probe        /* auto-region behavior */

	# region_driver attribute pattern
	echo "sysram" > region0/region_driver
	echo region0 > driver/region/probe       /* uses sysram region driver */

https://lore.kernel.org/linux-cxl/20260113202138.3021093-1-gourry@gourry.net/

Ira pointed out that this design makes the "implicit" design of the
driver worse. The user doesn't actually know what driver is being used
under the hood - it just knows something is being used.

This at least makes it explicit which driver is being used - and splits
the use-case logic up into discrete drivers (dax users don't have to
worry about sysram users breaking their stuff).

If it makes more sense, you could swap the ordering of the names:

	echo region0 > region/bind
	echo region0 > region_sysram/bind
	echo region0 > region_daxdev/bind
	echo region0 > region_dax_kmem/bind
	echo region0 > region_pony/bind

---

The underlying issue is that region::probe() is trying to be a
god-function for every possible use case, and hiding the use case behind
an attribute vs a driver is not good. (Also, the default behavior for
region::probe() in an otherwise unconfigured region is required for
backwards compatibility.)

For auto-regions: region_probe() eats it and you get the default behavior.

For non-auto regions: create_x_region generates an un-configured region
and fails to probe until the user commits it and probes it.

auto-regions are evil and should be discouraged.

~Gregory
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Fri, 30 Jan 2026 17:12:50 -0500", "thread_id": "20260202175711.000021d4@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
On 1/30/2026 4:12 PM, Gregory Price wrote:

Ok, that makes sense. I think I just got lost in the sauce while looking at this last week and this explanation helped a lot.

I think this was the source of my misunderstanding. I was trying to understand how it works for auto regions when it's never meant to apply to them.

Sorry if this is a stupid question, but what stops auto regions from binding to the sysram/dax region drivers? They all bind to region devices, so I assume there's something keeping them from binding before the core region driver gets a chance.

Thanks,
Ben
{ "author": "\"Cheatham, Benjamin\" <benjamin.cheatham@amd.com>", "date": "Mon, 2 Feb 2026 11:02:37 -0600", "thread_id": "20260202175711.000021d4@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
On Thu, 29 Jan 2026 16:04:34 -0500 Gregory Price <gourry@gourry.net> wrote:

Trivial comment inline. I don't really care either way. Pushing the policy up to the caller and ensuring it's explicitly constant for all the memory blocks (as opposed to relying on locks) seems sensible to me even without anything else.

Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>

Maybe move the local variable outside the loop to avoid the double call.
{ "author": "Jonathan Cameron <jonathan.cameron@huawei.com>", "date": "Mon, 2 Feb 2026 17:10:29 +0000", "thread_id": "20260202175711.000021d4@huawei.com.mbox.gz" }
On Thu, 29 Jan 2026 16:04:35 -0500 Gregory Price <gourry@gourry.net> wrote:

Hi Gregory,

I think maybe I'd have left the export for the first user outside of memory_hotplug.c. Not particularly important however.

Maybe talk about why a caller of __add_memory_driver_managed() might want the default? Feels like that's for the people who don't... Or is this all a dance to avoid an

	if (special mode)
		__add_memory_driver_managed();
	else
		add_memory_driver_managed();

?

Other comments are mostly about using a named enum. I'm not sure if there is some existing reason why that doesn't work? -Errno pushed through this variable or anything like that? Given online_type values are from an enum anyway, maybe we can name that enum and use it explicitly?

Ah. Fair enough, ignore comment in previous patch. I should have read on...

It's a little odd to add nice kernel-doc formatted documentation when the non __ variant has free form docs. Maybe tidy that up first if we want to go kernel-doc in this file? (I'm in favor, but no idea on general feelings...)

Given that's currently the full set, seems like enum wins out here over an int. This is where using an enum would help the compiler know what is going on and maybe warn if anyone writes something that isn't defined.
{ "author": "Jonathan Cameron <jonathan.cameron@huawei.com>", "date": "Mon, 2 Feb 2026 17:25:24 +0000", "thread_id": "20260202175711.000021d4@huawei.com.mbox.gz" }
On Mon, Feb 02, 2026 at 11:02:37AM -0600, Cheatham, Benjamin wrote:

Auto regions explicitly use the dax_kmem path (all existing code, unchanged) - which auto-plugs into dax/hotplug.

I do get what you're saying that everything binds on a region type, I will look a little closer at this and see if there's something more reasonable we can do.

I think I can update `region/bind` to use the sysram driver with online_type=mhp_default_online_type so you'd end up with effectively the auto-region logic:

cxlcli create-region -m ram ... existing argument set
------
echo region0 > create_ram_region  /* program decoders */
echo region0 > region/bind
/*
 * region_bind():
 *  1) alloc sysram_region object
 *  2) sysram_regionN->online_type = mhp_default_online_type()
 *  3) add device to bus
 *  4) device auto-probes all the way down to dax
 *  5) dax auto-onlines with system default setting
 */
------

and non-auto-region logic (approximation):

cxlcli create-region -m ram --type sysram --online-type=movable
-----
echo region0 > create_ram_region  /* program decoders */
echo region0 > sysram/bind
echo online_movable > sysram_region0/online_type
echo sysram_region0 > dax_kmem/bind
-----

I want to retain the dax_kmem driver because there may be multiple users other than sysram. For example, a compressed memory region wants to utilize dax_kmem, but has its own complex policy (via N_MEMORY_PRIVATE) so it doesn't want to abstract through sysram_region, but it does want to abstract through dax_kmem.

weeeee "software defined memory" weeeee

~Gregory
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Mon, 2 Feb 2026 12:41:31 -0500", "thread_id": "20260202175711.000021d4@huawei.com.mbox.gz" }
On Mon, Feb 02, 2026 at 05:10:29PM +0000, Jonathan Cameron wrote: ack. will update for next version w/ Ben's notes and the build fix. Thanks! ~Gregory
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Mon, 2 Feb 2026 12:46:25 -0500", "thread_id": "20260202175711.000021d4@huawei.com.mbox.gz" }
On Thu, 29 Jan 2026 16:04:37 -0500 Gregory Price <gourry@gourry.net> wrote: LGTM Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
{ "author": "Jonathan Cameron <jonathan.cameron@huawei.com>", "date": "Mon, 2 Feb 2026 17:54:17 +0000", "thread_id": "20260202175711.000021d4@huawei.com.mbox.gz" }
On Thu, 29 Jan 2026 16:04:38 -0500 Gregory Price <gourry@gourry.net> wrote: Needs to answer the question: Why? Minor stuff inline. Maybe sneak in dropping that trailing comma whilst you are moving it. ... Bonus line...
{ "author": "Jonathan Cameron <jonathan.cameron@huawei.com>", "date": "Mon, 2 Feb 2026 17:56:40 +0000", "thread_id": "20260202175711.000021d4@huawei.com.mbox.gz" }
On Thu, 29 Jan 2026 16:04:39 -0500 Gregory Price <gourry@gourry.net> wrote: Likewise. Why?
{ "author": "Jonathan Cameron <jonathan.cameron@huawei.com>", "date": "Mon, 2 Feb 2026 17:57:11 +0000", "thread_id": "20260202175711.000021d4@huawei.com.mbox.gz" }
On Mon, Feb 02, 2026 at 05:25:24PM +0000, Jonathan Cameron wrote:

Less about why they want the default, more about maintaining backward compatibility. In the cxl driver, Ben pointed out something that made me realize we can change `region/bind()` to actually use the new `sysram/bind` path by just adding a one-line `sysram_regionN->online_type = default()`. I can add this detail to the changelog.

I can add a cleanup patch earlier in the series to use the enum, but I don't think this actually enables the compiler to do anything new at the moment. An enum just resolves to an int, and setting `enum thing val = -1` when the enum definition doesn't include -1 doesn't actually fire any errors (at least IIRC - maybe I'm just wrong). Same with function(enum): function(-1) wouldn't fire a compilation error.

It might actually be worth adding `MMOP_NOT_CONFIGURED = -1` so that the cxl-sysram driver can set this explicitly rather than just setting -1 as an implicit version of this - but then why would memory_hotplug.c ever want to expose a NOT_CONFIGURED option lol. So, yeah, the enum looks nicer, but not sure how much it buys us beyond that.

ack. Can add some more cleanups early in the series. I think you still have to sanity check this, but maybe the code looks cleaner, so will do.

~Gregory
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Mon, 2 Feb 2026 13:02:10 -0500", "thread_id": "20260202175711.000021d4@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
On Thu, 29 Jan 2026 16:04:41 -0500 Gregory Price <gourry@gourry.net> wrote:

ZONE_MOVABLE

Trivial stuff. Will mull over this series as a whole... My first instinctive reaction is positive - I'm just wondering where additional drivers fit into this and whether it has the right degree of flexibility. This smells like a loop over an array of drivers is becoming sensible. As below.

Trivial, but don't want a comma on that NULL.

Ah. And there's our reason for an int. Can we just add a MMOP enum value for not configured yet and so let us use it as an enum? Or have a separate bool for that and ignore the online_type until it's set.
{ "author": "Jonathan Cameron <jonathan.cameron@huawei.com>", "date": "Mon, 2 Feb 2026 18:20:15 +0000", "thread_id": "20260202175711.000021d4@huawei.com.mbox.gz" }
lkml
[PATCH 0/2] Correct MT8365's infracfg-nao DT node description as a pure syscon
Introduce a new compatible to the binding and use it in the infracfg-nao node in the mt8365.dtsi to correctly describe the node and prevent probe errors.

Signed-off-by: Nícolas F. R. A. Prado <nfraprado@collabora.com>
---
Nícolas F. R. A. Prado (2):
      dt-bindings: mfd: syscon: Add mediatek,mt8365-infracfg-nao
      arm64: dts: mediatek: mt8365: Describe infracfg-nao as a pure syscon

 Documentation/devicetree/bindings/mfd/syscon.yaml | 1 +
 arch/arm64/boot/dts/mediatek/mt8365.dtsi          | 5 ++---
 2 files changed, 3 insertions(+), 3 deletions(-)
---
base-commit: 37ff6e9a2ce321b7932d3987701757fb4d87b0e6
change-id: 20250502-mt8365-infracfg-nao-compatible-46d4db7f54f7

Best regards,
-- 
Nícolas F. R. A. Prado <nfraprado@collabora.com>
The register space described by DT node of compatible mediatek,mt8365-infracfg-nao exposes a variety of unrelated registers, including registers for controlling bus protection on the MT8365 SoC, which is used by the power domain controller through a syscon. Add this compatible to the syscon binding.

Signed-off-by: Nícolas F. R. A. Prado <nfraprado@collabora.com>
---
 Documentation/devicetree/bindings/mfd/syscon.yaml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/Documentation/devicetree/bindings/mfd/syscon.yaml b/Documentation/devicetree/bindings/mfd/syscon.yaml
index c6bbb19c3e3e2245b4a823df06e7f361da311000..f655ec18cc2d96028d17e19d704b62f6d898fea4 100644
--- a/Documentation/devicetree/bindings/mfd/syscon.yaml
+++ b/Documentation/devicetree/bindings/mfd/syscon.yaml
@@ -190,6 +190,7 @@ properties:
           - mediatek,mt8135-pctl-a-syscfg
           - mediatek,mt8135-pctl-b-syscfg
           - mediatek,mt8173-pctl-a-syscfg
+          - mediatek,mt8365-infracfg-nao
           - mediatek,mt8365-syscfg
           - microchip,lan966x-cpu-syscon
           - microchip,mpfs-sysreg-scb

-- 
2.49.0
{ "author": "=?utf-8?q?N=C3=ADcolas_F=2E_R=2E_A=2E_Prado?= <nfraprado@collabora.com>", "date": "Fri, 02 May 2025 12:43:21 -0400", "thread_id": "20250502-mt8365-infracfg-nao-compatible-v1-0-e40394573f98@collabora.com.mbox.gz" }
lkml
[PATCH 0/2] Correct MT8365's infracfg-nao DT node description as a pure syscon
The infracfg-nao register space at 0x1020e000 has different registers than the infracfg space at 0x10001000, and most importantly, doesn't contain any clock controls. Therefore it shouldn't use the same compatible used for the mt8365 infracfg clocks driver: mediatek,mt8365-infracfg. Since it currently does, probe errors are reported in the kernel logs:

[    0.245959] Failed to register clk ifr_pmic_tmr: -EEXIST
[    0.245998] clk-mt8365 1020e000.infracfg: probe with driver clk-mt8365 failed with error -17

This register space is used only as a syscon for bus control by the power domain controller, so in order to properly describe it and fix the errors, set its compatible to a distinct compatible used exclusively as a syscon, drop the clock-cells, and while at it rename the node to 'syscon' following the naming convention.

Fixes: 6ff945376556 ("arm64: dts: mediatek: Initial mt8365-evk support")
Signed-off-by: Nícolas F. R. A. Prado <nfraprado@collabora.com>
---
 arch/arm64/boot/dts/mediatek/mt8365.dtsi | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/boot/dts/mediatek/mt8365.dtsi b/arch/arm64/boot/dts/mediatek/mt8365.dtsi
index e6d2b3221a3b7a855129258b379ae4bc2fd05449..49ad4dee9c4cf563743dc55d5e0b055cfb69986a 100644
--- a/arch/arm64/boot/dts/mediatek/mt8365.dtsi
+++ b/arch/arm64/boot/dts/mediatek/mt8365.dtsi
@@ -495,10 +495,9 @@ iommu: iommu@10205000 {
 		#iommu-cells = <1>;
 	};
 
-	infracfg_nao: infracfg@1020e000 {
-		compatible = "mediatek,mt8365-infracfg", "syscon";
+	infracfg_nao: syscon@1020e000 {
+		compatible = "mediatek,mt8365-infracfg-nao", "syscon";
 		reg = <0 0x1020e000 0 0x1000>;
-		#clock-cells = <1>;
 	};
 
 	rng: rng@1020f000 {

-- 
2.49.0
{ "author": "=?utf-8?q?N=C3=ADcolas_F=2E_R=2E_A=2E_Prado?= <nfraprado@collabora.com>", "date": "Fri, 02 May 2025 12:43:22 -0400", "thread_id": "20250502-mt8365-infracfg-nao-compatible-v1-0-e40394573f98@collabora.com.mbox.gz" }
lkml
[PATCH 0/2] Correct MT8365's infracfg-nao DT node description as a pure syscon
On 02/05/25 18:43, Nícolas F. R. A. Prado wrote:

Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
{ "author": "AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>", "date": "Tue, 6 May 2025 10:26:48 +0200", "thread_id": "20250502-mt8365-infracfg-nao-compatible-v1-0-e40394573f98@collabora.com.mbox.gz" }
lkml
[PATCH 0/2] Correct MT8365's infracfg-nao DT node description as a pure syscon
On 02/05/25 18:43, Nícolas F. R. A. Prado wrote:

Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
{ "author": "AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>", "date": "Tue, 6 May 2025 10:26:49 +0200", "thread_id": "20250502-mt8365-infracfg-nao-compatible-v1-0-e40394573f98@collabora.com.mbox.gz" }
lkml
[PATCH 0/2] Correct MT8365's infracfg-nao DT node description as a pure syscon
On Fri, May 02, 2025 at 12:43:21PM -0400, Nícolas F. R. A. Prado wrote:

Acked-by: Conor Dooley <conor.dooley@microchip.com>
{ "author": "Conor Dooley <conor@kernel.org>", "date": "Tue, 6 May 2025 17:30:22 +0100", "thread_id": "20250502-mt8365-infracfg-nao-compatible-v1-0-e40394573f98@collabora.com.mbox.gz" }
lkml
[PATCH 0/2] Correct MT8365's infracfg-nao DT node description as a pure syscon
On Fri, 02 May 2025 12:43:21 -0400, Nícolas F. R. A. Prado wrote:

Applied, thanks!

[1/2] dt-bindings: mfd: syscon: Add mediatek,mt8365-infracfg-nao
      commit: cbb005b91726ea1024b6261bc1062bac19f6d059

-- 
Lee Jones [李琼斯]
{ "author": "Lee Jones <lee@kernel.org>", "date": "Tue, 13 May 2025 10:48:51 +0100", "thread_id": "20250502-mt8365-infracfg-nao-compatible-v1-0-e40394573f98@collabora.com.mbox.gz" }
lkml
[PATCH 0/2] Correct MT8365's infracfg-nao DT node description as a pure syscon
On 5/2/25 11:43 AM, Nícolas F. R. A. Prado wrote:

Reviewed-by: David Lechner <dlechner@baylibre.com>

It looks like this never got picked up. I noticed this was a problem in U-Boot because it was registering this as a clock provider. And I sent a similar patch [1] recently that has also not been acted on yet. I prefer this patch since it also fixes the node name to use a standard name. Who should be responsible for actually picking up the patch?

[1]: https://lore.kernel.org/linux-mediatek/20251216-mtk-fix-infracfg_nao-compatibile-v1-1-d339b151ac81@baylibre.com/
{ "author": "David Lechner <dlechner@baylibre.com>", "date": "Mon, 2 Feb 2026 11:20:30 -0600", "thread_id": "20250502-mt8365-infracfg-nao-compatible-v1-0-e40394573f98@collabora.com.mbox.gz" }
lkml
[PATCH 0/3] jbd2/ext4/ocfs2: READ_ONCE for lockless jinode reads
This series adds READ_ONCE() for existing lockless reads of jbd2_inode fields in jbd2 and filesystem callbacks used by ext4 and ocfs2. This is based on Jan's suggestion in the review of the ext4 jinode publication race fix. [1]

[1]: https://lore.kernel.org/all/4jxwogttddiaoqbstlgou5ox6zs27ngjjz5ukrxafm2z5ijxod@so4eqnykiegj/

Thanks,
Li

Li Chen (3):
  jbd2: use READ_ONCE for lockless jinode reads
  ext4: use READ_ONCE for lockless jinode reads
  ocfs2: use READ_ONCE for lockless jinode reads

 fs/ext4/inode.c       |  6 ++++--
 fs/ext4/super.c       | 13 ++++++++-----
 fs/jbd2/commit.c      | 39 ++++++++++++++++++++++++++++++++-------
 fs/jbd2/transaction.c |  2 +-
 fs/ocfs2/journal.c    |  7 +++++--
 5 files changed, 50 insertions(+), 17 deletions(-)

-- 
2.52.0
jbd2_inode fields are updated under journal->j_list_lock, but some paths read them without holding the lock (e.g. fast commit helpers and the ordered truncate fast path). Use READ_ONCE() for these lockless reads to correct the concurrency assumptions.

Suggested-by: Jan Kara <jack@suse.com>
Signed-off-by: Li Chen <me@linux.beauty>
---
 fs/jbd2/commit.c      | 39 ++++++++++++++++++++++++++++++++-------
 fs/jbd2/transaction.c |  2 +-
 2 files changed, 33 insertions(+), 8 deletions(-)

diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c
index 7203d2d2624d..3347d75da2f8 100644
--- a/fs/jbd2/commit.c
+++ b/fs/jbd2/commit.c
@@ -180,7 +180,13 @@ static int journal_wait_on_commit_record(journal_t *journal,
 /* Send all the data buffers related to an inode */
 int jbd2_submit_inode_data(journal_t *journal, struct jbd2_inode *jinode)
 {
-	if (!jinode || !(jinode->i_flags & JI_WRITE_DATA))
+	unsigned long flags;
+
+	if (!jinode)
+		return 0;
+
+	flags = READ_ONCE(jinode->i_flags);
+	if (!(flags & JI_WRITE_DATA))
 		return 0;
 
 	trace_jbd2_submit_inode_data(jinode->i_vfs_inode);
@@ -191,12 +197,30 @@ EXPORT_SYMBOL(jbd2_submit_inode_data);
 
 int jbd2_wait_inode_data(journal_t *journal, struct jbd2_inode *jinode)
 {
-	if (!jinode || !(jinode->i_flags & JI_WAIT_DATA) ||
-	    !jinode->i_vfs_inode || !jinode->i_vfs_inode->i_mapping)
+	struct address_space *mapping;
+	struct inode *inode;
+	unsigned long flags;
+	loff_t start, end;
+
+	if (!jinode)
+		return 0;
+
+	flags = READ_ONCE(jinode->i_flags);
+	if (!(flags & JI_WAIT_DATA))
+		return 0;
+
+	inode = READ_ONCE(jinode->i_vfs_inode);
+	if (!inode)
+		return 0;
+
+	mapping = inode->i_mapping;
+	start = READ_ONCE(jinode->i_dirty_start);
+	end = READ_ONCE(jinode->i_dirty_end);
+
+	if (!mapping)
 		return 0;
 	return filemap_fdatawait_range_keep_errors(
-		jinode->i_vfs_inode->i_mapping, jinode->i_dirty_start,
-		jinode->i_dirty_end);
+		mapping, start, end);
 }
 EXPORT_SYMBOL(jbd2_wait_inode_data);
@@ -240,10 +264,11 @@ static int journal_submit_data_buffers(journal_t *journal,
 int jbd2_journal_finish_inode_data_buffers(struct jbd2_inode *jinode)
 {
 	struct address_space *mapping = jinode->i_vfs_inode->i_mapping;
+	loff_t start = READ_ONCE(jinode->i_dirty_start);
+	loff_t end = READ_ONCE(jinode->i_dirty_end);
 
 	return filemap_fdatawait_range_keep_errors(mapping,
-						   jinode->i_dirty_start,
-						   jinode->i_dirty_end);
+						   start, end);
 }
 
 /*
diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
index dca4b5d8aaaa..302b2090eea7 100644
--- a/fs/jbd2/transaction.c
+++ b/fs/jbd2/transaction.c
@@ -2739,7 +2739,7 @@ int jbd2_journal_begin_ordered_truncate(journal_t *journal,
 	int ret = 0;
 
 	/* This is a quick check to avoid locking if not necessary */
-	if (!jinode->i_transaction)
+	if (!READ_ONCE(jinode->i_transaction))
 		goto out;
 	/* Locks are here just to force reading of recent values, it is
 	 * enough that the transaction was not committing before we started

-- 
2.52.0
{ "author": "Li Chen <me@linux.beauty>", "date": "Fri, 30 Jan 2026 11:12:30 +0800", "thread_id": "jvo5sk46f6cvqmkgetrlybs46kryhxetsvapkmx4tocbdirk3w@ume4qfpsddco.mbox.gz" }
lkml
[PATCH 0/3] jbd2/ext4/ocfs2: READ_ONCE for lockless jinode reads
ext4 journal commit callbacks access jbd2_inode fields such as i_transaction and i_dirty_start/end without holding journal->j_list_lock. Use READ_ONCE() for these reads to correct the concurrency assumptions.

Suggested-by: Jan Kara <jack@suse.com>
Signed-off-by: Li Chen <me@linux.beauty>
---
 fs/ext4/inode.c |  6 ++++--
 fs/ext4/super.c | 13 ++++++++-----
 2 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index d99296d7315f..2d451388e080 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -3033,11 +3033,13 @@ static int ext4_writepages(struct address_space *mapping,
 
 int ext4_normal_submit_inode_data_buffers(struct jbd2_inode *jinode)
 {
+	loff_t dirty_start = READ_ONCE(jinode->i_dirty_start);
+	loff_t dirty_end = READ_ONCE(jinode->i_dirty_end);
 	struct writeback_control wbc = {
 		.sync_mode = WB_SYNC_ALL,
 		.nr_to_write = LONG_MAX,
-		.range_start = jinode->i_dirty_start,
-		.range_end = jinode->i_dirty_end,
+		.range_start = dirty_start,
+		.range_end = dirty_end,
 	};
 	struct mpage_da_data mpd = {
 		.inode = jinode->i_vfs_inode,
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 5cf6c2b54bbb..acb2bc016fd4 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -521,6 +521,7 @@ static bool ext4_journalled_writepage_needs_redirty(struct jbd2_inode *jinode,
 {
 	struct buffer_head *bh, *head;
 	struct journal_head *jh;
+	transaction_t *trans = READ_ONCE(jinode->i_transaction);
 
 	bh = head = folio_buffers(folio);
 	do {
@@ -539,7 +540,7 @@ static bool ext4_journalled_writepage_needs_redirty(struct jbd2_inode *jinode,
 		 */
 		jh = bh2jh(bh);
 		if (buffer_dirty(bh) ||
-		    (jh && (jh->b_transaction != jinode->i_transaction ||
+		    (jh && (jh->b_transaction != trans ||
 			    jh->b_next_transaction)))
 			return true;
 	} while ((bh = bh->b_this_page) != head);
@@ -550,12 +551,14 @@ static bool ext4_journalled_writepage_needs_redirty(struct jbd2_inode *jinode,
 static int ext4_journalled_submit_inode_data_buffers(struct jbd2_inode *jinode)
 {
 	struct address_space *mapping = jinode->i_vfs_inode->i_mapping;
+	loff_t dirty_start = READ_ONCE(jinode->i_dirty_start);
+	loff_t dirty_end = READ_ONCE(jinode->i_dirty_end);
 	struct writeback_control wbc = {
-		.sync_mode = WB_SYNC_ALL,
+		.sync_mode = WB_SYNC_ALL,
 		.nr_to_write = LONG_MAX,
-		.range_start = jinode->i_dirty_start,
-		.range_end = jinode->i_dirty_end,
-	};
+		.range_start = dirty_start,
+		.range_end = dirty_end,
+	};
 	struct folio *folio = NULL;
 	int error;

-- 
2.52.0
{ "author": "Li Chen <me@linux.beauty>", "date": "Fri, 30 Jan 2026 11:12:31 +0800", "thread_id": "jvo5sk46f6cvqmkgetrlybs46kryhxetsvapkmx4tocbdirk3w@ume4qfpsddco.mbox.gz" }
lkml
[PATCH 0/3] jbd2/ext4/ocfs2: READ_ONCE for lockless jinode reads
ocfs2 journal commit callback reads jbd2_inode dirty range fields without holding journal->j_list_lock. Use READ_ONCE() for these reads to correct the concurrency assumptions.

Suggested-by: Jan Kara <jack@suse.com>
Signed-off-by: Li Chen <me@linux.beauty>
---
 fs/ocfs2/journal.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c
index 85239807dec7..7032284cdbd6 100644
--- a/fs/ocfs2/journal.c
+++ b/fs/ocfs2/journal.c
@@ -902,8 +902,11 @@ int ocfs2_journal_alloc(struct ocfs2_super *osb)
 
 static int ocfs2_journal_submit_inode_data_buffers(struct jbd2_inode *jinode)
 {
-	return filemap_fdatawrite_range(jinode->i_vfs_inode->i_mapping,
-			jinode->i_dirty_start, jinode->i_dirty_end);
+	struct address_space *mapping = jinode->i_vfs_inode->i_mapping;
+	loff_t dirty_start = READ_ONCE(jinode->i_dirty_start);
+	loff_t dirty_end = READ_ONCE(jinode->i_dirty_end);
+
+	return filemap_fdatawrite_range(mapping, dirty_start, dirty_end);
 }
 
 int ocfs2_journal_init(struct ocfs2_super *osb, int *dirty)

-- 
2.52.0
{ "author": "Li Chen <me@linux.beauty>", "date": "Fri, 30 Jan 2026 11:12:32 +0800", "thread_id": "jvo5sk46f6cvqmkgetrlybs46kryhxetsvapkmx4tocbdirk3w@ume4qfpsddco.mbox.gz" }
lkml
[PATCH 0/3] jbd2/ext4/ocfs2: READ_ONCE for lockless jinode reads
On Fri, Jan 30, 2026 at 11:12:32AM +0800, Li Chen wrote: I don't think this is the right solution to the problem. If it is, there needs to be much better argumentation in the commit message. As I understand it, jbd2_journal_file_inode() initialises jinode, then adds it to the t_inode_list, then drops the j_list_lock. So the actual problem we need to address is that there's no memory barrier between the store to i_dirty_start and the list_add(). Once that's added, there's no need for a READ_ONCE here. Or have I misunderstood the problem?
{ "author": "Matthew Wilcox <willy@infradead.org>", "date": "Fri, 30 Jan 2026 05:27:59 +0000", "thread_id": "jvo5sk46f6cvqmkgetrlybs46kryhxetsvapkmx4tocbdirk3w@ume4qfpsddco.mbox.gz" }
lkml
[PATCH 0/3] jbd2/ext4/ocfs2: READ_ONCE for lockless jinode reads
Hi Matthew,

> On Fri, Jan 30, 2026 at 11:12:32AM +0800, Li Chen wrote:
> > ocfs2 journal commit callback reads jbd2_inode dirty range fields without
> > holding journal->j_list_lock.
> >
> > Use READ_ONCE() for these reads to correct the concurrency assumptions.
>
> I don't think this is the right solution to the problem. If it is,
> there needs to be much better argumentation in the commit message.
>
> As I understand it, jbd2_journal_file_inode() initialises jinode,
> then adds it to the t_inode_list, then drops the j_list_lock. So the
> actual problem we need to address is that there's no memory barrier
> between the store to i_dirty_start and the list_add(). Once that's
> added, there's no need for a READ_ONCE here.
>
> Or have I misunderstood the problem?

Thanks for the review.

My understanding of your point is that you're worried about a missing "publish" ordering in jbd2_journal_file_inode(): we store jinode->i_dirty_start/end and then list_add() the jinode to t_inode_list, and a core which observes the list entry might miss the prior i_dirty_* stores. Is that the issue you had in mind?

If so, for the normal commit path where the list is walked under journal->j_list_lock (e.g. journal_submit_data_buffers() in fs/jbd2/commit.c), spin_lock()/spin_unlock() should already provide the necessary ordering, since both the i_dirty_* updates and the list_add() happen inside the same critical section.

The ocfs2 case I was aiming at is different: the filesystem callback is invoked after unlocking journal->j_list_lock and may sleep, so it can't hold j_list_lock but it still reads jinode->i_dirty_start/end while other threads update these fields under the lock. Adding a barrier between the stores and list_add() would not address that concurrent update window.

So the intent of READ_ONCE() in ocfs2 is to take a single snapshot of the dirty range values from memory (i.e. to prevent the compiler from reusing a value kept in a register or folding multiple reads). I'm not trying to claim any additional memory ordering from this change.

I'll respin and adjust the commit message accordingly. The updated part will say something along the lines of:

"ocfs2 reads jinode->i_dirty_start/end without journal->j_list_lock (the callback may sleep); these fields are updated under j_list_lock in jbd2. Use READ_ONCE() so the callback takes a single snapshot via actual loads from the variable (i.e. don't let the compiler reuse a value kept in a register or fold multiple reads)."

Does that match your understanding?

Regards,
Li

> > Suggested-by: Jan Kara <jack@suse.com>
> > Signed-off-by: Li Chen <me@linux.beauty>
> > ---
> >  fs/ocfs2/journal.c | 7 +++++--
> >  1 file changed, 5 insertions(+), 2 deletions(-)
> >
> > diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c
> > index 85239807dec7..7032284cdbd6 100644
> > --- a/fs/ocfs2/journal.c
> > +++ b/fs/ocfs2/journal.c
> > @@ -902,8 +902,11 @@ int ocfs2_journal_alloc(struct ocfs2_super *osb)
> >
> >  static int ocfs2_journal_submit_inode_data_buffers(struct jbd2_inode *jinode)
> >  {
> > -	return filemap_fdatawrite_range(jinode->i_vfs_inode->i_mapping,
> > -			jinode->i_dirty_start, jinode->i_dirty_end);
> > +	struct address_space *mapping = jinode->i_vfs_inode->i_mapping;
> > +	loff_t dirty_start = READ_ONCE(jinode->i_dirty_start);
> > +	loff_t dirty_end = READ_ONCE(jinode->i_dirty_end);
> > +
> > +	return filemap_fdatawrite_range(mapping, dirty_start, dirty_end);
> >  }
> >
> >  int ocfs2_journal_init(struct ocfs2_super *osb, int *dirty)
> > --
> > 2.52.0
{ "author": "Li Chen <me@linux.beauty>", "date": "Fri, 30 Jan 2026 20:26:40 +0800", "thread_id": "jvo5sk46f6cvqmkgetrlybs46kryhxetsvapkmx4tocbdirk3w@ume4qfpsddco.mbox.gz" }
lkml
[PATCH 0/3] jbd2/ext4/ocfs2: READ_ONCE for lockless jinode reads
This series adds READ_ONCE() for existing lockless reads of jbd2_inode fields in jbd2 and filesystem callbacks used by ext4 and ocfs2. This is based on Jan's suggestion in the review of the ext4 jinode publication race fix. [1] [1]: https://lore.kernel.org/all/4jxwogttddiaoqbstlgou5ox6zs27ngjjz5ukrxafm2z5ijxod@so4eqnykiegj/ Thanks, Li Li Chen (3): jbd2: use READ_ONCE for lockless jinode reads ext4: use READ_ONCE for lockless jinode reads ocfs2: use READ_ONCE for lockless jinode reads fs/ext4/inode.c | 6 ++++-- fs/ext4/super.c | 13 ++++++++----- fs/jbd2/commit.c | 39 ++++++++++++++++++++++++++++++++------- fs/jbd2/transaction.c | 2 +- fs/ocfs2/journal.c | 7 +++++-- 5 files changed, 50 insertions(+), 17 deletions(-) -- 2.52.0
On Fri, Jan 30, 2026 at 08:26:40PM +0800, Li Chen wrote:
> I think that's the only issue that exists ...

I don't think that's true. I think what you're asserting is that:

	int *pi;
	int **ppi;

	spin_lock(&lock);
	*pi = 1;
	*ppi = pi;
	spin_unlock(&lock);

that the store to *pi must be observed before the store to *ppi, and that's not true for a reader which doesn't read the value of lock. The store to *ppi needs a store barrier before it.

I don't think that race exists. If it does exist, the READ_ONCE will not help (on 32 bit platforms) because it's a 64-bit quantity and 32-bit platforms do not, in general, have a way to do an atomic 64-bit load (look at the implementation of i_size_read() for the gyrations we go through to assure a non-torn read of that value).

I think the prevention of this race occurs at a higher level than "it's updated under a lock". That is, jbd2_journal_file_inode() is never called for a jinode which is currently being operated on by j_submit_inode_data_buffers(). Now, I'm not an expert on the jbd code, so I may be wrong here.
{ "author": "Matthew Wilcox <willy@infradead.org>", "date": "Fri, 30 Jan 2026 16:36:28 +0000", "thread_id": "jvo5sk46f6cvqmkgetrlybs46kryhxetsvapkmx4tocbdirk3w@ume4qfpsddco.mbox.gz" }
Hi Matthew,

Thank you very much for the detailed explanation and for your patience.

On Sat, 31 Jan 2026 00:36:28 +0800, Matthew Wilcox wrote:

Understood. Yes, agreed - thank you. I was implicitly assuming the reader had taken the same lock at some point, which is not a valid assumption for a lockless reader.

Thanks. I tried to sanity-check whether that "never called" invariant holds in practice. I added a small local-only tracepoint (not for upstream) which fires from jbd2_journal_file_inode() when it observes JI_COMMIT_RUNNING already set on the same jinode:

	/* fs/jbd2/transaction.c */
	if (unlikely(jinode->i_flags & JI_COMMIT_RUNNING))
		trace_jbd2_file_inode_commit_running(...);

The trace event prints dev, ino, current tid, jinode flags, and the i_transaction / i_next_transaction tids. With an ext4 test (ordered mode) I do see repeated hits. Trace output:

	... jbd2_submit_inode_data: dev 7,0 ino 20
	... jbd2_file_inode_commit_running: dev 7,0 ino 20 tid 3 op 0x6 i_flags 0x7 j_tid 2 j_next 3 ... comm python3

So it looks like jbd2_journal_file_inode() can run while JI_COMMIT_RUNNING is set for that inode, i.e. during the window where the commit thread drops j_list_lock around ->j_submit_inode_data_buffers() / ->j_finish_inode_data_buffers().

Given this, would you prefer the series to move towards something like:

1. taking a snapshot of i_dirty_start/end under j_list_lock in the commit path and passing the snapshot to the filesystem callback (so callbacks never read jinode->i_dirty_* locklessly), or
2. introducing a real synchronization mechanism for the dirty range itself (seqcount/atomic64/etc), or
3. something else?

I'd be very grateful for guidance on what you consider the most appropriate direction, or for you to point out anything I've got wrong.

Thanks again.

Regards,
Li
{ "author": "Li Chen <me@linux.beauty>", "date": "Sun, 01 Feb 2026 12:37:36 +0800", "thread_id": "jvo5sk46f6cvqmkgetrlybs46kryhxetsvapkmx4tocbdirk3w@ume4qfpsddco.mbox.gz" }
On Fri 30-01-26 11:12:30, Li Chen wrote: Just one nit below. With that fixed feel free to add: Reviewed-by: Jan Kara <jack@suse.cz> i_vfs_inode never changes so READ_ONCE is pointless here. Honza -- Jan Kara <jack@suse.com> SUSE Labs, CR
{ "author": "Jan Kara <jack@suse.cz>", "date": "Mon, 2 Feb 2026 17:40:45 +0100", "thread_id": "jvo5sk46f6cvqmkgetrlybs46kryhxetsvapkmx4tocbdirk3w@ume4qfpsddco.mbox.gz" }
On Fri 30-01-26 11:12:31, Li Chen wrote: Looks good. Feel free to add: Reviewed-by: Jan Kara <jack@suse.cz> Honza -- Jan Kara <jack@suse.com> SUSE Labs, CR
{ "author": "Jan Kara <jack@suse.cz>", "date": "Mon, 2 Feb 2026 17:41:39 +0100", "thread_id": "jvo5sk46f6cvqmkgetrlybs46kryhxetsvapkmx4tocbdirk3w@ume4qfpsddco.mbox.gz" }
On Mon 02-02-26 17:40:45, Jan Kara wrote: One more note: I've realized that for this to work you also need to make jbd2_journal_file_inode() use WRITE_ONCE() when updating i_dirty_start, i_dirty_end and i_flags. Honza -- Jan Kara <jack@suse.com> SUSE Labs, CR
{ "author": "Jan Kara <jack@suse.cz>", "date": "Mon, 2 Feb 2026 17:52:30 +0100", "thread_id": "jvo5sk46f6cvqmkgetrlybs46kryhxetsvapkmx4tocbdirk3w@ume4qfpsddco.mbox.gz" }
On Fri 30-01-26 16:36:28, Matthew Wilcox wrote:

Well, the above reasonably accurately describes the code making jinode visible. The reader code is like:

	spin_lock(&lock);
	pi = *ppi;
	spin_unlock(&lock);
	work with pi

so it is guaranteed to see pi properly initialized. The problem is that "work with pi" can race with other code updating the content of pi, which is what this patch is trying to deal with.

Sadly the race does exist - journal_submit_data_buffers() on the committing transaction can run in parallel with jbd2_journal_file_inode() in the running transaction. There's nothing preventing that. The problems arising out of that are mostly theoretical but they do exist. In particular you're correct that on 32-bit platforms this will be racy even with READ_ONCE / WRITE_ONCE, which I didn't realize.

Li, the best way to address this concern would be to modify jbd2_inode to switch i_dirty_start / i_dirty_end to account in PAGE_SIZE units instead of bytes and be of type pgoff_t. jbd2_journal_file_inode() just needs to round the passed ranges properly...

								Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
{ "author": "Jan Kara <jack@suse.cz>", "date": "Mon, 2 Feb 2026 18:17:49 +0100", "thread_id": "jvo5sk46f6cvqmkgetrlybs46kryhxetsvapkmx4tocbdirk3w@ume4qfpsddco.mbox.gz" }
lkml
[PATCH 0/2] Fix port enumeration failure and NULL endpoint issue
I ran CXL mock testing with the next branch, and I usually hit the following call trace.

 Oops: general protection fault, probably for non-canonical address 0xdffffc0000000092: 0000 [#1] SMP KASAN NOPTI
 KASAN: null-ptr-deref in range [0x0000000000000490-0x0000000000000497]
 CPU: 3 UID: 0 PID: 42 Comm: kworker/u16:1 Tainted: G O J 6.19.0-rc5-cxl+ #4 PREEMPT(voluntary)
 Tainted: [O]=OOT_MODULE, [J]=FWCTL
 Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.17.0-0-gb52ca86e094d-prebuilt.qemu.org 04/01/2014
 Workqueue: async async_run_entry_fn
 RIP: 0010:cxl_dpa_to_region+0x105/0x1f0 [cxl_core]
 Call Trace:
  <TASK>
  cxl_event_trace_record+0xd1/0xa70 [cxl_core]
  __cxl_event_trace_record+0x12f/0x1e0 [cxl_core]
  cxl_mem_get_records_log+0x261/0x500 [cxl_core]
  cxl_mem_get_event_records+0x7c/0xc0 [cxl_core]
  cxl_mock_mem_probe+0xd38/0x1c60 [cxl_mock_mem]
  platform_probe+0x9d/0x130
  really_probe+0x1c8/0x960
  driver_probe_device+0x45/0x120
  __device_attach_driver+0x15d/0x280
  bus_for_each_drv+0x100/0x180
  __device_attach_async_helper+0x199/0x250
  async_run_entry_fn+0x95/0x430
  process_one_work+0x7db/0x1940

After detailed debugging, I identified two independent issues that together lead to the problem.

Issue 1: cxlmd->endpoint is initialized to ERR_PTR(-ENXIO) during cxlmd creation, but the cxl subsystem usually checks endpoint availability by checking whether it is NULL. As a result, if endpoint port creation fails, some code paths may incorrectly treat the endpoint as available. In the call trace above, endpoint port creation fails but cxl_dpa_to_region() still considers it available. Patch #1 fixes this by initializing cxlmd->endpoint to NULL by default.

Issue 2: The second issue is why CXL port enumeration could fail. What I observed is that when two memdevs were trying to enumerate the same port, the first memdev was responsible for port creation and attaching.

However, there is a small window between the point where the new port becomes visible (after being added to the device list of the cxl bus) and when it is bound to the port driver. During this window, the second memdev may discover the port and acquire its lock while attempting to add its dport, which blocks bus_probe_device() inside device_add(). As a result, the second memdev observes the port as unbound and fails to add its dport. Patch #2 fixes this race by holding the grandparent port lock during dport addition, preventing premature access before driver binding has completed.

base-commit: 63050be0bfe0b280cce5d701b31940fd84858609 cxl/next

Li Ming (2):
  cxl/core: Set cxlmd->endpoint to NULL by default
  cxl/core: Hold grandparent port lock for dport adding

 drivers/cxl/core/memdev.c | 2 +-
 drivers/cxl/core/port.c   | 6 +++++-
 2 files changed, 6 insertions(+), 2 deletions(-)

--
2.43.0
CXL testing environment can trigger the following trace:

 Oops: general protection fault, probably for non-canonical address 0xdffffc0000000092: 0000 [#1] SMP KASAN NOPTI
 KASAN: null-ptr-deref in range [0x0000000000000490-0x0000000000000497]
 RIP: 0010:cxl_dpa_to_region+0x105/0x1f0 [cxl_core]
 Call Trace:
  <TASK>
  cxl_event_trace_record+0xd1/0xa70 [cxl_core]
  __cxl_event_trace_record+0x12f/0x1e0 [cxl_core]
  cxl_mem_get_records_log+0x261/0x500 [cxl_core]
  cxl_mem_get_event_records+0x7c/0xc0 [cxl_core]
  cxl_mock_mem_probe+0xd38/0x1c60 [cxl_mock_mem]
  platform_probe+0x9d/0x130
  really_probe+0x1c8/0x960
  __driver_probe_device+0x187/0x3e0
  driver_probe_device+0x45/0x120
  __device_attach_driver+0x15d/0x280

commit 29317f8dc6ed ("cxl/mem: Introduce cxl_memdev_attach for CXL-dependent operation") initializes cxlmd->endpoint to ERR_PTR(-ENXIO) in cxl_memdev_alloc(). However, cxl_dpa_to_region() treats a non-NULL cxlmd->endpoint as a valid endpoint. Across the CXL core, endpoint availability is generally determined by checking whether it is NULL. Align with this convention by initializing cxlmd->endpoint to NULL by default.

Fixes: 29317f8dc6ed ("cxl/mem: Introduce cxl_memdev_attach for CXL-dependent operation")
Signed-off-by: Li Ming <ming.li@zohomail.com>
---
 drivers/cxl/core/memdev.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c
index af3d0cc65138..41a507b5daa4 100644
--- a/drivers/cxl/core/memdev.c
+++ b/drivers/cxl/core/memdev.c
@@ -675,7 +675,7 @@ static struct cxl_memdev *cxl_memdev_alloc(struct cxl_dev_state *cxlds,
 	cxlmd->id = rc;
 	cxlmd->depth = -1;
 	cxlmd->attach = attach;
-	cxlmd->endpoint = ERR_PTR(-ENXIO);
+	cxlmd->endpoint = NULL;

 	dev = &cxlmd->dev;
 	device_initialize(dev);
--
2.43.0
{ "author": "Li Ming <ming.li@zohomail.com>", "date": "Sun, 1 Feb 2026 17:30:01 +0800", "thread_id": "20260201093002.1281858-1-ming.li@zohomail.com.mbox.gz" }
When the CXL subsystem adds a cxl port to a hierarchy, there is a small window where the new port becomes visible before it is bound to a driver. This happens because device_add() adds a device to the bus device list before bus_probe_device() binds it to a driver. So if two cxl memdevs are trying to add a dport to the same port via devm_cxl_enumerate_ports(), the second cxl memdev may observe the port and attempt to add a dport, but fails because the port has not yet been attached to the cxl port driver. The sequence is like:

  CPU 0                                CPU 1
  devm_cxl_enumerate_ports()
  # port not found, add it
  add_port_attach_ep()
  # hold the parent port lock
  # to add the new port
  devm_cxl_create_port()
    device_add()
      # Add dev to bus devs list
      bus_add_device()
                                       devm_cxl_enumerate_ports()
                                       # found the port
                                       find_cxl_port_by_uport()
                                       # hold port lock to add a dport
                                       device_lock(the port)
                                       find_or_add_dport()
                                         cxl_port_add_dport()
                                         # return -ENXIO because
                                         # port->dev.driver is NULL
                                       device_unlock(the port)
      bus_probe_device()
        # hold the port lock
        # for attaching
        device_lock(the port)
        attaching the new port
        device_unlock(the port)

To fix this race, require that dport addition holds the parent port lock of the target port. The CXL subsystem already requires holding the parent port lock while attaching a new port. Therefore, successfully acquiring the parent port lock guarantees that port attaching has completed.

Fixes: 4f06d81e7c6a ("cxl: Defer dport allocation for switch ports")
Signed-off-by: Li Ming <ming.li@zohomail.com>
---
 drivers/cxl/core/port.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c
index 54f72452fb06..fef2fe913e1f 100644
--- a/drivers/cxl/core/port.c
+++ b/drivers/cxl/core/port.c
@@ -1817,8 +1817,12 @@ int devm_cxl_enumerate_ports(struct cxl_memdev *cxlmd)
 		/*
 		 * RP port enumerated by cxl_acpi without dport will
 		 * have the dport added here.
+		 *
+		 * Hold the parent port lock here in case the port can
+		 * be observed but has not been attached yet.
 		 */
-		scoped_guard(device, &port->dev) {
+		scoped_guard(device, &parent_port_of(port)->dev) {
+			guard(device)(&port->dev);
 			dport = find_or_add_dport(port, dport_dev);
 			if (IS_ERR(dport)) {
 				if (PTR_ERR(dport) == -EAGAIN)
--
2.43.0
{ "author": "Li Ming <ming.li@zohomail.com>", "date": "Sun, 1 Feb 2026 17:30:02 +0800", "thread_id": "20260201093002.1281858-1-ming.li@zohomail.com.mbox.gz" }
On Sun, 1 Feb 2026 17:30:01 +0800 Li Ming <ming.li@zohomail.com> wrote:

I had a look at whether it made sense to use IS_ERR_OR_NULL() to check for validity of the endpoint, but it would be somewhat fiddly and I think you are correct that the convention here seems to be that NULL means not set. We don't need the error code. One comment inline. Either way, nice catch.

Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>

cxlmd has just been allocated with kzalloc so I'd argue we don't need this to be explicitly set at all. Seems like a natural and safe default.
{ "author": "Jonathan Cameron <jonathan.cameron@huawei.com>", "date": "Mon, 2 Feb 2026 14:41:03 +0000", "thread_id": "20260201093002.1281858-1-ming.li@zohomail.com.mbox.gz" }
On Sun, 1 Feb 2026 17:30:02 +0800 Li Ming <ming.li@zohomail.com> wrote:

Indenting not consistent here, as this call is in devm_cxl_enumerate_ports().

Spell check: Guarantees.

Analysis looks reasonable to me, but I'm not hugely confident on this one so would like others to take a close look as well. Question inline.

I'm nervous about whether this is the right lock. For unregister_port() (which is easier to track down than the add path locking) the lock taken depends on where the port is that is being unregistered. Specifically, root ports are unregistered under parent->uport_dev, not parent->dev.
{ "author": "Jonathan Cameron <jonathan.cameron@huawei.com>", "date": "Mon, 2 Feb 2026 15:39:24 +0000", "thread_id": "20260201093002.1281858-1-ming.li@zohomail.com.mbox.gz" }
On Mon, Feb 02, 2026 at 02:41:03PM +0000, Jonathan Cameron wrote:

Doing validity checks on pointers by checking for NULL is a pretty common convention kernel-wide; I would consider setting some structure's value to an ERR_PTR to be the aberration. So yeah, good catch.

Reviewed-by: Gregory Price <gourry@gourry.net>

~Gregory
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Mon, 2 Feb 2026 10:48:14 -0500", "thread_id": "20260201093002.1281858-1-ming.li@zohomail.com.mbox.gz" }
lkml
[PATCH 0/2] Fix port enumeration failure and NULL endpoint issue
I ran CXL mock testing with next branch, I usually hit the following call trace. Oops: general protection fault, probably for non-canonical address 0xdffffc0000000092: 0000 [#1] SMP KASAN NOPTI KASAN: null-ptr-deref in range [0x0000000000000490-0x0000000000000497] CPU: 3 UID: 0 PID: 42 Comm: kworker/u16:1 Tainted: G O J 6.19.0-rc5-cxl+ #4 PREEMPT(voluntary) Tainted: [O]=OOT_MODULE, [J]=FWCTL Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.17.0-0-gb52ca86e094d-prebuilt.qemu.org 04/01/2014 Workqueue: async async_run_entry_fn RIP: 0010:cxl_dpa_to_region+0x105/0x1f0 [cxl_core] Call Trace: <TASK> cxl_event_trace_record+0xd1/0xa70 [cxl_core] __cxl_event_trace_record+0x12f/0x1e0 [cxl_core] cxl_mem_get_records_log+0x261/0x500 [cxl_core] cxl_mem_get_event_records+0x7c/0xc0 [cxl_core] cxl_mock_mem_probe+0xd38/0x1c60 [cxl_mock_mem] platform_probe+0x9d/0x130 really_probe+0x1c8/0x960 driver_probe_device+0x45/0x120 __device_attach_driver+0x15d/0x280 bus_for_each_drv+0x100/0x180 __device_attach_async_helper+0x199/0x250 async_run_entry_fn+0x95/0x430 process_one_work+0x7db/0x1940 After detailed debugging, I identified two independent issues that together leads to the problem. Issue 1: cxlmd->endpoint is initialized to ERR_PTR(-ENXIO) during cxlmd creation, but cxl subsystem usually checks endpoint availability by checking whether it is NULL. As a result, if endpoint port creation fails, some code paths may incorrectly treat the endpoint as available. In the call trace above, endpoint port creation fails but cxl_dpa_to_region() still considers that is available. Patch #1 is used to fix it, the solution is initializing cxlmd->endpoint to NULL by default. Issue 2: The second issue is why CXL port enumeration could be failure. What I observed is when two memdev were trying to enumerate a same port, the first memdev was responsible for port creation and attaching. 
However, there is a small window between the point where the new port becomes visible (after being added to the device list of the cxl bus) and when it is bound to the port driver. During this window, the second memdev may discover the port and acquire its lock while attempting to add its dport, which blocks bus_probe_device() inside device_add(). As a result, the second memdev observes the port as unbound and fails to add its dport. Patch #2 fixes this race by holding the grandparent port lock during dport addition, preventing premature access before driver binding completes. base-commit: 63050be0bfe0b280cce5d701b31940fd84858609 cxl/next Li Ming (2): cxl/core: Set cxlmd->endpoint to NULL by default cxl/core: Hold grandparent port lock for dport adding. drivers/cxl/core/memdev.c | 2 +- drivers/cxl/core/port.c | 6 +++++- 2 files changed, 6 insertions(+), 2 deletions(-) -- 2.43.0
On Sun, Feb 01, 2026 at 05:30:02PM +0800, Li Ming wrote: With just a cursory look, I'm immediately concerned that you're fixing a race condition with a lock inversion. Can you guarantee the following is not happening? Thread A Thread B ---------------------------- lock(parent) lock(port) lock(port) lock(parent) ~Gregory
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Mon, 2 Feb 2026 11:31:45 -0500", "thread_id": "20260201093002.1281858-1-ming.li@zohomail.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
Currently, CXL regions that create DAX devices have no mechanism to select the hotplug online policy for kmem regions at region creation time. Users must either rely on a build-time default or manually configure each memory block after hotplug occurs. Additionally, there is no explicit way to choose between device_dax and dax_kmem modes at region creation time - regions default to kmem. This series addresses both issues by: 1. Plumbing an online_type parameter through the memory hotplug path, from mm/memory_hotplug through the DAX layer, enabling drivers to specify the desired policy (offline, online, online_movable). 2. Adding infrastructure for explicit dax driver selection (kmem vs device) when creating CXL DAX regions. 3. Introducing new CXL region drivers that provide a two-stage binding process with user-configurable policy between region creation and memory hotplug. The new drivers are: - cxl_devdax_region: Creates dax_regions that bind to device_dax driver - cxl_sysram_region: Creates sysram_region devices with hotplug policy - cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions The sysram_region device exposes an 'online_type' sysfs attribute allowing users to configure the memory online type before hotplug: echo region0 > cxl_sysram_region/bind echo online_movable > sysram_region0/online_type echo sysram_region0 > cxl_dax_kmem_region/bind This enables explicit control over both the dax driver mode and the memory hotplug policy for CXL memory regions. In the future, with DCD regions, this will also provide a policy step which dictates how extents will be surfaced and managed (e.g. if the dc region is bound to the sysram driver, it will surface as system memory, while the devdax driver will surface extents as new devdax). 
Gregory Price (9): mm/memory_hotplug: pass online_type to online_memory_block() via arg mm/memory_hotplug: add __add_memory_driver_managed() with online_type arg dax: plumb online_type from dax_kmem creators to hotplug drivers/cxl,dax: add dax driver mode selection for dax regions cxl/core/region: move pmem region driver logic into pmem_region cxl/core/region: move dax region device logic into dax_region.c cxl/core: add cxl_devdax_region driver for explicit userland region binding cxl/core: Add dax_kmem_region and sysram_region drivers Documentation/driver-api/cxl: add dax and sysram driver documentation Documentation/ABI/testing/sysfs-bus-cxl | 21 ++ .../driver-api/cxl/linux/cxl-driver.rst | 43 +++ .../driver-api/cxl/linux/dax-driver.rst | 29 ++ drivers/cxl/core/Makefile | 3 + drivers/cxl/core/core.h | 11 + drivers/cxl/core/dax_region.c | 179 ++++++++++ drivers/cxl/core/pmem_region.c | 191 +++++++++++ drivers/cxl/core/port.c | 2 + drivers/cxl/core/region.c | 321 ++---------------- drivers/cxl/core/sysram_region.c | 180 ++++++++++ drivers/cxl/cxl.h | 29 ++ drivers/dax/bus.c | 3 + drivers/dax/bus.h | 7 +- drivers/dax/cxl.c | 7 +- drivers/dax/dax-private.h | 2 + drivers/dax/hmem/hmem.c | 2 + drivers/dax/kmem.c | 13 +- drivers/dax/pmem.c | 2 + include/linux/dax.h | 5 + include/linux/memory_hotplug.h | 3 + mm/memory_hotplug.c | 95 ++++-- 21 files changed, 826 insertions(+), 322 deletions(-) create mode 100644 drivers/cxl/core/dax_region.c create mode 100644 drivers/cxl/core/pmem_region.c create mode 100644 drivers/cxl/core/sysram_region.c -- 2.52.0
Modify online_memory_block() to accept the online type through its arg parameter rather than calling mhp_get_default_online_type() internally. This prepares for allowing callers to specify explicit online types. Update the caller in add_memory_resource() to pass the default online type via a local variable. No functional change. Cc: Oscar Salvador <osalvador@suse.de> Cc: Andrew Morton <akpm@linux-foundation.org> Acked-by: David Hildenbrand (Red Hat) <david@kernel.org> Signed-off-by: Gregory Price <gourry@gourry.net> --- mm/memory_hotplug.c | 12 +++++++++--- 1 file changed, 9 insertions(+), 3 deletions(-) diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c index bc805029da51..87796b617d9e 100644 --- a/mm/memory_hotplug.c +++ b/mm/memory_hotplug.c @@ -1337,7 +1337,9 @@ static int check_hotplug_memory_range(u64 start, u64 size) static int online_memory_block(struct memory_block *mem, void *arg) { - mem->online_type = mhp_get_default_online_type(); + int *online_type = arg; + + mem->online_type = *online_type; return device_online(&mem->dev); } @@ -1578,8 +1580,12 @@ int add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags) merge_system_ram_resource(res); /* online pages if requested */ - if (mhp_get_default_online_type() != MMOP_OFFLINE) - walk_memory_blocks(start, size, NULL, online_memory_block); + if (mhp_get_default_online_type() != MMOP_OFFLINE) { + int online_type = mhp_get_default_online_type(); + + walk_memory_blocks(start, size, &online_type, + online_memory_block); + } return ret; error: -- 2.52.0
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:04:34 -0500", "thread_id": "20260202175640.00003ef5@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
Enable dax kmem driver to select how to online the memory rather than implicitly depending on the system default. This will allow users of dax to plumb through a preferred auto-online policy for their region. Refactor and new interface: Add __add_memory_driver_managed() which accepts an explicit online_type and export mhp_get_default_online_type() so callers can pass it when they want the default behavior. Refactor: Extract __add_memory_resource() to take an explicit online_type parameter, and update add_memory_resource() to pass the system default. No functional change for existing users. Cc: David Hildenbrand <david@kernel.org> Cc: Oscar Salvador <osalvador@suse.de> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Gregory Price <gourry@gourry.net> --- include/linux/memory_hotplug.h | 3 ++ mm/memory_hotplug.c | 91 ++++++++++++++++++++++++---------- 2 files changed, 67 insertions(+), 27 deletions(-) diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h index f2f16cdd73ee..1eb63d1a247d 100644 --- a/include/linux/memory_hotplug.h +++ b/include/linux/memory_hotplug.h @@ -293,6 +293,9 @@ extern int __add_memory(int nid, u64 start, u64 size, mhp_t mhp_flags); extern int add_memory(int nid, u64 start, u64 size, mhp_t mhp_flags); extern int add_memory_resource(int nid, struct resource *resource, mhp_t mhp_flags); +int __add_memory_driver_managed(int nid, u64 start, u64 size, + const char *resource_name, mhp_t mhp_flags, + int online_type); extern int add_memory_driver_managed(int nid, u64 start, u64 size, const char *resource_name, mhp_t mhp_flags); diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c index 87796b617d9e..d3ca95b872bd 100644 --- a/mm/memory_hotplug.c +++ b/mm/memory_hotplug.c @@ -239,6 +239,7 @@ int mhp_get_default_online_type(void) return mhp_default_online_type; } +EXPORT_SYMBOL_GPL(mhp_get_default_online_type); void mhp_set_default_online_type(int online_type) { @@ -1490,7 +1491,8 @@ static int 
create_altmaps_and_memory_blocks(int nid, struct memory_group *group, * * we are OK calling __meminit stuff here - we have CONFIG_MEMORY_HOTPLUG */ -int add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags) +static int __add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags, + int online_type) { struct mhp_params params = { .pgprot = pgprot_mhp(PAGE_KERNEL) }; enum memblock_flags memblock_flags = MEMBLOCK_NONE; @@ -1580,12 +1582,9 @@ int add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags) merge_system_ram_resource(res); /* online pages if requested */ - if (mhp_get_default_online_type() != MMOP_OFFLINE) { - int online_type = mhp_get_default_online_type(); - + if (online_type != MMOP_OFFLINE) walk_memory_blocks(start, size, &online_type, online_memory_block); - } return ret; error: @@ -1601,7 +1600,13 @@ int add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags) return ret; } -/* requires device_hotplug_lock, see add_memory_resource() */ +int add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags) +{ + return __add_memory_resource(nid, res, mhp_flags, + mhp_get_default_online_type()); +} + +/* requires device_hotplug_lock, see __add_memory_resource() */ int __add_memory(int nid, u64 start, u64 size, mhp_t mhp_flags) { struct resource *res; @@ -1629,29 +1634,24 @@ int add_memory(int nid, u64 start, u64 size, mhp_t mhp_flags) } EXPORT_SYMBOL_GPL(add_memory); -/* - * Add special, driver-managed memory to the system as system RAM. Such - * memory is not exposed via the raw firmware-provided memmap as system - * RAM, instead, it is detected and added by a driver - during cold boot, - * after a reboot, and after kexec. 
- * - * Reasons why this memory should not be used for the initial memmap of a - * kexec kernel or for placing kexec images: - * - The booting kernel is in charge of determining how this memory will be - * used (e.g., use persistent memory as system RAM) - * - Coordination with a hypervisor is required before this memory - * can be used (e.g., inaccessible parts). +/** + * __add_memory_driver_managed - add driver-managed memory with explicit online_type + * @nid: NUMA node ID where the memory will be added + * @start: Start physical address of the memory range + * @size: Size of the memory range in bytes + * @resource_name: Resource name in format "System RAM ($DRIVER)" + * @mhp_flags: Memory hotplug flags + * @online_type: Online behavior (MMOP_ONLINE, MMOP_ONLINE_KERNEL, + * MMOP_ONLINE_MOVABLE, or MMOP_OFFLINE) * - * For this memory, no entries in /sys/firmware/memmap ("raw firmware-provided - * memory map") are created. Also, the created memory resource is flagged - * with IORESOURCE_SYSRAM_DRIVER_MANAGED, so in-kernel users can special-case - * this memory as well (esp., not place kexec images onto it). + * Add driver-managed memory with explicit online_type specification. + * The resource_name must have the format "System RAM ($DRIVER)". * - * The resource_name (visible via /proc/iomem) has to have the format - * "System RAM ($DRIVER)". + * Return: 0 on success, negative error code on failure. 
*/ -int add_memory_driver_managed(int nid, u64 start, u64 size, - const char *resource_name, mhp_t mhp_flags) +int __add_memory_driver_managed(int nid, u64 start, u64 size, + const char *resource_name, mhp_t mhp_flags, + int online_type) { struct resource *res; int rc; @@ -1661,6 +1661,9 @@ int add_memory_driver_managed(int nid, u64 start, u64 size, resource_name[strlen(resource_name) - 1] != ')') return -EINVAL; + if (online_type < 0 || online_type > MMOP_ONLINE_MOVABLE) + return -EINVAL; + lock_device_hotplug(); res = register_memory_resource(start, size, resource_name); @@ -1669,7 +1672,7 @@ int add_memory_driver_managed(int nid, u64 start, u64 size, goto out_unlock; } - rc = add_memory_resource(nid, res, mhp_flags); + rc = __add_memory_resource(nid, res, mhp_flags, online_type); if (rc < 0) release_memory_resource(res); @@ -1677,6 +1680,40 @@ int add_memory_driver_managed(int nid, u64 start, u64 size, unlock_device_hotplug(); return rc; } +EXPORT_SYMBOL_FOR_MODULES(__add_memory_driver_managed, "kmem"); + +/* + * Add special, driver-managed memory to the system as system RAM. Such + * memory is not exposed via the raw firmware-provided memmap as system + * RAM, instead, it is detected and added by a driver - during cold boot, + * after a reboot, and after kexec. + * + * Reasons why this memory should not be used for the initial memmap of a + * kexec kernel or for placing kexec images: + * - The booting kernel is in charge of determining how this memory will be + * used (e.g., use persistent memory as system RAM) + * - Coordination with a hypervisor is required before this memory + * can be used (e.g., inaccessible parts). + * + * For this memory, no entries in /sys/firmware/memmap ("raw firmware-provided + * memory map") are created. Also, the created memory resource is flagged + * with IORESOURCE_SYSRAM_DRIVER_MANAGED, so in-kernel users can special-case + * this memory as well (esp., not place kexec images onto it). 
+ * + * The resource_name (visible via /proc/iomem) has to have the format + * "System RAM ($DRIVER)". + * + * Memory will be onlined using the system default online type. + * + * Returns 0 on success, negative error code on failure. + */ +int add_memory_driver_managed(int nid, u64 start, u64 size, + const char *resource_name, mhp_t mhp_flags) +{ + return __add_memory_driver_managed(nid, start, size, resource_name, + mhp_flags, + mhp_get_default_online_type()); +} EXPORT_SYMBOL_GPL(add_memory_driver_managed); /* -- 2.52.0
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:04:35 -0500", "thread_id": "20260202175640.00003ef5@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
There is no way for drivers leveraging dax_kmem to plumb through a preferred auto-online policy - the system default policy is forced. Add online_type field to DAX device creation path to allow drivers to specify an auto-online policy when using the kmem driver. Current callers initialize online_type to mhp_get_default_online_type() which resolves to the system default (memhp_default_online_type). No functional change to existing drivers. Cc:David Hildenbrand <david@kernel.org> Signed-off-by: Gregory Price <gourry@gourry.net> --- drivers/cxl/core/region.c | 2 ++ drivers/cxl/cxl.h | 1 + drivers/dax/bus.c | 3 +++ drivers/dax/bus.h | 1 + drivers/dax/cxl.c | 1 + drivers/dax/dax-private.h | 2 ++ drivers/dax/hmem/hmem.c | 2 ++ drivers/dax/kmem.c | 13 +++++++++++-- drivers/dax/pmem.c | 2 ++ 9 files changed, 25 insertions(+), 2 deletions(-) diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c index 5bd1213737fa..eef5d5fe3f95 100644 --- a/drivers/cxl/core/region.c +++ b/drivers/cxl/core/region.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0-only /* Copyright(c) 2022 Intel Corporation. All rights reserved. 
*/ #include <linux/memregion.h> +#include <linux/memory_hotplug.h> #include <linux/genalloc.h> #include <linux/debugfs.h> #include <linux/device.h> @@ -3459,6 +3460,7 @@ static int devm_cxl_add_dax_region(struct cxl_region *cxlr) if (IS_ERR(cxlr_dax)) return PTR_ERR(cxlr_dax); + cxlr_dax->online_type = mhp_get_default_online_type(); dev = &cxlr_dax->dev; rc = dev_set_name(dev, "dax_region%d", cxlr->id); if (rc) diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index ba17fa86d249..07d57d13f4c7 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -591,6 +591,7 @@ struct cxl_dax_region { struct device dev; struct cxl_region *cxlr; struct range hpa_range; + int online_type; /* MMOP_ value for kmem driver */ }; /** diff --git a/drivers/dax/bus.c b/drivers/dax/bus.c index fde29e0ad68b..121a6dd0afe7 100644 --- a/drivers/dax/bus.c +++ b/drivers/dax/bus.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright(c) 2017-2018 Intel Corporation. All rights reserved. */ #include <linux/memremap.h> +#include <linux/memory_hotplug.h> #include <linux/device.h> #include <linux/mutex.h> #include <linux/list.h> @@ -395,6 +396,7 @@ static ssize_t create_store(struct device *dev, struct device_attribute *attr, .size = 0, .id = -1, .memmap_on_memory = false, + .online_type = mhp_get_default_online_type(), }; struct dev_dax *dev_dax = __devm_create_dev_dax(&data); @@ -1494,6 +1496,7 @@ static struct dev_dax *__devm_create_dev_dax(struct dev_dax_data *data) ida_init(&dev_dax->ida); dev_dax->memmap_on_memory = data->memmap_on_memory; + dev_dax->online_type = data->online_type; inode = dax_inode(dax_dev); dev->devt = inode->i_rdev; diff --git a/drivers/dax/bus.h b/drivers/dax/bus.h index cbbf64443098..4ac92a4edfe7 100644 --- a/drivers/dax/bus.h +++ b/drivers/dax/bus.h @@ -24,6 +24,7 @@ struct dev_dax_data { resource_size_t size; int id; bool memmap_on_memory; + int online_type; /* MMOP_ value for kmem driver */ }; struct dev_dax *devm_create_dev_dax(struct dev_dax_data *data); 
diff --git a/drivers/dax/cxl.c b/drivers/dax/cxl.c index 13cd94d32ff7..856a0cd24f3b 100644 --- a/drivers/dax/cxl.c +++ b/drivers/dax/cxl.c @@ -27,6 +27,7 @@ static int cxl_dax_region_probe(struct device *dev) .id = -1, .size = range_len(&cxlr_dax->hpa_range), .memmap_on_memory = true, + .online_type = cxlr_dax->online_type, }; return PTR_ERR_OR_ZERO(devm_create_dev_dax(&data)); diff --git a/drivers/dax/dax-private.h b/drivers/dax/dax-private.h index c6ae27c982f4..9559718cc988 100644 --- a/drivers/dax/dax-private.h +++ b/drivers/dax/dax-private.h @@ -77,6 +77,7 @@ struct dev_dax_range { * @dev: device core * @pgmap: pgmap for memmap setup / lifetime (driver owned) * @memmap_on_memory: allow kmem to put the memmap in the memory + * @online_type: MMOP_* online type for memory hotplug * @nr_range: size of @ranges * @ranges: range tuples of memory used */ @@ -91,6 +92,7 @@ struct dev_dax { struct device dev; struct dev_pagemap *pgmap; bool memmap_on_memory; + int online_type; int nr_range; struct dev_dax_range *ranges; }; diff --git a/drivers/dax/hmem/hmem.c b/drivers/dax/hmem/hmem.c index c18451a37e4f..119914b08fd9 100644 --- a/drivers/dax/hmem/hmem.c +++ b/drivers/dax/hmem/hmem.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 #include <linux/platform_device.h> +#include <linux/memory_hotplug.h> #include <linux/memregion.h> #include <linux/module.h> #include <linux/dax.h> @@ -36,6 +37,7 @@ static int dax_hmem_probe(struct platform_device *pdev) .id = -1, .size = region_idle ? 
0 : range_len(&mri->range), .memmap_on_memory = false, + .online_type = mhp_get_default_online_type(), }; return PTR_ERR_OR_ZERO(devm_create_dev_dax(&data)); diff --git a/drivers/dax/kmem.c b/drivers/dax/kmem.c index c036e4d0b610..550dc605229e 100644 --- a/drivers/dax/kmem.c +++ b/drivers/dax/kmem.c @@ -16,6 +16,11 @@ #include "dax-private.h" #include "bus.h" +/* Internal function exported only to kmem module */ +extern int __add_memory_driver_managed(int nid, u64 start, u64 size, + const char *resource_name, + mhp_t mhp_flags, int online_type); + /* * Default abstract distance assigned to the NUMA node onlined * by DAX/kmem if the low level platform driver didn't initialize @@ -72,6 +77,7 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax) struct dax_kmem_data *data; struct memory_dev_type *mtype; int i, rc, mapped = 0; + int online_type; mhp_t mhp_flags; int numa_node; int adist = MEMTIER_DEFAULT_DAX_ADISTANCE; @@ -134,6 +140,8 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax) goto err_reg_mgid; data->mgid = rc; + online_type = dev_dax->online_type; + for (i = 0; i < dev_dax->nr_range; i++) { struct resource *res; struct range range; @@ -174,8 +182,9 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax) * Ensure that future kexec'd kernels will not treat * this as RAM automatically. */ - rc = add_memory_driver_managed(data->mgid, range.start, - range_len(&range), kmem_name, mhp_flags); + rc = __add_memory_driver_managed(data->mgid, range.start, + range_len(&range), kmem_name, mhp_flags, + online_type); if (rc) { dev_warn(dev, "mapping%d: %#llx-%#llx memory add failed\n", diff --git a/drivers/dax/pmem.c b/drivers/dax/pmem.c index bee93066a849..a5925146b09f 100644 --- a/drivers/dax/pmem.c +++ b/drivers/dax/pmem.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright(c) 2016 - 2018 Intel Corporation. All rights reserved. 
*/ +#include <linux/memory_hotplug.h> #include <linux/memremap.h> #include <linux/module.h> #include "../nvdimm/pfn.h" @@ -63,6 +64,7 @@ static struct dev_dax *__dax_pmem_probe(struct device *dev) .pgmap = &pgmap, .size = range_len(&range), .memmap_on_memory = false, + .online_type = mhp_get_default_online_type(), }; return devm_create_dev_dax(&data); -- 2.52.0
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:04:36 -0500", "thread_id": "20260202175640.00003ef5@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
Move the pmem region driver logic from region.c into pmem_region.c. No functional changes. Signed-off-by: Gregory Price <gourry@gourry.net> --- drivers/cxl/core/Makefile | 1 + drivers/cxl/core/core.h | 1 + drivers/cxl/core/pmem_region.c | 191 +++++++++++++++++++++++++++++++++ drivers/cxl/core/region.c | 184 ------------------------------- 4 files changed, 193 insertions(+), 184 deletions(-) create mode 100644 drivers/cxl/core/pmem_region.c diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile index 5ad8fef210b5..23269c81fd44 100644 --- a/drivers/cxl/core/Makefile +++ b/drivers/cxl/core/Makefile @@ -17,6 +17,7 @@ cxl_core-y += cdat.o cxl_core-y += ras.o cxl_core-$(CONFIG_TRACING) += trace.o cxl_core-$(CONFIG_CXL_REGION) += region.o +cxl_core-$(CONFIG_CXL_REGION) += pmem_region.o cxl_core-$(CONFIG_CXL_MCE) += mce.o cxl_core-$(CONFIG_CXL_FEATURES) += features.o cxl_core-$(CONFIG_CXL_EDAC_MEM_FEATURES) += edac.o diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h index dd987ef2def5..26991de12d76 100644 --- a/drivers/cxl/core/core.h +++ b/drivers/cxl/core/core.h @@ -43,6 +43,7 @@ int cxl_get_poison_by_endpoint(struct cxl_port *port); struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa); u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, const struct cxl_memdev *cxlmd, u64 dpa); +int devm_cxl_add_pmem_region(struct cxl_region *cxlr); #else static inline u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, diff --git a/drivers/cxl/core/pmem_region.c b/drivers/cxl/core/pmem_region.c new file mode 100644 index 000000000000..81b66e548bb5 --- /dev/null +++ b/drivers/cxl/core/pmem_region.c @@ -0,0 +1,191 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright(c) 2022 Intel Corporation. All rights reserved. 
*/ +#include <linux/device.h> +#include <linux/slab.h> +#include <cxlmem.h> +#include <cxl.h> +#include "core.h" + +static void cxl_pmem_region_release(struct device *dev) +{ + struct cxl_pmem_region *cxlr_pmem = to_cxl_pmem_region(dev); + int i; + + for (i = 0; i < cxlr_pmem->nr_mappings; i++) { + struct cxl_memdev *cxlmd = cxlr_pmem->mapping[i].cxlmd; + + put_device(&cxlmd->dev); + } + + kfree(cxlr_pmem); +} + +static const struct attribute_group *cxl_pmem_region_attribute_groups[] = { + &cxl_base_attribute_group, + NULL, +}; + +const struct device_type cxl_pmem_region_type = { + .name = "cxl_pmem_region", + .release = cxl_pmem_region_release, + .groups = cxl_pmem_region_attribute_groups, +}; +bool is_cxl_pmem_region(struct device *dev) +{ + return dev->type == &cxl_pmem_region_type; +} +EXPORT_SYMBOL_NS_GPL(is_cxl_pmem_region, "CXL"); + +struct cxl_pmem_region *to_cxl_pmem_region(struct device *dev) +{ + if (dev_WARN_ONCE(dev, !is_cxl_pmem_region(dev), + "not a cxl_pmem_region device\n")) + return NULL; + return container_of(dev, struct cxl_pmem_region, dev); +} +EXPORT_SYMBOL_NS_GPL(to_cxl_pmem_region, "CXL"); +static struct lock_class_key cxl_pmem_region_key; + +static int cxl_pmem_region_alloc(struct cxl_region *cxlr) +{ + struct cxl_region_params *p = &cxlr->params; + struct cxl_nvdimm_bridge *cxl_nvb; + struct device *dev; + int i; + + guard(rwsem_read)(&cxl_rwsem.region); + if (p->state != CXL_CONFIG_COMMIT) + return -ENXIO; + + struct cxl_pmem_region *cxlr_pmem __free(kfree) = + kzalloc(struct_size(cxlr_pmem, mapping, p->nr_targets), GFP_KERNEL); + if (!cxlr_pmem) + return -ENOMEM; + + cxlr_pmem->hpa_range.start = p->res->start; + cxlr_pmem->hpa_range.end = p->res->end; + + /* Snapshot the region configuration underneath the cxl_rwsem.region */ + cxlr_pmem->nr_mappings = p->nr_targets; + for (i = 0; i < p->nr_targets; i++) { + struct cxl_endpoint_decoder *cxled = p->targets[i]; + struct cxl_memdev *cxlmd = cxled_to_memdev(cxled); + struct 
cxl_pmem_region_mapping *m = &cxlr_pmem->mapping[i]; + + /* + * Regions never span CXL root devices, so by definition the + * bridge for one device is the same for all. + */ + if (i == 0) { + cxl_nvb = cxl_find_nvdimm_bridge(cxlmd->endpoint); + if (!cxl_nvb) + return -ENODEV; + cxlr->cxl_nvb = cxl_nvb; + } + m->cxlmd = cxlmd; + get_device(&cxlmd->dev); + m->start = cxled->dpa_res->start; + m->size = resource_size(cxled->dpa_res); + m->position = i; + } + + dev = &cxlr_pmem->dev; + device_initialize(dev); + lockdep_set_class(&dev->mutex, &cxl_pmem_region_key); + device_set_pm_not_required(dev); + dev->parent = &cxlr->dev; + dev->bus = &cxl_bus_type; + dev->type = &cxl_pmem_region_type; + cxlr_pmem->cxlr = cxlr; + cxlr->cxlr_pmem = no_free_ptr(cxlr_pmem); + + return 0; +} + +static void cxlr_pmem_unregister(void *_cxlr_pmem) +{ + struct cxl_pmem_region *cxlr_pmem = _cxlr_pmem; + struct cxl_region *cxlr = cxlr_pmem->cxlr; + struct cxl_nvdimm_bridge *cxl_nvb = cxlr->cxl_nvb; + + /* + * Either the bridge is in ->remove() context under the device_lock(), + * or cxlr_release_nvdimm() is cancelling the bridge's release action + * for @cxlr_pmem and doing it itself (while manually holding the bridge + * lock). + */ + device_lock_assert(&cxl_nvb->dev); + cxlr->cxlr_pmem = NULL; + cxlr_pmem->cxlr = NULL; + device_unregister(&cxlr_pmem->dev); +} + +static void cxlr_release_nvdimm(void *_cxlr) +{ + struct cxl_region *cxlr = _cxlr; + struct cxl_nvdimm_bridge *cxl_nvb = cxlr->cxl_nvb; + + scoped_guard(device, &cxl_nvb->dev) { + if (cxlr->cxlr_pmem) + devm_release_action(&cxl_nvb->dev, cxlr_pmem_unregister, + cxlr->cxlr_pmem); + } + cxlr->cxl_nvb = NULL; + put_device(&cxl_nvb->dev); +} + +/** + * devm_cxl_add_pmem_region() - add a cxl_region-to-nd_region bridge + * @cxlr: parent CXL region for this pmem region bridge device + * + * Return: 0 on success negative error code on failure. 
+ */ +int devm_cxl_add_pmem_region(struct cxl_region *cxlr) +{ + struct cxl_pmem_region *cxlr_pmem; + struct cxl_nvdimm_bridge *cxl_nvb; + struct device *dev; + int rc; + + rc = cxl_pmem_region_alloc(cxlr); + if (rc) + return rc; + cxlr_pmem = cxlr->cxlr_pmem; + cxl_nvb = cxlr->cxl_nvb; + + dev = &cxlr_pmem->dev; + rc = dev_set_name(dev, "pmem_region%d", cxlr->id); + if (rc) + goto err; + + rc = device_add(dev); + if (rc) + goto err; + + dev_dbg(&cxlr->dev, "%s: register %s\n", dev_name(dev->parent), + dev_name(dev)); + + scoped_guard(device, &cxl_nvb->dev) { + if (cxl_nvb->dev.driver) + rc = devm_add_action_or_reset(&cxl_nvb->dev, + cxlr_pmem_unregister, + cxlr_pmem); + else + rc = -ENXIO; + } + + if (rc) + goto err_bridge; + + /* @cxlr carries a reference on @cxl_nvb until cxlr_release_nvdimm */ + return devm_add_action_or_reset(&cxlr->dev, cxlr_release_nvdimm, cxlr); + +err: + put_device(dev); +err_bridge: + put_device(&cxl_nvb->dev); + cxlr->cxl_nvb = NULL; + return rc; +} + + diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c index e4097c464ed3..fc56f8f03805 100644 --- a/drivers/cxl/core/region.c +++ b/drivers/cxl/core/region.c @@ -2747,46 +2747,6 @@ static ssize_t delete_region_store(struct device *dev, } DEVICE_ATTR_WO(delete_region); -static void cxl_pmem_region_release(struct device *dev) -{ - struct cxl_pmem_region *cxlr_pmem = to_cxl_pmem_region(dev); - int i; - - for (i = 0; i < cxlr_pmem->nr_mappings; i++) { - struct cxl_memdev *cxlmd = cxlr_pmem->mapping[i].cxlmd; - - put_device(&cxlmd->dev); - } - - kfree(cxlr_pmem); -} - -static const struct attribute_group *cxl_pmem_region_attribute_groups[] = { - &cxl_base_attribute_group, - NULL, -}; - -const struct device_type cxl_pmem_region_type = { - .name = "cxl_pmem_region", - .release = cxl_pmem_region_release, - .groups = cxl_pmem_region_attribute_groups, -}; - -bool is_cxl_pmem_region(struct device *dev) -{ - return dev->type == &cxl_pmem_region_type; -} 
-EXPORT_SYMBOL_NS_GPL(is_cxl_pmem_region, "CXL"); - -struct cxl_pmem_region *to_cxl_pmem_region(struct device *dev) -{ - if (dev_WARN_ONCE(dev, !is_cxl_pmem_region(dev), - "not a cxl_pmem_region device\n")) - return NULL; - return container_of(dev, struct cxl_pmem_region, dev); -} -EXPORT_SYMBOL_NS_GPL(to_cxl_pmem_region, "CXL"); - struct cxl_poison_context { struct cxl_port *port; int part; @@ -3236,64 +3196,6 @@ static int region_offset_to_dpa_result(struct cxl_region *cxlr, u64 offset, return -ENXIO; } -static struct lock_class_key cxl_pmem_region_key; - -static int cxl_pmem_region_alloc(struct cxl_region *cxlr) -{ - struct cxl_region_params *p = &cxlr->params; - struct cxl_nvdimm_bridge *cxl_nvb; - struct device *dev; - int i; - - guard(rwsem_read)(&cxl_rwsem.region); - if (p->state != CXL_CONFIG_COMMIT) - return -ENXIO; - - struct cxl_pmem_region *cxlr_pmem __free(kfree) = - kzalloc(struct_size(cxlr_pmem, mapping, p->nr_targets), GFP_KERNEL); - if (!cxlr_pmem) - return -ENOMEM; - - cxlr_pmem->hpa_range.start = p->res->start; - cxlr_pmem->hpa_range.end = p->res->end; - - /* Snapshot the region configuration underneath the cxl_rwsem.region */ - cxlr_pmem->nr_mappings = p->nr_targets; - for (i = 0; i < p->nr_targets; i++) { - struct cxl_endpoint_decoder *cxled = p->targets[i]; - struct cxl_memdev *cxlmd = cxled_to_memdev(cxled); - struct cxl_pmem_region_mapping *m = &cxlr_pmem->mapping[i]; - - /* - * Regions never span CXL root devices, so by definition the - * bridge for one device is the same for all. 
- */ - if (i == 0) { - cxl_nvb = cxl_find_nvdimm_bridge(cxlmd->endpoint); - if (!cxl_nvb) - return -ENODEV; - cxlr->cxl_nvb = cxl_nvb; - } - m->cxlmd = cxlmd; - get_device(&cxlmd->dev); - m->start = cxled->dpa_res->start; - m->size = resource_size(cxled->dpa_res); - m->position = i; - } - - dev = &cxlr_pmem->dev; - device_initialize(dev); - lockdep_set_class(&dev->mutex, &cxl_pmem_region_key); - device_set_pm_not_required(dev); - dev->parent = &cxlr->dev; - dev->bus = &cxl_bus_type; - dev->type = &cxl_pmem_region_type; - cxlr_pmem->cxlr = cxlr; - cxlr->cxlr_pmem = no_free_ptr(cxlr_pmem); - - return 0; -} - static void cxl_dax_region_release(struct device *dev) { struct cxl_dax_region *cxlr_dax = to_cxl_dax_region(dev); @@ -3357,92 +3259,6 @@ static struct cxl_dax_region *cxl_dax_region_alloc(struct cxl_region *cxlr) return cxlr_dax; } -static void cxlr_pmem_unregister(void *_cxlr_pmem) -{ - struct cxl_pmem_region *cxlr_pmem = _cxlr_pmem; - struct cxl_region *cxlr = cxlr_pmem->cxlr; - struct cxl_nvdimm_bridge *cxl_nvb = cxlr->cxl_nvb; - - /* - * Either the bridge is in ->remove() context under the device_lock(), - * or cxlr_release_nvdimm() is cancelling the bridge's release action - * for @cxlr_pmem and doing it itself (while manually holding the bridge - * lock). - */ - device_lock_assert(&cxl_nvb->dev); - cxlr->cxlr_pmem = NULL; - cxlr_pmem->cxlr = NULL; - device_unregister(&cxlr_pmem->dev); -} - -static void cxlr_release_nvdimm(void *_cxlr) -{ - struct cxl_region *cxlr = _cxlr; - struct cxl_nvdimm_bridge *cxl_nvb = cxlr->cxl_nvb; - - scoped_guard(device, &cxl_nvb->dev) { - if (cxlr->cxlr_pmem) - devm_release_action(&cxl_nvb->dev, cxlr_pmem_unregister, - cxlr->cxlr_pmem); - } - cxlr->cxl_nvb = NULL; - put_device(&cxl_nvb->dev); -} - -/** - * devm_cxl_add_pmem_region() - add a cxl_region-to-nd_region bridge - * @cxlr: parent CXL region for this pmem region bridge device - * - * Return: 0 on success negative error code on failure. 
- */ -static int devm_cxl_add_pmem_region(struct cxl_region *cxlr) -{ - struct cxl_pmem_region *cxlr_pmem; - struct cxl_nvdimm_bridge *cxl_nvb; - struct device *dev; - int rc; - - rc = cxl_pmem_region_alloc(cxlr); - if (rc) - return rc; - cxlr_pmem = cxlr->cxlr_pmem; - cxl_nvb = cxlr->cxl_nvb; - - dev = &cxlr_pmem->dev; - rc = dev_set_name(dev, "pmem_region%d", cxlr->id); - if (rc) - goto err; - - rc = device_add(dev); - if (rc) - goto err; - - dev_dbg(&cxlr->dev, "%s: register %s\n", dev_name(dev->parent), - dev_name(dev)); - - scoped_guard(device, &cxl_nvb->dev) { - if (cxl_nvb->dev.driver) - rc = devm_add_action_or_reset(&cxl_nvb->dev, - cxlr_pmem_unregister, - cxlr_pmem); - else - rc = -ENXIO; - } - - if (rc) - goto err_bridge; - - /* @cxlr carries a reference on @cxl_nvb until cxlr_release_nvdimm */ - return devm_add_action_or_reset(&cxlr->dev, cxlr_release_nvdimm, cxlr); - -err: - put_device(dev); -err_bridge: - put_device(&cxl_nvb->dev); - cxlr->cxl_nvb = NULL; - return rc; -} - static void cxlr_dax_unregister(void *_cxlr_dax) { struct cxl_dax_region *cxlr_dax = _cxlr_dax; -- 2.52.0
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:04:38 -0500", "thread_id": "20260202175640.00003ef5@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
Currently, CXL regions that create DAX devices have no mechanism to select the hotplug online policy for kmem regions at region creation time. Users must either rely on a build-time default or manually configure each memory block after hotplug occurs. Additionally, there is no explicit way to choose between device_dax and dax_kmem modes at region creation time - regions default to kmem.

This series addresses both issues by:

1. Plumbing an online_type parameter through the memory hotplug path, from mm/memory_hotplug through the DAX layer, enabling drivers to specify the desired policy (offline, online, online_movable).

2. Adding infrastructure for explicit dax driver selection (kmem vs device) when creating CXL DAX regions.

3. Introducing new CXL region drivers that provide a two-stage binding process with user-configurable policy between region creation and memory hotplug.

The new drivers are:

- cxl_devdax_region: Creates dax_regions that bind to the device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions

The sysram_region device exposes an 'online_type' sysfs attribute allowing users to configure the memory online type before hotplug:

  echo region0 > cxl_sysram_region/bind
  echo online_movable > sysram_region0/online_type
  echo sysram_region0 > cxl_dax_kmem_region/bind

This enables explicit control over both the dax driver mode and the memory hotplug policy for CXL memory regions.

In the future, with DCD regions, this will also provide a policy step which dictates how extents will be surfaced and managed (e.g. if the dc region is bound to the sysram driver, it will surface as system memory, while the devdax driver will surface extents as new devdax devices).
Move the CXL DAX region device infrastructure from region.c into a new dax_region.c file. No functional changes. Signed-off-by: Gregory Price <gourry@gourry.net> --- drivers/cxl/core/Makefile | 1 + drivers/cxl/core/core.h | 1 + drivers/cxl/core/dax_region.c | 113 ++++++++++++++++++++++++++++++++++ drivers/cxl/core/region.c | 102 ------------------------------ 4 files changed, 115 insertions(+), 102 deletions(-) create mode 100644 drivers/cxl/core/dax_region.c diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile index 23269c81fd44..36f284d7c500 100644 --- a/drivers/cxl/core/Makefile +++ b/drivers/cxl/core/Makefile @@ -17,6 +17,7 @@ cxl_core-y += cdat.o cxl_core-y += ras.o cxl_core-$(CONFIG_TRACING) += trace.o cxl_core-$(CONFIG_CXL_REGION) += region.o +cxl_core-$(CONFIG_CXL_REGION) += dax_region.o cxl_core-$(CONFIG_CXL_REGION) += pmem_region.o cxl_core-$(CONFIG_CXL_MCE) += mce.o cxl_core-$(CONFIG_CXL_FEATURES) += features.o diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h index 26991de12d76..217dd708a2a6 100644 --- a/drivers/cxl/core/core.h +++ b/drivers/cxl/core/core.h @@ -43,6 +43,7 @@ int cxl_get_poison_by_endpoint(struct cxl_port *port); struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa); u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, const struct cxl_memdev *cxlmd, u64 dpa); +int devm_cxl_add_dax_region(struct cxl_region *cxlr, enum dax_driver_type); int devm_cxl_add_pmem_region(struct cxl_region *cxlr); #else diff --git a/drivers/cxl/core/dax_region.c b/drivers/cxl/core/dax_region.c new file mode 100644 index 000000000000..0602db5f7248 --- /dev/null +++ b/drivers/cxl/core/dax_region.c @@ -0,0 +1,113 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright(c) 2022 Intel Corporation. All rights reserved. + * Copyright(c) 2026 Meta Technologies Inc. All rights reserved. 
+ */ +#include <linux/memory_hotplug.h> +#include <linux/device.h> +#include <linux/slab.h> +#include <cxlmem.h> +#include <cxl.h> +#include "core.h" + +static void cxl_dax_region_release(struct device *dev) +{ + struct cxl_dax_region *cxlr_dax = to_cxl_dax_region(dev); + + kfree(cxlr_dax); +} + +static const struct attribute_group *cxl_dax_region_attribute_groups[] = { + &cxl_base_attribute_group, + NULL, +}; + +const struct device_type cxl_dax_region_type = { + .name = "cxl_dax_region", + .release = cxl_dax_region_release, + .groups = cxl_dax_region_attribute_groups, +}; + +static bool is_cxl_dax_region(struct device *dev) +{ + return dev->type == &cxl_dax_region_type; +} + +struct cxl_dax_region *to_cxl_dax_region(struct device *dev) +{ + if (dev_WARN_ONCE(dev, !is_cxl_dax_region(dev), + "not a cxl_dax_region device\n")) + return NULL; + return container_of(dev, struct cxl_dax_region, dev); +} +EXPORT_SYMBOL_NS_GPL(to_cxl_dax_region, "CXL"); + +static struct lock_class_key cxl_dax_region_key; + +static struct cxl_dax_region *cxl_dax_region_alloc(struct cxl_region *cxlr) +{ + struct cxl_region_params *p = &cxlr->params; + struct cxl_dax_region *cxlr_dax; + struct device *dev; + + guard(rwsem_read)(&cxl_rwsem.region); + if (p->state != CXL_CONFIG_COMMIT) + return ERR_PTR(-ENXIO); + + cxlr_dax = kzalloc(sizeof(*cxlr_dax), GFP_KERNEL); + if (!cxlr_dax) + return ERR_PTR(-ENOMEM); + + cxlr_dax->hpa_range.start = p->res->start; + cxlr_dax->hpa_range.end = p->res->end; + + dev = &cxlr_dax->dev; + cxlr_dax->cxlr = cxlr; + device_initialize(dev); + lockdep_set_class(&dev->mutex, &cxl_dax_region_key); + device_set_pm_not_required(dev); + dev->parent = &cxlr->dev; + dev->bus = &cxl_bus_type; + dev->type = &cxl_dax_region_type; + + return cxlr_dax; +} + +static void cxlr_dax_unregister(void *_cxlr_dax) +{ + struct cxl_dax_region *cxlr_dax = _cxlr_dax; + + device_unregister(&cxlr_dax->dev); +} + +int devm_cxl_add_dax_region(struct cxl_region *cxlr, + enum dax_driver_type 
dax_driver) +{ + struct cxl_dax_region *cxlr_dax; + struct device *dev; + int rc; + + cxlr_dax = cxl_dax_region_alloc(cxlr); + if (IS_ERR(cxlr_dax)) + return PTR_ERR(cxlr_dax); + + cxlr_dax->online_type = mhp_get_default_online_type(); + cxlr_dax->dax_driver = dax_driver; + dev = &cxlr_dax->dev; + rc = dev_set_name(dev, "dax_region%d", cxlr->id); + if (rc) + goto err; + + rc = device_add(dev); + if (rc) + goto err; + + dev_dbg(&cxlr->dev, "%s: register %s\n", dev_name(dev->parent), + dev_name(dev)); + + return devm_add_action_or_reset(&cxlr->dev, cxlr_dax_unregister, + cxlr_dax); +err: + put_device(dev); + return rc; +} diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c index fc56f8f03805..61ec939c1462 100644 --- a/drivers/cxl/core/region.c +++ b/drivers/cxl/core/region.c @@ -3196,108 +3196,6 @@ static int region_offset_to_dpa_result(struct cxl_region *cxlr, u64 offset, return -ENXIO; } -static void cxl_dax_region_release(struct device *dev) -{ - struct cxl_dax_region *cxlr_dax = to_cxl_dax_region(dev); - - kfree(cxlr_dax); -} - -static const struct attribute_group *cxl_dax_region_attribute_groups[] = { - &cxl_base_attribute_group, - NULL, -}; - -const struct device_type cxl_dax_region_type = { - .name = "cxl_dax_region", - .release = cxl_dax_region_release, - .groups = cxl_dax_region_attribute_groups, -}; - -static bool is_cxl_dax_region(struct device *dev) -{ - return dev->type == &cxl_dax_region_type; -} - -struct cxl_dax_region *to_cxl_dax_region(struct device *dev) -{ - if (dev_WARN_ONCE(dev, !is_cxl_dax_region(dev), - "not a cxl_dax_region device\n")) - return NULL; - return container_of(dev, struct cxl_dax_region, dev); -} -EXPORT_SYMBOL_NS_GPL(to_cxl_dax_region, "CXL"); - -static struct lock_class_key cxl_dax_region_key; - -static struct cxl_dax_region *cxl_dax_region_alloc(struct cxl_region *cxlr) -{ - struct cxl_region_params *p = &cxlr->params; - struct cxl_dax_region *cxlr_dax; - struct device *dev; - - 
guard(rwsem_read)(&cxl_rwsem.region); - if (p->state != CXL_CONFIG_COMMIT) - return ERR_PTR(-ENXIO); - - cxlr_dax = kzalloc(sizeof(*cxlr_dax), GFP_KERNEL); - if (!cxlr_dax) - return ERR_PTR(-ENOMEM); - - cxlr_dax->hpa_range.start = p->res->start; - cxlr_dax->hpa_range.end = p->res->end; - - dev = &cxlr_dax->dev; - cxlr_dax->cxlr = cxlr; - device_initialize(dev); - lockdep_set_class(&dev->mutex, &cxl_dax_region_key); - device_set_pm_not_required(dev); - dev->parent = &cxlr->dev; - dev->bus = &cxl_bus_type; - dev->type = &cxl_dax_region_type; - - return cxlr_dax; -} - -static void cxlr_dax_unregister(void *_cxlr_dax) -{ - struct cxl_dax_region *cxlr_dax = _cxlr_dax; - - device_unregister(&cxlr_dax->dev); -} - -static int devm_cxl_add_dax_region(struct cxl_region *cxlr, - enum dax_driver_type dax_driver) -{ - struct cxl_dax_region *cxlr_dax; - struct device *dev; - int rc; - - cxlr_dax = cxl_dax_region_alloc(cxlr); - if (IS_ERR(cxlr_dax)) - return PTR_ERR(cxlr_dax); - - cxlr_dax->online_type = mhp_get_default_online_type(); - cxlr_dax->dax_driver = dax_driver; - dev = &cxlr_dax->dev; - rc = dev_set_name(dev, "dax_region%d", cxlr->id); - if (rc) - goto err; - - rc = device_add(dev); - if (rc) - goto err; - - dev_dbg(&cxlr->dev, "%s: register %s\n", dev_name(dev->parent), - dev_name(dev)); - - return devm_add_action_or_reset(&cxlr->dev, cxlr_dax_unregister, - cxlr_dax); -err: - put_device(dev); - return rc; -} - static int match_decoder_by_range(struct device *dev, const void *data) { const struct range *r1, *r2 = data; -- 2.52.0
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:04:39 -0500", "thread_id": "20260202175640.00003ef5@huawei.com.mbox.gz" }
Add a new cxl_devdax_region driver that probes CXL regions in device dax mode and creates dax_region devices. This allows explicit binding to the device_dax dax driver instead of the kmem driver. Exports to_cxl_region() to core.h so it can be used by the driver. Signed-off-by: Gregory Price <gourry@gourry.net> --- drivers/cxl/core/core.h | 2 ++ drivers/cxl/core/dax_region.c | 16 ++++++++++++++++ drivers/cxl/core/region.c | 21 +++++++++++++++++---- drivers/cxl/cxl.h | 1 + 4 files changed, 36 insertions(+), 4 deletions(-) diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h index 217dd708a2a6..ea4df8abc2ad 100644 --- a/drivers/cxl/core/core.h +++ b/drivers/cxl/core/core.h @@ -46,6 +46,8 @@ u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, const struct cxl_memdev *cxlmd, int devm_cxl_add_dax_region(struct cxl_region *cxlr, enum dax_driver_type); int devm_cxl_add_pmem_region(struct cxl_region *cxlr); +extern struct cxl_driver cxl_devdax_region_driver; + #else static inline u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, const struct cxl_memdev *cxlmd, u64 dpa) diff --git a/drivers/cxl/core/dax_region.c b/drivers/cxl/core/dax_region.c index 0602db5f7248..391d51e5ec37 100644 --- a/drivers/cxl/core/dax_region.c +++ b/drivers/cxl/core/dax_region.c @@ -111,3 +111,19 @@ int devm_cxl_add_dax_region(struct cxl_region *cxlr, put_device(dev); return rc; } + +static int cxl_devdax_region_driver_probe(struct device *dev) +{ + struct cxl_region *cxlr = to_cxl_region(dev); + + if (cxlr->mode != CXL_PARTMODE_RAM) + return -ENODEV; + + return devm_cxl_add_dax_region(cxlr, DAXDRV_DEVICE_TYPE); +} + +struct cxl_driver cxl_devdax_region_driver = { + .name = "cxl_devdax_region", + .probe = cxl_devdax_region_driver_probe, + .id = CXL_DEVICE_REGION, +}; diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c index 61ec939c1462..6200ca1cc2dd 100644 --- a/drivers/cxl/core/region.c +++ b/drivers/cxl/core/region.c @@ -39,8 +39,6 @@ */ static nodemask_t nodemask_region_seen = 
NODE_MASK_NONE; -static struct cxl_region *to_cxl_region(struct device *dev); - #define __ACCESS_ATTR_RO(_level, _name) { \ .attr = { .name = __stringify(_name), .mode = 0444 }, \ .show = _name##_access##_level##_show, \ @@ -2430,7 +2428,7 @@ bool is_cxl_region(struct device *dev) } EXPORT_SYMBOL_NS_GPL(is_cxl_region, "CXL"); -static struct cxl_region *to_cxl_region(struct device *dev) +struct cxl_region *to_cxl_region(struct device *dev) { if (dev_WARN_ONCE(dev, dev->type != &cxl_region_type, "not a cxl_region device\n")) @@ -3726,11 +3724,26 @@ static struct cxl_driver cxl_region_driver = { int cxl_region_init(void) { - return cxl_driver_register(&cxl_region_driver); + int rc; + + rc = cxl_driver_register(&cxl_region_driver); + if (rc) + return rc; + + rc = cxl_driver_register(&cxl_devdax_region_driver); + if (rc) + goto err_dax; + + return 0; + +err_dax: + cxl_driver_unregister(&cxl_region_driver); + return rc; } void cxl_region_exit(void) { + cxl_driver_unregister(&cxl_devdax_region_driver); cxl_driver_unregister(&cxl_region_driver); } diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index c06a239c0008..674d5f870c70 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -859,6 +859,7 @@ int cxl_dvsec_rr_decode(struct cxl_dev_state *cxlds, struct cxl_endpoint_dvsec_info *info); bool is_cxl_region(struct device *dev); +struct cxl_region *to_cxl_region(struct device *dev); extern const struct bus_type cxl_bus_type; -- 2.52.0
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:04:40 -0500", "thread_id": "20260202175640.00003ef5@huawei.com.mbox.gz" }
CXL regions may wish not to auto-configure their memory as dax kmem, but
the current plumbing defaults all cxl-created dax devices to the kmem
driver. This exposes them to hotplug policy, even if the user intends to
use the memory as a dax device.

Add plumbing to allow CXL drivers to select whether a DAX region should
default to kmem (DAXDRV_KMEM_TYPE) or device (DAXDRV_DEVICE_TYPE). Add a
'dax_driver' field to struct cxl_dax_region and update
devm_cxl_add_dax_region() to take a dax_driver_type parameter.

In drivers/dax/cxl.c, the IORESOURCE_DAX_KMEM flag used by the dax driver
matching code is now set conditionally based on dax_region->dax_driver.

Export `enum dax_driver_type` to linux/dax.h for use in the cxl driver.

All current callers pass DAXDRV_KMEM_TYPE for backward compatibility.

Cc: John Groves <john@jagalactic.com>
Signed-off-by: Gregory Price <gourry@gourry.net>
---
 drivers/cxl/core/core.h   | 1 +
 drivers/cxl/core/region.c | 6 ++++--
 drivers/cxl/cxl.h         | 2 ++
 drivers/dax/bus.h         | 6 +-----
 drivers/dax/cxl.c         | 6 +++++-
 include/linux/dax.h       | 5 +++++
 6 files changed, 18 insertions(+), 8 deletions(-)

diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
index 1fb66132b777..dd987ef2def5 100644
--- a/drivers/cxl/core/core.h
+++ b/drivers/cxl/core/core.h
@@ -6,6 +6,7 @@
 
 #include <cxl/mailbox.h>
 #include <linux/rwsem.h>
+#include <linux/dax.h>
 
 extern const struct device_type cxl_nvdimm_bridge_type;
 extern const struct device_type cxl_nvdimm_type;
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index eef5d5fe3f95..e4097c464ed3 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -3450,7 +3450,8 @@ static void cxlr_dax_unregister(void *_cxlr_dax)
 	device_unregister(&cxlr_dax->dev);
 }
 
-static int devm_cxl_add_dax_region(struct cxl_region *cxlr)
+static int devm_cxl_add_dax_region(struct cxl_region *cxlr,
+				   enum dax_driver_type dax_driver)
 {
 	struct cxl_dax_region *cxlr_dax;
 	struct device *dev;
@@ -3461,6 +3462,7 @@ static int devm_cxl_add_dax_region(struct cxl_region *cxlr)
 		return PTR_ERR(cxlr_dax);
 
 	cxlr_dax->online_type = mhp_get_default_online_type();
+	cxlr_dax->dax_driver = dax_driver;
 	dev = &cxlr_dax->dev;
 	rc = dev_set_name(dev, "dax_region%d", cxlr->id);
 	if (rc)
@@ -3994,7 +3996,7 @@ static int cxl_region_probe(struct device *dev)
 				p->res->start, p->res->end, cxlr,
 				is_system_ram) > 0)
 			return 0;
-		return devm_cxl_add_dax_region(cxlr);
+		return devm_cxl_add_dax_region(cxlr, DAXDRV_KMEM_TYPE);
 	default:
 		dev_dbg(&cxlr->dev, "unsupported region mode: %d\n",
 			cxlr->mode);
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index 07d57d13f4c7..c06a239c0008 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -12,6 +12,7 @@
 #include <linux/node.h>
 #include <linux/io.h>
 #include <linux/range.h>
+#include <linux/dax.h>
 
 extern const struct nvdimm_security_ops *cxl_security_ops;
 
@@ -592,6 +593,7 @@ struct cxl_dax_region {
 	struct cxl_region *cxlr;
 	struct range hpa_range;
 	int online_type; /* MMOP_ value for kmem driver */
+	enum dax_driver_type dax_driver;
 };
 
 /**
diff --git a/drivers/dax/bus.h b/drivers/dax/bus.h
index 4ac92a4edfe7..9144593b4029 100644
--- a/drivers/dax/bus.h
+++ b/drivers/dax/bus.h
@@ -2,6 +2,7 @@
 /* Copyright(c) 2016 - 2018 Intel Corporation. All rights reserved. */
 #ifndef __DAX_BUS_H__
 #define __DAX_BUS_H__
+#include <linux/dax.h>
 #include <linux/device.h>
 #include <linux/range.h>
 
@@ -29,11 +30,6 @@ struct dev_dax_data {
 
 struct dev_dax *devm_create_dev_dax(struct dev_dax_data *data);
 
-enum dax_driver_type {
-	DAXDRV_KMEM_TYPE,
-	DAXDRV_DEVICE_TYPE,
-};
-
 struct dax_device_driver {
 	struct device_driver drv;
 	struct list_head ids;
diff --git a/drivers/dax/cxl.c b/drivers/dax/cxl.c
index 856a0cd24f3b..b13ecc2f9806 100644
--- a/drivers/dax/cxl.c
+++ b/drivers/dax/cxl.c
@@ -11,14 +11,18 @@ static int cxl_dax_region_probe(struct device *dev)
 	struct cxl_dax_region *cxlr_dax = to_cxl_dax_region(dev);
 	int nid = phys_to_target_node(cxlr_dax->hpa_range.start);
 	struct cxl_region *cxlr = cxlr_dax->cxlr;
+	unsigned long flags = 0;
 	struct dax_region *dax_region;
 	struct dev_dax_data data;
 
+	if (cxlr_dax->dax_driver == DAXDRV_KMEM_TYPE)
+		flags |= IORESOURCE_DAX_KMEM;
+
 	if (nid == NUMA_NO_NODE)
 		nid = memory_add_physaddr_to_nid(cxlr_dax->hpa_range.start);
 
 	dax_region = alloc_dax_region(dev, cxlr->id, &cxlr_dax->hpa_range, nid,
-				      PMD_SIZE, IORESOURCE_DAX_KMEM);
+				      PMD_SIZE, flags);
 	if (!dax_region)
 		return -ENOMEM;
 
diff --git a/include/linux/dax.h b/include/linux/dax.h
index bf103f317cac..e62f92d0ace1 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -19,6 +19,11 @@ enum dax_access_mode {
 	DAX_RECOVERY_WRITE,
 };
 
+enum dax_driver_type {
+	DAXDRV_KMEM_TYPE,
+	DAXDRV_DEVICE_TYPE,
+};
+
 struct dax_operations {
 	/*
 	 * direct_access: translate a device-relative
-- 
2.52.0
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:04:37 -0500", "thread_id": "20260202175640.00003ef5@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
Explain the binding process for sysram and devdax regions, which are
explicit about which dax driver to use during region creation.

Cc: Jonathan Corbet <corbet@lwn.net>
Signed-off-by: Gregory Price <gourry@gourry.net>
---
 .../driver-api/cxl/linux/cxl-driver.rst       | 43 +++++++++++++++++++
 .../driver-api/cxl/linux/dax-driver.rst       | 29 +++++++++++++
 2 files changed, 72 insertions(+)

diff --git a/Documentation/driver-api/cxl/linux/cxl-driver.rst b/Documentation/driver-api/cxl/linux/cxl-driver.rst
index dd6dd17dc536..1f857345e896 100644
--- a/Documentation/driver-api/cxl/linux/cxl-driver.rst
+++ b/Documentation/driver-api/cxl/linux/cxl-driver.rst
@@ -445,6 +445,49 @@ for more details. ::
 
   dax0.0  devtype  modalias  uevent  dax_region  driver  subsystem
 
+DAX regions are created when a CXL RAM region is bound to one of the
+following drivers:
+
+* :code:`cxl_devdax_region` - Creates a dax_region for device_dax mode.
+  The resulting DAX device provides direct userspace access via
+  :code:`/dev/daxN.Y`.
+
+* :code:`cxl_dax_kmem_region` - Creates a dax_region for kmem mode via a
+  sysram_region intermediate device. See `Sysram Region`_ below.
+
+Sysram Region
+~~~~~~~~~~~~~
+A `Sysram Region` is an intermediate device between a CXL `Memory Region`
+and a `DAX Region` for kmem mode. It is created when a CXL RAM region is
+bound to the :code:`cxl_sysram_region` driver.
+
+The sysram_region device provides an interposition point where users can
+configure memory hotplug policy before the underlying dax_region is created
+and memory is hotplugged to the system.
+
+The device hierarchy for kmem mode is::
+
+  regionX -> sysram_regionX -> dax_regionX -> daxX.Y
+
+The sysram_region exposes an :code:`online_type` attribute that controls
+how memory will be onlined when the dax_kmem driver binds:
+
+* :code:`invalid` - Not configured (default). Blocks driver binding.
+* :code:`offline` - Memory will not be onlined automatically.
+* :code:`online` - Memory will be onlined in ZONE_NORMAL.
+* :code:`online_movable` - Memory will be onlined in ZONE_MOVABLE.
+
+Example two-stage binding process::
+
+  # Bind region to sysram_region driver
+  echo region0 > /sys/bus/cxl/drivers/cxl_sysram_region/bind
+
+  # Configure memory online type
+  echo online_movable > /sys/bus/cxl/devices/sysram_region0/online_type
+
+  # Bind sysram_region to dax_kmem_region driver
+  echo sysram_region0 > /sys/bus/cxl/drivers/cxl_dax_kmem_region/bind
+
 Mailbox Interfaces
 ------------------
 
 A mailbox command interface for each device is exposed in ::
diff --git a/Documentation/driver-api/cxl/linux/dax-driver.rst b/Documentation/driver-api/cxl/linux/dax-driver.rst
index 10d953a2167b..2b8e21736292 100644
--- a/Documentation/driver-api/cxl/linux/dax-driver.rst
+++ b/Documentation/driver-api/cxl/linux/dax-driver.rst
@@ -17,6 +17,35 @@ The DAX subsystem exposes this ability through the `cxl_dax_region`
 driver. A `dax_region` provides the translation between a CXL
 `memory_region` and a `DAX Device`.
 
+CXL DAX Region Drivers
+======================
+CXL provides multiple drivers for creating DAX regions, each suited for
+different use cases:
+
+cxl_devdax_region
+-----------------
+The :code:`cxl_devdax_region` driver creates a dax_region configured for
+device_dax mode. When a CXL RAM region is bound to this driver, the
+resulting DAX device provides direct userspace access via :code:`/dev/daxN.Y`.
+
+Device hierarchy::
+
+  regionX -> dax_regionX -> daxX.Y
+
+This is the simplest path for applications that want to manage CXL memory
+directly from userspace.
+
+cxl_dax_kmem_region
+-------------------
+For kmem mode, CXL provides a two-stage binding process that allows users
+to configure memory hotplug policy before memory is added to the system.
+
+The :code:`cxl_dax_kmem_region` driver then binds a sysram_region
+device and creates a dax_region configured for kmem mode.
+
+The :code:`online_type` policy will be passed from sysram_region to
+the dax kmem driver for use when hotplugging the memory.
+
 DAX Device
 ==========
 A `DAX Device` is a file-like interface exposed in :code:`/dev/daxN.Y`. A
-- 
2.52.0
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:04:42 -0500", "thread_id": "20260202175640.00003ef5@huawei.com.mbox.gz" }
In the current kmem driver binding process, the only way for users to
define hotplug policy is via a build-time option, or by not onlining
memory by default and setting each individual memory block online after
hotplug occurs.

We can solve this with a configuration step between region-probe and
dax-probe.

Add the infrastructure for a two-stage driver binding for kmem-mode dax
regions. The cxl_dax_kmem_region driver probes cxl_sysram_region devices
and creates a cxl_dax_region with dax_driver=kmem. This creates an
interposition step where users can configure policy.

Device hierarchy:

  region0 -> sysram_region0 -> dax_region0 -> dax0.0

The sysram_region device exposes a sysfs 'online_type' attribute that
allows users to configure the memory online type before the underlying
dax_region is created and memory is hotplugged.

sysram_region0/online_type:
  invalid:        not configured, blocks probe
  offline:        memory will not be onlined automatically
  online:         memory will be onlined in ZONE_NORMAL
  online_movable: memory will be onlined in ZONE_MOVABLE

The device initializes with online_type=invalid which prevents the
cxl_dax_kmem_region driver from binding until the user explicitly
configures a valid online_type.

This enables a two-step binding process:

  echo region0 > cxl_sysram_region/bind
  echo online_movable > sysram_region0/online_type
  echo sysram_region0 > cxl_dax_kmem_region/bind

Signed-off-by: Gregory Price <gourry@gourry.net>
---
 Documentation/ABI/testing/sysfs-bus-cxl |  21 +++
 drivers/cxl/core/Makefile               |   1 +
 drivers/cxl/core/core.h                 |   6 +
 drivers/cxl/core/dax_region.c           |  50 +++++++
 drivers/cxl/core/port.c                 |   2 +
 drivers/cxl/core/region.c               |  14 ++
 drivers/cxl/core/sysram_region.c        | 180 ++++++++++++++++++++++++
 drivers/cxl/cxl.h                       |  25 ++++
 8 files changed, 299 insertions(+)
 create mode 100644 drivers/cxl/core/sysram_region.c

diff --git a/Documentation/ABI/testing/sysfs-bus-cxl b/Documentation/ABI/testing/sysfs-bus-cxl
index c80a1b5a03db..a051cb86bdfc 100644
--- a/Documentation/ABI/testing/sysfs-bus-cxl
+++ b/Documentation/ABI/testing/sysfs-bus-cxl
@@ -624,3 +624,24 @@ Description:
 		The count is persistent across power loss and wraps back to 0
 		upon overflow. If this file is not present, the device does
 		not have the necessary support for dirty tracking.
+
+
+What:		/sys/bus/cxl/devices/sysram_regionZ/online_type
+Date:		January, 2026
+KernelVersion:	v7.1
+Contact:	linux-cxl@vger.kernel.org
+Description:
+		(RW) This attribute allows users to configure the memory online
+		type before the underlying dax_region engages in hotplug.
+
+		Valid values:
+		'invalid': Not configured (default). Blocks probe.
+		'offline': Memory will not be onlined automatically.
+		'online' : Memory will be onlined in ZONE_NORMAL.
+		'online_movable': Memory will be onlined in ZONE_MOVABLE.
+
+		The device initializes with online_type='invalid' which prevents
+		the cxl_dax_kmem_region driver from binding until the user
+		explicitly configures a valid online_type. This enables a
+		two-step binding process that gives users control over memory
+		hotplug policy before memory is added to the system.
diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile
index 36f284d7c500..faf662c7d88b 100644
--- a/drivers/cxl/core/Makefile
+++ b/drivers/cxl/core/Makefile
@@ -18,6 +18,7 @@ cxl_core-y += ras.o
 cxl_core-$(CONFIG_TRACING) += trace.o
 cxl_core-$(CONFIG_CXL_REGION) += region.o
 cxl_core-$(CONFIG_CXL_REGION) += dax_region.o
+cxl_core-$(CONFIG_CXL_REGION) += sysram_region.o
 cxl_core-$(CONFIG_CXL_REGION) += pmem_region.o
 cxl_core-$(CONFIG_CXL_MCE) += mce.o
 cxl_core-$(CONFIG_CXL_FEATURES) += features.o
diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
index ea4df8abc2ad..04b32015e9b1 100644
--- a/drivers/cxl/core/core.h
+++ b/drivers/cxl/core/core.h
@@ -26,6 +26,7 @@ extern struct device_attribute dev_attr_delete_region;
 extern struct device_attribute dev_attr_region;
 extern const struct device_type cxl_pmem_region_type;
 extern const struct device_type cxl_dax_region_type;
+extern const struct device_type cxl_sysram_region_type;
 extern const struct device_type cxl_region_type;
 
 int cxl_decoder_detach(struct cxl_region *cxlr,
@@ -37,6 +38,7 @@ int cxl_decoder_detach(struct cxl_region *cxlr,
 #define SET_CXL_REGION_ATTR(x) (&dev_attr_##x.attr),
 #define CXL_PMEM_REGION_TYPE(x) (&cxl_pmem_region_type)
 #define CXL_DAX_REGION_TYPE(x) (&cxl_dax_region_type)
+#define CXL_SYSRAM_REGION_TYPE(x) (&cxl_sysram_region_type)
 int cxl_region_init(void);
 void cxl_region_exit(void);
 int cxl_get_poison_by_endpoint(struct cxl_port *port);
@@ -44,9 +46,12 @@ struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa);
 u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, const struct cxl_memdev *cxlmd,
 		   u64 dpa);
 int devm_cxl_add_dax_region(struct cxl_region *cxlr, enum dax_driver_type);
+int devm_cxl_add_sysram_region(struct cxl_region *cxlr);
 int devm_cxl_add_pmem_region(struct cxl_region *cxlr);
 
 extern struct cxl_driver cxl_devdax_region_driver;
+extern struct cxl_driver cxl_dax_kmem_region_driver;
+extern struct cxl_driver cxl_sysram_region_driver;
 
 #else
 static inline u64 cxl_dpa_to_hpa(struct cxl_region *cxlr,
@@ -81,6 +86,7 @@ static inline void cxl_region_exit(void)
 #define SET_CXL_REGION_ATTR(x)
 #define CXL_PMEM_REGION_TYPE(x) NULL
 #define CXL_DAX_REGION_TYPE(x) NULL
+#define CXL_SYSRAM_REGION_TYPE(x) NULL
 #endif
 
 struct cxl_send_command;
diff --git a/drivers/cxl/core/dax_region.c b/drivers/cxl/core/dax_region.c
index 391d51e5ec37..a379f5b85e3d 100644
--- a/drivers/cxl/core/dax_region.c
+++ b/drivers/cxl/core/dax_region.c
@@ -127,3 +127,53 @@ struct cxl_driver cxl_devdax_region_driver = {
 	.probe = cxl_devdax_region_driver_probe,
 	.id = CXL_DEVICE_REGION,
 };
+
+static int cxl_dax_kmem_region_driver_probe(struct device *dev)
+{
+	struct cxl_sysram_region *cxlr_sysram = to_cxl_sysram_region(dev);
+	struct cxl_dax_region *cxlr_dax;
+	struct cxl_region *cxlr;
+	int rc;
+
+	if (!cxlr_sysram)
+		return -ENODEV;
+
+	/* Require explicit online_type configuration before binding */
+	if (cxlr_sysram->online_type == -1)
+		return -ENODEV;
+
+	cxlr = cxlr_sysram->cxlr;
+
+	cxlr_dax = cxl_dax_region_alloc(cxlr);
+	if (IS_ERR(cxlr_dax))
+		return PTR_ERR(cxlr_dax);
+
+	/* Inherit online_type from parent sysram_region */
+	cxlr_dax->online_type = cxlr_sysram->online_type;
+	cxlr_dax->dax_driver = DAXDRV_KMEM_TYPE;
+
+	/* Parent is the sysram_region device */
+	cxlr_dax->dev.parent = dev;
+
+	rc = dev_set_name(&cxlr_dax->dev, "dax_region%d", cxlr->id);
+	if (rc)
+		goto err;
+
+	rc = device_add(&cxlr_dax->dev);
+	if (rc)
+		goto err;
+
+	dev_dbg(dev, "%s: register %s\n", dev_name(dev),
+		dev_name(&cxlr_dax->dev));
+
+	return devm_add_action_or_reset(dev, cxlr_dax_unregister, cxlr_dax);
+err:
+	put_device(&cxlr_dax->dev);
+	return rc;
+}
+
+struct cxl_driver cxl_dax_kmem_region_driver = {
+	.name = "cxl_dax_kmem_region",
+	.probe = cxl_dax_kmem_region_driver_probe,
+	.id = CXL_DEVICE_SYSRAM_REGION,
+};
diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c
index 3310dbfae9d6..dc7262a5efd6 100644
--- a/drivers/cxl/core/port.c
+++ b/drivers/cxl/core/port.c
@@ -66,6 +66,8 @@ static int cxl_device_id(const struct device *dev)
 		return CXL_DEVICE_PMEM_REGION;
 	if (dev->type == CXL_DAX_REGION_TYPE())
 		return CXL_DEVICE_DAX_REGION;
+	if (dev->type == CXL_SYSRAM_REGION_TYPE())
+		return CXL_DEVICE_SYSRAM_REGION;
 	if (is_cxl_port(dev)) {
 		if (is_cxl_root(to_cxl_port(dev)))
 			return CXL_DEVICE_ROOT;
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index 6200ca1cc2dd..8bef91dc726c 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -3734,8 +3734,20 @@ int cxl_region_init(void)
 	if (rc)
 		goto err_dax;
 
+	rc = cxl_driver_register(&cxl_sysram_region_driver);
+	if (rc)
+		goto err_sysram;
+
+	rc = cxl_driver_register(&cxl_dax_kmem_region_driver);
+	if (rc)
+		goto err_dax_kmem;
+
 	return 0;
 
+err_dax_kmem:
+	cxl_driver_unregister(&cxl_sysram_region_driver);
+err_sysram:
+	cxl_driver_unregister(&cxl_devdax_region_driver);
 err_dax:
 	cxl_driver_unregister(&cxl_region_driver);
 	return rc;
@@ -3743,6 +3755,8 @@ int cxl_region_init(void)
 
 void cxl_region_exit(void)
 {
+	cxl_driver_unregister(&cxl_dax_kmem_region_driver);
+	cxl_driver_unregister(&cxl_sysram_region_driver);
 	cxl_driver_unregister(&cxl_devdax_region_driver);
 	cxl_driver_unregister(&cxl_region_driver);
 }
diff --git a/drivers/cxl/core/sysram_region.c b/drivers/cxl/core/sysram_region.c
new file mode 100644
index 000000000000..5665db238d0f
--- /dev/null
+++ b/drivers/cxl/core/sysram_region.c
@@ -0,0 +1,180 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2026 Meta Platforms, Inc. All rights reserved. */
+/*
+ * CXL Sysram Region - Intermediate device for kmem hotplug configuration
+ *
+ * This provides an intermediate device between cxl_region and cxl_dax_region
+ * that allows users to configure memory hotplug parameters (like online_type)
+ * before the underlying dax_region is created and memory is hotplugged.
+ */
+
+#include <linux/memory_hotplug.h>
+#include <linux/device.h>
+#include <linux/slab.h>
+#include <cxlmem.h>
+#include <cxl.h>
+#include "core.h"
+
+static void cxl_sysram_region_release(struct device *dev)
+{
+	struct cxl_sysram_region *cxlr_sysram = to_cxl_sysram_region(dev);
+
+	kfree(cxlr_sysram);
+}
+
+static ssize_t online_type_show(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	struct cxl_sysram_region *cxlr_sysram = to_cxl_sysram_region(dev);
+
+	switch (cxlr_sysram->online_type) {
+	case MMOP_OFFLINE:
+		return sysfs_emit(buf, "offline\n");
+	case MMOP_ONLINE:
+		return sysfs_emit(buf, "online\n");
+	case MMOP_ONLINE_MOVABLE:
+		return sysfs_emit(buf, "online_movable\n");
+	default:
+		return sysfs_emit(buf, "invalid\n");
+	}
+}
+
+static ssize_t online_type_store(struct device *dev,
+				 struct device_attribute *attr,
+				 const char *buf, size_t len)
+{
+	struct cxl_sysram_region *cxlr_sysram = to_cxl_sysram_region(dev);
+
+	if (sysfs_streq(buf, "offline"))
+		cxlr_sysram->online_type = MMOP_OFFLINE;
+	else if (sysfs_streq(buf, "online"))
+		cxlr_sysram->online_type = MMOP_ONLINE;
+	else if (sysfs_streq(buf, "online_movable"))
+		cxlr_sysram->online_type = MMOP_ONLINE_MOVABLE;
+	else
+		return -EINVAL;
+
+	return len;
+}
+
+static DEVICE_ATTR_RW(online_type);
+
+static struct attribute *cxl_sysram_region_attrs[] = {
+	&dev_attr_online_type.attr,
+	NULL,
+};
+
+static const struct attribute_group cxl_sysram_region_attribute_group = {
+	.attrs = cxl_sysram_region_attrs,
+};
+
+static const struct attribute_group *cxl_sysram_region_attribute_groups[] = {
+	&cxl_base_attribute_group,
+	&cxl_sysram_region_attribute_group,
+	NULL,
+};
+
+const struct device_type cxl_sysram_region_type = {
+	.name = "cxl_sysram_region",
+	.release = cxl_sysram_region_release,
+	.groups = cxl_sysram_region_attribute_groups,
+};
+
+static bool is_cxl_sysram_region(struct device *dev)
+{
+	return dev->type == &cxl_sysram_region_type;
+}
+
+struct cxl_sysram_region *to_cxl_sysram_region(struct device *dev)
+{
+	if (dev_WARN_ONCE(dev, !is_cxl_sysram_region(dev),
+			  "not a cxl_sysram_region device\n"))
+		return NULL;
+	return container_of(dev, struct cxl_sysram_region, dev);
+}
+EXPORT_SYMBOL_NS_GPL(to_cxl_sysram_region, "CXL");
+
+static struct lock_class_key cxl_sysram_region_key;
+
+static struct cxl_sysram_region *cxl_sysram_region_alloc(struct cxl_region *cxlr)
+{
+	struct cxl_region_params *p = &cxlr->params;
+	struct cxl_sysram_region *cxlr_sysram;
+	struct device *dev;
+
+	guard(rwsem_read)(&cxl_rwsem.region);
+	if (p->state != CXL_CONFIG_COMMIT)
+		return ERR_PTR(-ENXIO);
+
+	cxlr_sysram = kzalloc(sizeof(*cxlr_sysram), GFP_KERNEL);
+	if (!cxlr_sysram)
+		return ERR_PTR(-ENOMEM);
+
+	cxlr_sysram->hpa_range.start = p->res->start;
+	cxlr_sysram->hpa_range.end = p->res->end;
+	cxlr_sysram->online_type = -1;	/* Require explicit configuration */
+
+	dev = &cxlr_sysram->dev;
+	cxlr_sysram->cxlr = cxlr;
+	device_initialize(dev);
+	lockdep_set_class(&dev->mutex, &cxl_sysram_region_key);
+	device_set_pm_not_required(dev);
+	dev->parent = &cxlr->dev;
+	dev->bus = &cxl_bus_type;
+	dev->type = &cxl_sysram_region_type;
+
+	return cxlr_sysram;
+}
+
+static void cxlr_sysram_unregister(void *_cxlr_sysram)
+{
+	struct cxl_sysram_region *cxlr_sysram = _cxlr_sysram;
+
+	device_unregister(&cxlr_sysram->dev);
+}
+
+int devm_cxl_add_sysram_region(struct cxl_region *cxlr)
+{
+	struct cxl_sysram_region *cxlr_sysram;
+	struct device *dev;
+	int rc;
+
+	cxlr_sysram = cxl_sysram_region_alloc(cxlr);
+	if (IS_ERR(cxlr_sysram))
+		return PTR_ERR(cxlr_sysram);
+
+	dev = &cxlr_sysram->dev;
+	rc = dev_set_name(dev, "sysram_region%d", cxlr->id);
+	if (rc)
+		goto err;
+
+	rc = device_add(dev);
+	if (rc)
+		goto err;
+
+	dev_dbg(&cxlr->dev, "%s: register %s\n", dev_name(dev->parent),
+		dev_name(dev));
+
+	return devm_add_action_or_reset(&cxlr->dev, cxlr_sysram_unregister,
+					cxlr_sysram);
+err:
+	put_device(dev);
+	return rc;
+}
+
+static int cxl_sysram_region_driver_probe(struct device *dev)
+{
+	struct cxl_region *cxlr = to_cxl_region(dev);
+
+	/* Only handle RAM regions */
+	if (cxlr->mode != CXL_PARTMODE_RAM)
+		return -ENODEV;
+
+	return devm_cxl_add_sysram_region(cxlr);
+}
+
+struct cxl_driver cxl_sysram_region_driver = {
+	.name = "cxl_sysram_region",
+	.probe = cxl_sysram_region_driver_probe,
+	.id = CXL_DEVICE_REGION,
+};
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index 674d5f870c70..1544c27e9c89 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -596,6 +596,25 @@ struct cxl_dax_region {
 	enum dax_driver_type dax_driver;
 };
 
+/**
+ * struct cxl_sysram_region - CXL RAM region for system memory hotplug
+ * @dev: device for this sysram_region
+ * @cxlr: parent cxl_region
+ * @hpa_range: Host physical address range for the region
+ * @online_type: Memory online type (MMOP_* 0-3, or -1 if not configured)
+ *
+ * Intermediate device that allows configuration of memory hotplug
+ * parameters before the underlying dax_region is created. The device
+ * starts with online_type=-1 which prevents the cxl_dax_kmem_region
+ * driver from binding until the user explicitly sets online_type.
+ */
+struct cxl_sysram_region {
+	struct device dev;
+	struct cxl_region *cxlr;
+	struct range hpa_range;
+	int online_type;
+};
+
 /**
  * struct cxl_port - logical collection of upstream port devices and
  *                   downstream port devices to construct a CXL memory
@@ -890,6 +909,7 @@ void cxl_driver_unregister(struct cxl_driver *cxl_drv);
 #define CXL_DEVICE_PMEM_REGION		7
 #define CXL_DEVICE_DAX_REGION		8
 #define CXL_DEVICE_PMU			9
+#define CXL_DEVICE_SYSRAM_REGION	10
 
 #define MODULE_ALIAS_CXL(type) MODULE_ALIAS("cxl:t" __stringify(type) "*")
 #define CXL_MODALIAS_FMT "cxl:t%d"
@@ -907,6 +927,7 @@ bool is_cxl_pmem_region(struct device *dev);
 struct cxl_pmem_region *to_cxl_pmem_region(struct device *dev);
 int cxl_add_to_region(struct cxl_endpoint_decoder *cxled);
 struct cxl_dax_region *to_cxl_dax_region(struct device *dev);
+struct cxl_sysram_region *to_cxl_sysram_region(struct device *dev);
 u64 cxl_port_get_spa_cache_alias(struct cxl_port *endpoint, u64 spa);
 #else
 static inline bool is_cxl_pmem_region(struct device *dev)
@@ -925,6 +946,10 @@ static inline struct cxl_dax_region *to_cxl_dax_region(struct device *dev)
 {
 	return NULL;
 }
+static inline struct cxl_sysram_region *to_cxl_sysram_region(struct device *dev)
+{
+	return NULL;
+}
 static inline u64 cxl_port_get_spa_cache_alias(struct cxl_port *endpoint,
 					       u64 spa)
 {
-- 
2.52.0
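The online_type state machine in this patch (defaults to invalid, must be
set to one of three writable values, rejects anything else with -EINVAL)
can be mirrored client-side before touching sysfs. The helper below is a
hypothetical userspace sketch, not part of the series; the function names
and the sysfs path argument are illustrative:

```shell
# Hypothetical helper mirroring the strings online_type_store() accepts.
# Note 'invalid' is readable from the kernel but never writable.
valid_online_type() {
    case "$1" in
        offline|online|online_movable) return 0 ;;
        *) return 1 ;;
    esac
}

# $1 = sysram_region sysfs directory (assumed path), $2 = policy string.
# Validates locally before writing, so a typo fails fast instead of
# relying on the kernel's -EINVAL.
set_online_type() {
    valid_online_type "$2" || { echo "rejected: $2"; return 1; }
    echo "$2" > "$1/online_type"
}

valid_online_type online_movable && echo "accepted"
valid_online_type invalid || echo "rejected: invalid"
```

Running the last two lines prints "accepted" then "rejected: invalid",
matching the kernel-side accept/reject split.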
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:04:41 -0500", "thread_id": "20260202175640.00003ef5@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
Currently, CXL regions that create DAX devices have no mechanism to control select the hotplug online policy for kmem regions at region creation time. Users must either rely on a build-time default or manually configure each memory block after hotplug occurs. Additionally, there is no explicit way to choose between device_dax and dax_kmem modes at region creation time - regions default to kmem. This series addresses both issues by: 1. Plumbing an online_type parameter through the memory hotplug path, from mm/memory_hotplug through the DAX layer, enabling drivers to specify the desired policy (offline, online, online_movable). 2. Adding infrastructure for explicit dax driver selection (kmem vs device) when creating CXL DAX regions. 3. Introducing new CXL region drivers that provide a two-stage binding process with user-configurable policy between region creation and memory hotplug. The new drivers are: - cxl_devdax_region: Creates dax_regions that bind to device_dax driver - cxl_sysram_region: Creates sysram_region devices with hotplug policy - cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions The sysram_region device exposes an 'online_type' sysfs attribute allowing users to configure the memory online type before hotplug: echo region0 > cxl_sysram_region/bind echo online_movable > sysram_region0/online_type echo sysram_region0 > cxl_dax_kmem_region/bind This enables explicit control over both the dax driver mode and the memory hotplug policy for CXL memory regions. In the future, with DCD regions, this will also provide a policy step which dictates how extents will be surfaces and managed (e.g. if the dc region is bound to the sysram driver, it will surface as system memory, while the devdax driver will surface extents as new devdax). 
Gregory Price (9):
  mm/memory_hotplug: pass online_type to online_memory_block() via arg
  mm/memory_hotplug: add __add_memory_driver_managed() with online_type arg
  dax: plumb online_type from dax_kmem creators to hotplug
  drivers/cxl,dax: add dax driver mode selection for dax regions
  cxl/core/region: move pmem region driver logic into pmem_region
  cxl/core/region: move dax region device logic into dax_region.c
  cxl/core: add cxl_devdax_region driver for explicit userland region binding
  cxl/core: Add dax_kmem_region and sysram_region drivers
  Documentation/driver-api/cxl: add dax and sysram driver documentation

 Documentation/ABI/testing/sysfs-bus-cxl       |  21 ++
 .../driver-api/cxl/linux/cxl-driver.rst       |  43 +++
 .../driver-api/cxl/linux/dax-driver.rst       |  29 ++
 drivers/cxl/core/Makefile                     |   3 +
 drivers/cxl/core/core.h                       |  11 +
 drivers/cxl/core/dax_region.c                 | 179 ++++++++++
 drivers/cxl/core/pmem_region.c                | 191 +++++++++++
 drivers/cxl/core/port.c                       |   2 +
 drivers/cxl/core/region.c                     | 321 ++----------------
 drivers/cxl/core/sysram_region.c              | 180 ++++++++++
 drivers/cxl/cxl.h                             |  29 ++
 drivers/dax/bus.c                             |   3 +
 drivers/dax/bus.h                             |   7 +-
 drivers/dax/cxl.c                             |   7 +-
 drivers/dax/dax-private.h                     |   2 +
 drivers/dax/hmem/hmem.c                       |   2 +
 drivers/dax/kmem.c                            |  13 +-
 drivers/dax/pmem.c                            |   2 +
 include/linux/dax.h                           |   5 +
 include/linux/memory_hotplug.h                |   3 +
 mm/memory_hotplug.c                           |  95 ++++--
 21 files changed, 826 insertions(+), 322 deletions(-)
 create mode 100644 drivers/cxl/core/dax_region.c
 create mode 100644 drivers/cxl/core/pmem_region.c
 create mode 100644 drivers/cxl/core/sysram_region.c

--
2.52.0
Annoyingly, my email client has been truncating my titles:

  cxl: explicit DAX driver selection and hotplug policy for CXL regions

~Gregory
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Thu, 29 Jan 2026 16:17:55 -0500", "thread_id": "20260202175640.00003ef5@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
On Thu, Jan 29, 2026 at 04:04:33PM -0500, Gregory Price wrote:

Looks like build regression on configs without hotplug MMOP_ defines and mhp_get_default_online_type() undefined.

Will let this version sit for a bit before spinning a v2.

~Gregory
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Fri, 30 Jan 2026 12:34:33 -0500", "thread_id": "20260202175640.00003ef5@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
On 1/29/2026 3:04 PM, Gregory Price wrote:

This technically comes up in the devdax_region driver patch first, but I noticed it here so this is where I'm putting it:

I like the idea here, but the implementation is all off. Firstly, devm_cxl_add_sysram_region() is never called outside of sysram_region_driver::probe(), so I'm not sure how they ever get added to the system (same with devdax regions). Second, there's this weird pattern of sub-region (sysram, devdax, etc.) devices being added inside of the sub-region driver probe. I would expect the devices are added and then the probe function is called.

What I think should be going on here (and correct me if I'm wrong) is:

1. a cxl_region device is added to the system
2. cxl_region::probe() is called on said device (the one in cxl/core/region.c)
3. Said probe function figures out the device is a dax_region or whatever else and creates that type of region device (i.e. cxl_region::probe() -> device_add(&cxl_sysram_device))
4. if the device's dax driver type is DAXDRV_DEVICE_TYPE it gets sent to the daxdev_region driver
5a. if the device's dax driver type is DAXDRV_KMEM_TYPE it gets sent to the sysram_region driver, which holds it until the online_type is set
5b. Once the online_type is set, the device is forwarded to the dax_kmem_region driver? Not sure on this part

What seems to be happening is that the cxl_region is added, all of these region drivers try to bind to it since they all use the same device id (CXL_DEVICE_REGION), and the correct one is figured out by magic? I'm somewhat confused at this point :/

This should be removed from the valid values section since it's not a valid value to write to the attribute. The mention of the default in the paragraph below should be enough.

You can use cleanup.h here to remove the goto's (I think).
Following should work (note the cleanup body must use _T, the pointer DEFINE_FREE() hands to the callback):

DEFINE_FREE(cxlr_dax_region_put, struct cxl_dax_region *,
	    if (!IS_ERR_OR_NULL(_T)) put_device(&_T->dev))

static int cxl_dax_kmem_region_driver_probe(struct device *dev)
{
	...
	struct cxl_dax_region *cxlr_dax __free(cxlr_dax_region_put) =
		cxl_dax_region_alloc(cxlr);

	if (IS_ERR(cxlr_dax))
		return PTR_ERR(cxlr_dax);
	...
	rc = dev_set_name(&cxlr_dax->dev, "dax_region%d", cxlr->id);
	if (rc)
		return rc;

	rc = device_add(&cxlr_dax->dev);
	if (rc)
		return rc;

	dev_dbg(dev, "%s: register %s\n", dev_name(dev),
		dev_name(&cxlr_dax->dev));

	return devm_add_action_or_reset(dev, cxlr_dax_unregister,
					no_free_ptr(cxlr_dax));
}

Same thing as above

Thanks,
Ben
{ "author": "\"Cheatham, Benjamin\" <benjamin.cheatham@amd.com>", "date": "Fri, 30 Jan 2026 15:27:12 -0600", "thread_id": "20260202175640.00003ef5@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
On Fri, Jan 30, 2026 at 03:27:12PM -0600, Cheatham, Benjamin wrote:

I originally tried doing this with region0/region_driver, but that design pattern is also confusing - and it creates differently bad patterns.

  echo region0 > decoder0.0/create_ram_region   -> creates region0

  # Current pattern
  echo region > driver/region/probe             /* auto-region behavior */

  # region_driver attribute pattern
  echo "sysram" > region0/region_driver
  echo region0 > driver/region/probe            /* uses sysram region driver */

https://lore.kernel.org/linux-cxl/20260113202138.3021093-1-gourry@gourry.net/

Ira pointed out that this design makes the "implicit" design of the driver worse. The user doesn't actually know what driver is being used under the hood - it just knows something is being used.

This at least makes it explicit which driver is being used - and splits the use-case logic up into discrete drivers (dax users don't have to worry about sysram users breaking their stuff).

If it makes more sense, you could swap the ordering of the names:

  echo region0 > region/bind
  echo region0 > region_sysram/bind
  echo region0 > region_daxdev/bind
  echo region0 > region_dax_kmem/bind
  echo region0 > region_pony/bind

---

The underlying issue is that region::probe() is trying to be a god-function for every possible use case, and hiding the use case behind an attribute vs a driver is not good. (Also, the default behavior for region::probe() in an otherwise unconfigured region is required for backwards compatibility.)

For auto-regions: region_probe() eats it and you get the default behavior.
For non-auto regions: create_x_region generates an un-configured region and fails to probe until the user commits it and probes it.

auto-regions are evil and should be discouraged.

~Gregory
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Fri, 30 Jan 2026 17:12:50 -0500", "thread_id": "20260202175640.00003ef5@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
On 1/30/2026 4:12 PM, Gregory Price wrote:

Ok, that makes sense. I think I just got lost in the sauce while looking at this last week and this explanation helped a lot.

I think this was the source of my misunderstanding. I was trying to understand how it works for auto regions when it's never meant to apply to them.

Sorry if this is a stupid question, but what stops auto regions from binding to the sysram/dax region drivers? They all bind to region devices, so I assume there's something keeping them from binding before the core region driver gets a chance.

Thanks,
Ben
{ "author": "\"Cheatham, Benjamin\" <benjamin.cheatham@amd.com>", "date": "Mon, 2 Feb 2026 11:02:37 -0600", "thread_id": "20260202175640.00003ef5@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
On Thu, 29 Jan 2026 16:04:34 -0500 Gregory Price <gourry@gourry.net> wrote:

Trivial comment inline. I don't really care either way. Pushing the policy up to the caller and ensuring it's explicitly constant for all the memory blocks (as opposed to relying on locks) seems sensible to me even without anything else.

Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>

Maybe move the local variable outside the loop to avoid the double call.
{ "author": "Jonathan Cameron <jonathan.cameron@huawei.com>", "date": "Mon, 2 Feb 2026 17:10:29 +0000", "thread_id": "20260202175640.00003ef5@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
On Thu, 29 Jan 2026 16:04:35 -0500 Gregory Price <gourry@gourry.net> wrote:

Hi Gregory,

I think maybe I'd have left the export for the first user outside of memory_hotplug.c. Not particularly important however.

Maybe talk about why a caller of __add_memory_driver_managed() might want the default? Feels like that's for the people who don't... Or is this all a dance to avoid an

  if (special mode)
          __add_memory_driver_managed();
  else
          add_memory_driver_managed();

?

Other comments are mostly about using a named enum. I'm not sure if there is some existing reason why that doesn't work? -Errno pushed through this variable or anything like that? Given online_type values are from an enum anyway, maybe we can name that enum and use it explicitly?

Ah. Fair enough, ignore comment in previous patch. I should have read on...

It's a little odd to add nice kernel-doc formatted documentation when the non __ variant has free-form docs. Maybe tidy that up first if we want to go kernel-doc in this file? (I'm in favor, but no idea on general feelings...)

Given that's currently the full set, seems like enum wins out here over an int.

This is where using an enum would help the compiler know what is going on and maybe warn if anyone writes something that isn't defined.
{ "author": "Jonathan Cameron <jonathan.cameron@huawei.com>", "date": "Mon, 2 Feb 2026 17:25:24 +0000", "thread_id": "20260202175640.00003ef5@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
On Mon, Feb 02, 2026 at 11:02:37AM -0600, Cheatham, Benjamin wrote:

Auto regions explicitly use the dax_kmem path (all existing code, unchanged) - which auto-plugs into dax/hotplug.

I do get what you're saying that everything binds on a region type, I will look a little closer at this and see if there's something more reasonable we can do.

I think i can update `region/bind` to use the sysram driver with online_type=mhp_default_online_type so you'd end up with effectively the auto-region logic:

cxlcli create-region -m ram ... existing argument set
------
echo region0 > create_ram_region  /* program decoders */
echo region0 > region/bind
/*
 * region_bind():
 *   1) alloc sysram_region object
 *   2) sysram_regionN->online_type=mhp_default_online_type()
 *   3) add device to bus
 *   4) device auto-probes all the way down to dax
 *   5) dax auto-onlines with system default setting
 */
------

and non-auto-region logic (approximation)

cxlcli create-region -m ram --type sysram --online-type=movable
-----
echo region0 > create_ram_region  /* program decoders */
echo region0 > sysram/bind
echo online_movable > sysram_region0/online_type
echo sysram_region0 > dax_kmem/bind
-----

I want to retain the dax_kmem driver because there may be multiple users other than sysram. For example, a compressed memory region wants to utilize dax_kmem, but has its own complex policy (via N_MEMORY_PRIVATE) so it doesn't want to abstract through sysram_region, but it does want to abstract through dax_kmem.

weeeee "software defined memory" weeeee

~Gregory
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Mon, 2 Feb 2026 12:41:31 -0500", "thread_id": "20260202175640.00003ef5@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
On Mon, Feb 02, 2026 at 05:10:29PM +0000, Jonathan Cameron wrote: ack. will update for next version w/ Ben's notes and the build fix. Thanks! ~Gregory
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Mon, 2 Feb 2026 12:46:25 -0500", "thread_id": "20260202175640.00003ef5@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
On Thu, 29 Jan 2026 16:04:37 -0500 Gregory Price <gourry@gourry.net> wrote: LGTM Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
{ "author": "Jonathan Cameron <jonathan.cameron@huawei.com>", "date": "Mon, 2 Feb 2026 17:54:17 +0000", "thread_id": "20260202175640.00003ef5@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
On Thu, 29 Jan 2026 16:04:38 -0500 Gregory Price <gourry@gourry.net> wrote: Needs to answer the question: Why? Minor stuff inline. Maybe sneak in dropping that trailing comma whilst you are moving it. ... Bonus line...
{ "author": "Jonathan Cameron <jonathan.cameron@huawei.com>", "date": "Mon, 2 Feb 2026 17:56:40 +0000", "thread_id": "20260202175640.00003ef5@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
On Thu, 29 Jan 2026 16:04:39 -0500 Gregory Price <gourry@gourry.net> wrote: Likewise. Why?
{ "author": "Jonathan Cameron <jonathan.cameron@huawei.com>", "date": "Mon, 2 Feb 2026 17:57:11 +0000", "thread_id": "20260202175640.00003ef5@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
On Mon, Feb 02, 2026 at 05:25:24PM +0000, Jonathan Cameron wrote:

Less about why they want the default, more about maintaining backward compatibility. In the cxl driver, Ben pointed out something that made me realize we can change `region/bind()` to actually use the new `sysram/bind` path by just adding a one line `sysram_regionN->online_type = default()`

I can add this detail to the changelog.

I can add a cleanup-patch prior to use the enum, but i don't think this actually enables the compiler to do anything new at the moment? An enum just resolves to an int, and setting `enum thing val = -1` when the enum definition doesn't include -1 doesn't actually fire any errors (at least IIRC - maybe i'm just wrong). Same with function(enum): calling function(-1) wouldn't fire a compilation error.

It might actually be worth adding `MMOP_NOT_CONFIGURED = -1` so that the cxl-sysram driver can set this explicitly rather than just setting -1 as an implicit version of this - but then why would memory_hotplug.c ever want to expose a NOT_CONFIGURED option lol.

So, yeah, the enum looks nicer, but not sure how much it buys us beyond that.

ack. Can add some more cleanups early in the series.

I think you still have to sanity check this, but maybe the code looks cleaner, so will do.

~Gregory
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Mon, 2 Feb 2026 13:02:10 -0500", "thread_id": "20260202175640.00003ef5@huawei.com.mbox.gz" }
lkml
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
On Thu, 29 Jan 2026 16:04:41 -0500 Gregory Price <gourry@gourry.net> wrote:

ZONE_MOVABLE

Trivial stuff. Will mull over this series as a whole... My first instinctive reaction is positive - I'm just wondering where additional drivers fit into this and whether it has the right degree of flexibility.

This smells like a loop over an array of drivers is becoming sensible. As below.

Trivial, but don't want a comma on that NULL.

Ah. And there's our reason for an int. Can we just add a MMOP enum value for not configured yet and so let us use it as an enum? Or have a separate bool for that and ignore the online_type until it's set.
{ "author": "Jonathan Cameron <jonathan.cameron@huawei.com>", "date": "Mon, 2 Feb 2026 18:20:15 +0000", "thread_id": "20260202175640.00003ef5@huawei.com.mbox.gz" }
lkml
[PATCH] Cleanup ipu3 driver
Clean up warnings generated by ./scripts/checkpatch.pl regarding the
ipu3 driver at /drivers/staging/media/ipu3

More specifically, the following files have been affected:
ipu3-css.c, ipu3-mmu.c, ipu3-mmu.h, ipu3-v4l2.c, ipu3.c, ipu3.h

Signed-off-by: Bogdan Sandu <bogdanelsandu2011@gmail.com>
---
 drivers/staging/media/ipu3/ipu3-css.c  | 39 ++++++++++++--------------
 drivers/staging/media/ipu3/ipu3-mmu.c  |  2 +-
 drivers/staging/media/ipu3/ipu3-mmu.h  |  4 ++-
 drivers/staging/media/ipu3/ipu3-v4l2.c | 11 ++++----
 drivers/staging/media/ipu3/ipu3.c      |  7 ++---
 5 files changed, 30 insertions(+), 33 deletions(-)

diff --git a/drivers/staging/media/ipu3/ipu3-css.c b/drivers/staging/media/ipu3/ipu3-css.c
index 777cac1c2..832581547 100644
--- a/drivers/staging/media/ipu3/ipu3-css.c
+++ b/drivers/staging/media/ipu3/ipu3-css.c
@@ -118,7 +118,8 @@ static const struct {

 /* Initialize queue based on given format, adjust format as needed */
 static int imgu_css_queue_init(struct imgu_css_queue *queue,
-			       struct v4l2_pix_format_mplane *fmt, u32 flags)
+			       struct v4l2_pix_format_mplane *fmt,
+			       u32 flags)
 {
 	struct v4l2_pix_format_mplane *const f = &queue->fmt.mpix;
 	unsigned int i;
@@ -1033,8 +1034,8 @@ static int imgu_css_pipeline_init(struct imgu_css *css, unsigned int pipe)
 			       3 * cfg_dvs->num_horizontal_blocks / 2 *
 			       cfg_dvs->num_vertical_blocks) ||
 	    imgu_css_pool_init(imgu, &css_pipe->pool.obgrid,
-			       imgu_css_fw_obgrid_size(
-			       &css->fwp->binary_header[css_pipe->bindex])))
+			       imgu_css_fw_obgrid_size
+			       (&css->fwp->binary_header[css_pipe->bindex])))
 		goto out_of_memory;

 	for (i = 0; i < IMGU_ABI_NUM_MEMORIES; i++)
@@ -1225,8 +1226,7 @@ static int imgu_css_binary_setup(struct imgu_css *css, unsigned int pipe)
 	for (j = IMGU_ABI_PARAM_CLASS_CONFIG; j < IMGU_ABI_PARAM_CLASS_NUM; j++)
 		for (i = 0; i < IMGU_ABI_NUM_MEMORIES; i++) {
-			if (imgu_css_dma_buffer_resize(
-			    imgu,
+			if (imgu_css_dma_buffer_resize(imgu,
 			    &css_pipe->binary_params_cs[j - 1][i],
 			    bi->info.isp.sp.mem_initializers.params[j][i].size))
 				goto out_of_memory;
@@ -1241,6 +1241,7 @@ static int imgu_css_binary_setup(struct imgu_css *css, unsigned int pipe)
 	css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].height =
 		ALIGN(css_pipe->rect[IPU3_CSS_RECT_BDS].height,
 		      IMGU_DVS_BLOCK_H) + 2 * IMGU_GDC_BUF_Y;
+
 	h = css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].height;
 	w = ALIGN(css_pipe->rect[IPU3_CSS_RECT_BDS].width,
 		  2 * IPU3_UAPI_ISP_VEC_ELEMS) + 2 * IMGU_GDC_BUF_X;
@@ -1248,10 +1249,9 @@ static int imgu_css_binary_setup(struct imgu_css *css, unsigned int pipe)
 		css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].bytesperpixel * w;
 	size = w * h * BYPC + (w / 2) * (h / 2) * BYPC * 2;
 	for (i = 0; i < IPU3_CSS_AUX_FRAMES; i++)
-		if (imgu_css_dma_buffer_resize(
-		    imgu,
-		    &css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].mem[i],
-		    size))
+		if (imgu_css_dma_buffer_resize(imgu,
+		    &css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].mem[i],
+		    size))
 			goto out_of_memory;

 	/* TNR frames for temporal noise reduction, FRAME_FORMAT_YUV_LINE */
@@ -1269,10 +1269,9 @@ static int imgu_css_binary_setup(struct imgu_css *css, unsigned int pipe)
 	h = css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_TNR].height;
 	size = w * ALIGN(h * 3 / 2 + 3, 2);	/* +3 for vf_pp prefetch */
 	for (i = 0; i < IPU3_CSS_AUX_FRAMES; i++)
-		if (imgu_css_dma_buffer_resize(
-		    imgu,
-		    &css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_TNR].mem[i],
-		    size))
+		if (imgu_css_dma_buffer_resize(imgu,
+		    &css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_TNR].mem[i],
+		    size))
 			goto out_of_memory;

 	return 0;
@@ -2036,7 +2035,7 @@ struct imgu_css_buffer *imgu_css_buf_dequeue(struct imgu_css *css)
 				     struct imgu_css_buffer, list);
 		if (queue != b->queue ||
 		    daddr != css_pipe->abi_buffers
-				[b->queue][b->queue_pos].daddr) {
+			     [b->queue][b->queue_pos].daddr) {
 			spin_unlock(&css_pipe->qlock);
 			dev_err(css->dev, "dequeued bad buffer 0x%x\n", daddr);
 			return ERR_PTR(-EIO);
@@ -2169,7 +2168,7 @@ int imgu_css_set_parameters(struct imgu_css *css, unsigned int pipe,
 		map = imgu_css_pool_last(&css_pipe->pool.acc, 1);
 		/* user acc */
 		r = imgu_css_cfg_acc(css, pipe, use, acc, map->vaddr,
-				set_params ? &set_params->acc_param : NULL);
+				     set_params ? &set_params->acc_param : NULL);
 		if (r < 0)
 			goto fail;
 	}
@@ -2298,13 +2297,11 @@ int imgu_css_set_parameters(struct imgu_css *css, unsigned int pipe,
 	if (obgrid)
 		imgu_css_pool_put(&css_pipe->pool.obgrid);
 	if (vmem0)
-		imgu_css_pool_put(
-			&css_pipe->pool.binary_params_p
-			[IMGU_ABI_MEM_ISP_VMEM0]);
+		imgu_css_pool_put(&css_pipe->pool.binary_params_p
+				  [IMGU_ABI_MEM_ISP_VMEM0]);
 	if (dmem0)
-		imgu_css_pool_put(
-			&css_pipe->pool.binary_params_p
-			[IMGU_ABI_MEM_ISP_DMEM0]);
+		imgu_css_pool_put(&css_pipe->pool.binary_params_p
+				  [IMGU_ABI_MEM_ISP_DMEM0]);

 fail_no_put:
 	return r;
diff --git a/drivers/staging/media/ipu3/ipu3-mmu.c b/drivers/staging/media/ipu3/ipu3-mmu.c
index cb9bf5fb2..95ce34ad8 100644
--- a/drivers/staging/media/ipu3/ipu3-mmu.c
+++ b/drivers/staging/media/ipu3/ipu3-mmu.c
@@ -21,7 +21,7 @@
 #include "ipu3-mmu.h"

 #define IPU3_PT_BITS		10
-#define IPU3_PT_PTES		(1UL << IPU3_PT_BITS)
+#define IPU3_PT_PTES		(BIT(IPU3_PT_BITS))
 #define IPU3_PT_SIZE		(IPU3_PT_PTES << 2)
 #define IPU3_PT_ORDER		(IPU3_PT_SIZE >> PAGE_SHIFT)
diff --git a/drivers/staging/media/ipu3/ipu3-mmu.h b/drivers/staging/media/ipu3/ipu3-mmu.h
index a5f0bca7e..990482f10 100644
--- a/drivers/staging/media/ipu3/ipu3-mmu.h
+++ b/drivers/staging/media/ipu3/ipu3-mmu.h
@@ -5,8 +5,10 @@
 #ifndef __IPU3_MMU_H
 #define __IPU3_MMU_H

+#include <linux/bitops.h>
+
 #define IPU3_PAGE_SHIFT	12
-#define IPU3_PAGE_SIZE	(1UL << IPU3_PAGE_SHIFT)
+#define IPU3_PAGE_SIZE	(BIT(IPU3_PAGE_SHIFT))

 /**
  * struct imgu_mmu_info - Describes mmu geometry
diff --git a/drivers/staging/media/ipu3/ipu3-v4l2.c b/drivers/staging/media/ipu3/ipu3-v4l2.c
index 2f6041d34..8ebfcddab 100644
--- a/drivers/staging/media/ipu3/ipu3-v4l2.c
+++ b/drivers/staging/media/ipu3/ipu3-v4l2.c
@@ -245,9 +245,9 @@ static int imgu_subdev_set_selection(struct v4l2_subdev *sd,
 	struct v4l2_rect *rect;

 	dev_dbg(&imgu->pci_dev->dev,
-		"set subdev %u sel which %u target 0x%4x rect [%ux%u]",
-		imgu_sd->pipe, sel->which, sel->target,
-		sel->r.width, sel->r.height);
+		"set subdev %u sel which %u target 0x%4x rect [%ux%u]",
+		imgu_sd->pipe, sel->which, sel->target,
+		sel->r.width, sel->r.height);
 	if (sel->pad != IMGU_NODE_IN)
 		return -EINVAL;
@@ -288,7 +288,7 @@ static int imgu_link_setup(struct media_entity *entity,
 	WARN_ON(pad >= IMGU_NODE_NUM);

 	dev_dbg(&imgu->pci_dev->dev, "pipe %u pad %u is %s", pipe, pad,
-		str_enabled_disabled(flags & MEDIA_LNK_FL_ENABLED));
+		str_enabled_disabled(flags & MEDIA_LNK_FL_ENABLED));
 	imgu_pipe = &imgu->imgu_pipe[pipe];
 	imgu_pipe->nodes[pad].enabled = flags & MEDIA_LNK_FL_ENABLED;
@@ -303,7 +303,7 @@ static int imgu_link_setup(struct media_entity *entity,
 		__clear_bit(pipe, imgu->css.enabled_pipes);

 	dev_dbg(&imgu->pci_dev->dev, "pipe %u is %s", pipe,
-		str_enabled_disabled(flags & MEDIA_LNK_FL_ENABLED));
+		str_enabled_disabled(flags & MEDIA_LNK_FL_ENABLED));

 	return 0;
 }
@@ -750,7 +750,6 @@ static int imgu_fmt(struct imgu_device *imgu, unsigned int pipe, int node,
 		} else {
 			fmts[i] = &imgu_pipe->nodes[inode].vdev_fmt.fmt.pix_mp;
 		}
-
 	}

 	if (!try) {
diff --git a/drivers/staging/media/ipu3/ipu3.c b/drivers/staging/media/ipu3/ipu3.c
index bdf5a4577..fe343d368 100644
--- a/drivers/staging/media/ipu3/ipu3.c
+++ b/drivers/staging/media/ipu3/ipu3.c
@@ -151,7 +151,7 @@ static int imgu_dummybufs_init(struct imgu_device *imgu, unsigned int pipe)

 /* May be called from atomic context */
 static struct imgu_css_buffer *imgu_dummybufs_get(struct imgu_device *imgu,
-						  int queue, unsigned int pipe)
+						  int queue, unsigned int pipe)
 {
 	unsigned int i;
 	struct imgu_media_pipe *imgu_pipe = &imgu->imgu_pipe[pipe];
@@ -556,8 +556,7 @@ static irqreturn_t imgu_isr_threaded(int irq, void *imgu_ptr)
 			buf->vid_buf.vbb.vb2_buf.timestamp = ns;
 			buf->vid_buf.vbb.field = V4L2_FIELD_NONE;
 			buf->vid_buf.vbb.sequence =
-				atomic_inc_return(
-					&imgu_pipe->nodes[node].sequence);
+				atomic_inc_return(&imgu_pipe->nodes[node].sequence);
 			dev_dbg(&imgu->pci_dev->dev, "vb2 buffer sequence %d",
 				buf->vid_buf.vbb.sequence);
 		}
@@ -774,7 +773,7 @@ static int __maybe_unused imgu_suspend(struct device *dev)
 	synchronize_irq(pci_dev->irq);
 	/* Wait until all buffers in CSS are done. */
 	if (!wait_event_timeout(imgu->buf_drain_wq,
-	    imgu_css_queue_empty(&imgu->css), msecs_to_jiffies(1000)))
+				imgu_css_queue_empty(&imgu->css), msecs_to_jiffies(1000)))
 		dev_err(dev, "wait buffer drain timeout.\n");

 	imgu_css_stop_streaming(&imgu->css);
-- 
2.51.0
On Mon, Feb 02, 2026 at 12:03:11PM +0200, Bogdan Sandu wrote:

Was this an AI generated patch?

Either way, it needs to be properly broken up into "one logical change per patch" like all others.

thanks,

greg k-h
{ "author": "Greg KH <gregkh@linuxfoundation.org>", "date": "Mon, 2 Feb 2026 11:14:26 +0100", "thread_id": "20260202175033.8640-2-bogdanelsandu2011@gmail.com.mbox.gz" }
lkml
[PATCH] Cleanup ipu3 driver
Clean up warnings generated by ./scripts/checkpatch.pl regarding the ipu3 driver at /drivers/staging/media/ipu3 More specifically, the following files have been affected: ipu3-css.c, ipu3-mmu.c, ipu3-mmu.h, ipu3-v4l2.c, ipu3.c, ipu3.h Signed-off-by: Bogdan Sandu <bogdanelsandu2011@gmail.com> --- drivers/staging/media/ipu3/ipu3-css.c | 39 ++++++++++++-------------- drivers/staging/media/ipu3/ipu3-mmu.c | 2 +- drivers/staging/media/ipu3/ipu3-mmu.h | 4 ++- drivers/staging/media/ipu3/ipu3-v4l2.c | 11 ++++---- drivers/staging/media/ipu3/ipu3.c | 7 ++--- 5 files changed, 30 insertions(+), 33 deletions(-) diff --git a/drivers/staging/media/ipu3/ipu3-css.c b/drivers/staging/media/ipu3/ipu3-css.c index 777cac1c2..832581547 100644 --- a/drivers/staging/media/ipu3/ipu3-css.c +++ b/drivers/staging/media/ipu3/ipu3-css.c @@ -118,7 +118,8 @@ static const struct { /* Initialize queue based on given format, adjust format as needed */ static int imgu_css_queue_init(struct imgu_css_queue *queue, - struct v4l2_pix_format_mplane *fmt, u32 flags) + struct v4l2_pix_format_mplane *fmt, + u32 flags) { struct v4l2_pix_format_mplane *const f = &queue->fmt.mpix; unsigned int i; @@ -1033,8 +1034,8 @@ static int imgu_css_pipeline_init(struct imgu_css *css, unsigned int pipe) 3 * cfg_dvs->num_horizontal_blocks / 2 * cfg_dvs->num_vertical_blocks) || imgu_css_pool_init(imgu, &css_pipe->pool.obgrid, - imgu_css_fw_obgrid_size( - &css->fwp->binary_header[css_pipe->bindex]))) + imgu_css_fw_obgrid_size + (&css->fwp->binary_header[css_pipe->bindex]))) goto out_of_memory; for (i = 0; i < IMGU_ABI_NUM_MEMORIES; i++) @@ -1225,8 +1226,7 @@ static int imgu_css_binary_setup(struct imgu_css *css, unsigned int pipe) for (j = IMGU_ABI_PARAM_CLASS_CONFIG; j < IMGU_ABI_PARAM_CLASS_NUM; j++) for (i = 0; i < IMGU_ABI_NUM_MEMORIES; i++) { - if (imgu_css_dma_buffer_resize( - imgu, + if (imgu_css_dma_buffer_resize(imgu, &css_pipe->binary_params_cs[j - 1][i], bi->info.isp.sp.mem_initializers.params[j][i].size)) goto 
out_of_memory; @@ -1241,6 +1241,7 @@ static int imgu_css_binary_setup(struct imgu_css *css, unsigned int pipe) css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].height = ALIGN(css_pipe->rect[IPU3_CSS_RECT_BDS].height, IMGU_DVS_BLOCK_H) + 2 * IMGU_GDC_BUF_Y; + h = css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].height; w = ALIGN(css_pipe->rect[IPU3_CSS_RECT_BDS].width, 2 * IPU3_UAPI_ISP_VEC_ELEMS) + 2 * IMGU_GDC_BUF_X; @@ -1248,10 +1249,9 @@ static int imgu_css_binary_setup(struct imgu_css *css, unsigned int pipe) css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].bytesperpixel * w; size = w * h * BYPC + (w / 2) * (h / 2) * BYPC * 2; for (i = 0; i < IPU3_CSS_AUX_FRAMES; i++) - if (imgu_css_dma_buffer_resize( - imgu, - &css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].mem[i], - size)) + if (imgu_css_dma_buffer_resize(imgu, + &css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].mem[i], + size)) goto out_of_memory; /* TNR frames for temporal noise reduction, FRAME_FORMAT_YUV_LINE */ @@ -1269,10 +1269,9 @@ static int imgu_css_binary_setup(struct imgu_css *css, unsigned int pipe) h = css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_TNR].height; size = w * ALIGN(h * 3 / 2 + 3, 2); /* +3 for vf_pp prefetch */ for (i = 0; i < IPU3_CSS_AUX_FRAMES; i++) - if (imgu_css_dma_buffer_resize( - imgu, - &css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_TNR].mem[i], - size)) + if (imgu_css_dma_buffer_resize(imgu, + &css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_TNR].mem[i], + size)) goto out_of_memory; return 0; @@ -2036,7 +2035,7 @@ struct imgu_css_buffer *imgu_css_buf_dequeue(struct imgu_css *css) struct imgu_css_buffer, list); if (queue != b->queue || daddr != css_pipe->abi_buffers - [b->queue][b->queue_pos].daddr) { + [b->queue][b->queue_pos].daddr) { spin_unlock(&css_pipe->qlock); dev_err(css->dev, "dequeued bad buffer 0x%x\n", daddr); return ERR_PTR(-EIO); @@ -2169,7 +2168,7 @@ int imgu_css_set_parameters(struct imgu_css *css, unsigned int pipe, map = imgu_css_pool_last(&css_pipe->pool.acc, 1); /* user acc */ r = 
imgu_css_cfg_acc(css, pipe, use, acc, map->vaddr, - set_params ? &set_params->acc_param : NULL); + set_params ? &set_params->acc_param : NULL); if (r < 0) goto fail; } @@ -2298,13 +2297,11 @@ int imgu_css_set_parameters(struct imgu_css *css, unsigned int pipe, if (obgrid) imgu_css_pool_put(&css_pipe->pool.obgrid); if (vmem0) - imgu_css_pool_put( - &css_pipe->pool.binary_params_p - [IMGU_ABI_MEM_ISP_VMEM0]); + imgu_css_pool_put(&css_pipe->pool.binary_params_p + [IMGU_ABI_MEM_ISP_VMEM0]); if (dmem0) - imgu_css_pool_put( - &css_pipe->pool.binary_params_p - [IMGU_ABI_MEM_ISP_DMEM0]); + imgu_css_pool_put(&css_pipe->pool.binary_params_p + [IMGU_ABI_MEM_ISP_DMEM0]); fail_no_put: return r; diff --git a/drivers/staging/media/ipu3/ipu3-mmu.c b/drivers/staging/media/ipu3/ipu3-mmu.c index cb9bf5fb2..95ce34ad8 100644 --- a/drivers/staging/media/ipu3/ipu3-mmu.c +++ b/drivers/staging/media/ipu3/ipu3-mmu.c @@ -21,7 +21,7 @@ #include "ipu3-mmu.h" #define IPU3_PT_BITS 10 -#define IPU3_PT_PTES (1UL << IPU3_PT_BITS) +#define IPU3_PT_PTES (BIT(IPU3_PT_BITS)) #define IPU3_PT_SIZE (IPU3_PT_PTES << 2) #define IPU3_PT_ORDER (IPU3_PT_SIZE >> PAGE_SHIFT) diff --git a/drivers/staging/media/ipu3/ipu3-mmu.h b/drivers/staging/media/ipu3/ipu3-mmu.h index a5f0bca7e..990482f10 100644 --- a/drivers/staging/media/ipu3/ipu3-mmu.h +++ b/drivers/staging/media/ipu3/ipu3-mmu.h @@ -5,8 +5,10 @@ #ifndef __IPU3_MMU_H #define __IPU3_MMU_H +#include <linux/bitops.h> + #define IPU3_PAGE_SHIFT 12 -#define IPU3_PAGE_SIZE (1UL << IPU3_PAGE_SHIFT) +#define IPU3_PAGE_SIZE (BIT(IPU3_PAGE_SHIFT)) /** * struct imgu_mmu_info - Describes mmu geometry diff --git a/drivers/staging/media/ipu3/ipu3-v4l2.c b/drivers/staging/media/ipu3/ipu3-v4l2.c index 2f6041d34..8ebfcddab 100644 --- a/drivers/staging/media/ipu3/ipu3-v4l2.c +++ b/drivers/staging/media/ipu3/ipu3-v4l2.c @@ -245,9 +245,9 @@ static int imgu_subdev_set_selection(struct v4l2_subdev *sd, struct v4l2_rect *rect; dev_dbg(&imgu->pci_dev->dev, - "set subdev %u sel which 
%u target 0x%4x rect [%ux%u]", - imgu_sd->pipe, sel->which, sel->target, - sel->r.width, sel->r.height); + "set subdev %u sel which %u target 0x%4x rect [%ux%u]", + imgu_sd->pipe, sel->which, sel->target, + sel->r.width, sel->r.height); if (sel->pad != IMGU_NODE_IN) return -EINVAL; @@ -288,7 +288,7 @@ static int imgu_link_setup(struct media_entity *entity, WARN_ON(pad >= IMGU_NODE_NUM); dev_dbg(&imgu->pci_dev->dev, "pipe %u pad %u is %s", pipe, pad, - str_enabled_disabled(flags & MEDIA_LNK_FL_ENABLED)); + str_enabled_disabled(flags & MEDIA_LNK_FL_ENABLED)); imgu_pipe = &imgu->imgu_pipe[pipe]; imgu_pipe->nodes[pad].enabled = flags & MEDIA_LNK_FL_ENABLED; @@ -303,7 +303,7 @@ static int imgu_link_setup(struct media_entity *entity, __clear_bit(pipe, imgu->css.enabled_pipes); dev_dbg(&imgu->pci_dev->dev, "pipe %u is %s", pipe, - str_enabled_disabled(flags & MEDIA_LNK_FL_ENABLED)); + str_enabled_disabled(flags & MEDIA_LNK_FL_ENABLED)); return 0; } @@ -750,7 +750,6 @@ static int imgu_fmt(struct imgu_device *imgu, unsigned int pipe, int node, } else { fmts[i] = &imgu_pipe->nodes[inode].vdev_fmt.fmt.pix_mp; } - } if (!try) { diff --git a/drivers/staging/media/ipu3/ipu3.c b/drivers/staging/media/ipu3/ipu3.c index bdf5a4577..fe343d368 100644 --- a/drivers/staging/media/ipu3/ipu3.c +++ b/drivers/staging/media/ipu3/ipu3.c @@ -151,7 +151,7 @@ static int imgu_dummybufs_init(struct imgu_device *imgu, unsigned int pipe) /* May be called from atomic context */ static struct imgu_css_buffer *imgu_dummybufs_get(struct imgu_device *imgu, - int queue, unsigned int pipe) + int queue, unsigned int pipe) { unsigned int i; struct imgu_media_pipe *imgu_pipe = &imgu->imgu_pipe[pipe]; @@ -556,8 +556,7 @@ static irqreturn_t imgu_isr_threaded(int irq, void *imgu_ptr) buf->vid_buf.vbb.vb2_buf.timestamp = ns; buf->vid_buf.vbb.field = V4L2_FIELD_NONE; buf->vid_buf.vbb.sequence = - atomic_inc_return( - &imgu_pipe->nodes[node].sequence); + atomic_inc_return(&imgu_pipe->nodes[node].sequence); 
dev_dbg(&imgu->pci_dev->dev, "vb2 buffer sequence %d", buf->vid_buf.vbb.sequence); } @@ -774,7 +773,7 @@ static int __maybe_unused imgu_suspend(struct device *dev) synchronize_irq(pci_dev->irq); /* Wait until all buffers in CSS are done. */ if (!wait_event_timeout(imgu->buf_drain_wq, - imgu_css_queue_empty(&imgu->css), msecs_to_jiffies(1000))) + imgu_css_queue_empty(&imgu->css), msecs_to_jiffies(1000))) dev_err(dev, "wait buffer drain timeout.\n"); imgu_css_stop_streaming(&imgu->css); -- 2.51.0
I can assure you, it is not AI-generated.

> Either way, it needs to be properly broken up into "one logical change
> per patch" like all others.

Understood. I'll resend it afterwards. Thank you for your patience.
{ "author": "Bogdan Sandu <bogdanelsandu2011@gmail.com>", "date": "Mon, 2 Feb 2026 12:18:43 +0200", "thread_id": "20260202175033.8640-2-bogdanelsandu2011@gmail.com.mbox.gz" }
lkml
[PATCH] Cleanup ipu3 driver
{ "author": "Bogdan Sandu <bogdanelsandu2011@gmail.com>", "date": "Mon, 2 Feb 2026 12:18:44 +0200", "thread_id": "20260202175033.8640-2-bogdanelsandu2011@gmail.com.mbox.gz" }
lkml
[PATCH] Cleanup ipu3 driver
On Mon, Feb 02, 2026 at 12:18:44PM +0200, Bogdan Sandu wrote:

You resent the same thing again?

confused,

greg k-h
{ "author": "Greg KH <gregkh@linuxfoundation.org>", "date": "Mon, 2 Feb 2026 11:32:15 +0100", "thread_id": "20260202175033.8640-2-bogdanelsandu2011@gmail.com.mbox.gz" }
lkml
[PATCH] Cleanup ipu3 driver
Clean up warnings generated by ./scripts/checkpatch.pl regarding the ipu3 driver at /drivers/staging/media/ipu3 More specifically, the following files have been affected: ipu3-css.c, ipu3-mmu.c, ipu3-mmu.h, ipu3-v4l2.c, ipu3.c, ipu3.h Signed-off-by: Bogdan Sandu <bogdanelsandu2011@gmail.com> --- drivers/staging/media/ipu3/ipu3-css.c | 39 ++++++++++++-------------- drivers/staging/media/ipu3/ipu3-mmu.c | 2 +- drivers/staging/media/ipu3/ipu3-mmu.h | 4 ++- drivers/staging/media/ipu3/ipu3-v4l2.c | 11 ++++---- drivers/staging/media/ipu3/ipu3.c | 7 ++--- 5 files changed, 30 insertions(+), 33 deletions(-) diff --git a/drivers/staging/media/ipu3/ipu3-css.c b/drivers/staging/media/ipu3/ipu3-css.c index 777cac1c2..832581547 100644 --- a/drivers/staging/media/ipu3/ipu3-css.c +++ b/drivers/staging/media/ipu3/ipu3-css.c @@ -118,7 +118,8 @@ static const struct { /* Initialize queue based on given format, adjust format as needed */ static int imgu_css_queue_init(struct imgu_css_queue *queue, - struct v4l2_pix_format_mplane *fmt, u32 flags) + struct v4l2_pix_format_mplane *fmt, + u32 flags) { struct v4l2_pix_format_mplane *const f = &queue->fmt.mpix; unsigned int i; @@ -1033,8 +1034,8 @@ static int imgu_css_pipeline_init(struct imgu_css *css, unsigned int pipe) 3 * cfg_dvs->num_horizontal_blocks / 2 * cfg_dvs->num_vertical_blocks) || imgu_css_pool_init(imgu, &css_pipe->pool.obgrid, - imgu_css_fw_obgrid_size( - &css->fwp->binary_header[css_pipe->bindex]))) + imgu_css_fw_obgrid_size + (&css->fwp->binary_header[css_pipe->bindex]))) goto out_of_memory; for (i = 0; i < IMGU_ABI_NUM_MEMORIES; i++) @@ -1225,8 +1226,7 @@ static int imgu_css_binary_setup(struct imgu_css *css, unsigned int pipe) for (j = IMGU_ABI_PARAM_CLASS_CONFIG; j < IMGU_ABI_PARAM_CLASS_NUM; j++) for (i = 0; i < IMGU_ABI_NUM_MEMORIES; i++) { - if (imgu_css_dma_buffer_resize( - imgu, + if (imgu_css_dma_buffer_resize(imgu, &css_pipe->binary_params_cs[j - 1][i], bi->info.isp.sp.mem_initializers.params[j][i].size)) goto 
out_of_memory; @@ -1241,6 +1241,7 @@ static int imgu_css_binary_setup(struct imgu_css *css, unsigned int pipe) css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].height = ALIGN(css_pipe->rect[IPU3_CSS_RECT_BDS].height, IMGU_DVS_BLOCK_H) + 2 * IMGU_GDC_BUF_Y; + h = css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].height; w = ALIGN(css_pipe->rect[IPU3_CSS_RECT_BDS].width, 2 * IPU3_UAPI_ISP_VEC_ELEMS) + 2 * IMGU_GDC_BUF_X; @@ -1248,10 +1249,9 @@ static int imgu_css_binary_setup(struct imgu_css *css, unsigned int pipe) css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].bytesperpixel * w; size = w * h * BYPC + (w / 2) * (h / 2) * BYPC * 2; for (i = 0; i < IPU3_CSS_AUX_FRAMES; i++) - if (imgu_css_dma_buffer_resize( - imgu, - &css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].mem[i], - size)) + if (imgu_css_dma_buffer_resize(imgu, + &css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].mem[i], + size)) goto out_of_memory; /* TNR frames for temporal noise reduction, FRAME_FORMAT_YUV_LINE */ @@ -1269,10 +1269,9 @@ static int imgu_css_binary_setup(struct imgu_css *css, unsigned int pipe) h = css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_TNR].height; size = w * ALIGN(h * 3 / 2 + 3, 2); /* +3 for vf_pp prefetch */ for (i = 0; i < IPU3_CSS_AUX_FRAMES; i++) - if (imgu_css_dma_buffer_resize( - imgu, - &css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_TNR].mem[i], - size)) + if (imgu_css_dma_buffer_resize(imgu, + &css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_TNR].mem[i], + size)) goto out_of_memory; return 0; @@ -2036,7 +2035,7 @@ struct imgu_css_buffer *imgu_css_buf_dequeue(struct imgu_css *css) struct imgu_css_buffer, list); if (queue != b->queue || daddr != css_pipe->abi_buffers - [b->queue][b->queue_pos].daddr) { + [b->queue][b->queue_pos].daddr) { spin_unlock(&css_pipe->qlock); dev_err(css->dev, "dequeued bad buffer 0x%x\n", daddr); return ERR_PTR(-EIO); @@ -2169,7 +2168,7 @@ int imgu_css_set_parameters(struct imgu_css *css, unsigned int pipe, map = imgu_css_pool_last(&css_pipe->pool.acc, 1); /* user acc */ r = 
imgu_css_cfg_acc(css, pipe, use, acc, map->vaddr, - set_params ? &set_params->acc_param : NULL); + set_params ? &set_params->acc_param : NULL); if (r < 0) goto fail; } @@ -2298,13 +2297,11 @@ int imgu_css_set_parameters(struct imgu_css *css, unsigned int pipe, if (obgrid) imgu_css_pool_put(&css_pipe->pool.obgrid); if (vmem0) - imgu_css_pool_put( - &css_pipe->pool.binary_params_p - [IMGU_ABI_MEM_ISP_VMEM0]); + imgu_css_pool_put(&css_pipe->pool.binary_params_p + [IMGU_ABI_MEM_ISP_VMEM0]); if (dmem0) - imgu_css_pool_put( - &css_pipe->pool.binary_params_p - [IMGU_ABI_MEM_ISP_DMEM0]); + imgu_css_pool_put(&css_pipe->pool.binary_params_p + [IMGU_ABI_MEM_ISP_DMEM0]); fail_no_put: return r; diff --git a/drivers/staging/media/ipu3/ipu3-mmu.c b/drivers/staging/media/ipu3/ipu3-mmu.c index cb9bf5fb2..95ce34ad8 100644 --- a/drivers/staging/media/ipu3/ipu3-mmu.c +++ b/drivers/staging/media/ipu3/ipu3-mmu.c @@ -21,7 +21,7 @@ #include "ipu3-mmu.h" #define IPU3_PT_BITS 10 -#define IPU3_PT_PTES (1UL << IPU3_PT_BITS) +#define IPU3_PT_PTES (BIT(IPU3_PT_BITS)) #define IPU3_PT_SIZE (IPU3_PT_PTES << 2) #define IPU3_PT_ORDER (IPU3_PT_SIZE >> PAGE_SHIFT) diff --git a/drivers/staging/media/ipu3/ipu3-mmu.h b/drivers/staging/media/ipu3/ipu3-mmu.h index a5f0bca7e..990482f10 100644 --- a/drivers/staging/media/ipu3/ipu3-mmu.h +++ b/drivers/staging/media/ipu3/ipu3-mmu.h @@ -5,8 +5,10 @@ #ifndef __IPU3_MMU_H #define __IPU3_MMU_H +#include <linux/bitops.h> + #define IPU3_PAGE_SHIFT 12 -#define IPU3_PAGE_SIZE (1UL << IPU3_PAGE_SHIFT) +#define IPU3_PAGE_SIZE (BIT(IPU3_PAGE_SHIFT)) /** * struct imgu_mmu_info - Describes mmu geometry diff --git a/drivers/staging/media/ipu3/ipu3-v4l2.c b/drivers/staging/media/ipu3/ipu3-v4l2.c index 2f6041d34..8ebfcddab 100644 --- a/drivers/staging/media/ipu3/ipu3-v4l2.c +++ b/drivers/staging/media/ipu3/ipu3-v4l2.c @@ -245,9 +245,9 @@ static int imgu_subdev_set_selection(struct v4l2_subdev *sd, struct v4l2_rect *rect; dev_dbg(&imgu->pci_dev->dev, - "set subdev %u sel which 
%u target 0x%4x rect [%ux%u]", - imgu_sd->pipe, sel->which, sel->target, - sel->r.width, sel->r.height); + "set subdev %u sel which %u target 0x%4x rect [%ux%u]", + imgu_sd->pipe, sel->which, sel->target, + sel->r.width, sel->r.height); if (sel->pad != IMGU_NODE_IN) return -EINVAL; @@ -288,7 +288,7 @@ static int imgu_link_setup(struct media_entity *entity, WARN_ON(pad >= IMGU_NODE_NUM); dev_dbg(&imgu->pci_dev->dev, "pipe %u pad %u is %s", pipe, pad, - str_enabled_disabled(flags & MEDIA_LNK_FL_ENABLED)); + str_enabled_disabled(flags & MEDIA_LNK_FL_ENABLED)); imgu_pipe = &imgu->imgu_pipe[pipe]; imgu_pipe->nodes[pad].enabled = flags & MEDIA_LNK_FL_ENABLED; @@ -303,7 +303,7 @@ static int imgu_link_setup(struct media_entity *entity, __clear_bit(pipe, imgu->css.enabled_pipes); dev_dbg(&imgu->pci_dev->dev, "pipe %u is %s", pipe, - str_enabled_disabled(flags & MEDIA_LNK_FL_ENABLED)); + str_enabled_disabled(flags & MEDIA_LNK_FL_ENABLED)); return 0; } @@ -750,7 +750,6 @@ static int imgu_fmt(struct imgu_device *imgu, unsigned int pipe, int node, } else { fmts[i] = &imgu_pipe->nodes[inode].vdev_fmt.fmt.pix_mp; } - } if (!try) { diff --git a/drivers/staging/media/ipu3/ipu3.c b/drivers/staging/media/ipu3/ipu3.c index bdf5a4577..fe343d368 100644 --- a/drivers/staging/media/ipu3/ipu3.c +++ b/drivers/staging/media/ipu3/ipu3.c @@ -151,7 +151,7 @@ static int imgu_dummybufs_init(struct imgu_device *imgu, unsigned int pipe) /* May be called from atomic context */ static struct imgu_css_buffer *imgu_dummybufs_get(struct imgu_device *imgu, - int queue, unsigned int pipe) + int queue, unsigned int pipe) { unsigned int i; struct imgu_media_pipe *imgu_pipe = &imgu->imgu_pipe[pipe]; @@ -556,8 +556,7 @@ static irqreturn_t imgu_isr_threaded(int irq, void *imgu_ptr) buf->vid_buf.vbb.vb2_buf.timestamp = ns; buf->vid_buf.vbb.field = V4L2_FIELD_NONE; buf->vid_buf.vbb.sequence = - atomic_inc_return( - &imgu_pipe->nodes[node].sequence); + atomic_inc_return(&imgu_pipe->nodes[node].sequence); 
dev_dbg(&imgu->pci_dev->dev, "vb2 buffer sequence %d", buf->vid_buf.vbb.sequence); } @@ -774,7 +773,7 @@ static int __maybe_unused imgu_suspend(struct device *dev) synchronize_irq(pci_dev->irq); /* Wait until all buffers in CSS are done. */ if (!wait_event_timeout(imgu->buf_drain_wq, - imgu_css_queue_empty(&imgu->css), msecs_to_jiffies(1000))) + imgu_css_queue_empty(&imgu->css), msecs_to_jiffies(1000))) dev_err(dev, "wait buffer drain timeout.\n"); imgu_css_stop_streaming(&imgu->css); -- 2.51.0
The previous patch has now been separated into four smaller ones, each one fixing a specific type of checkpatch.pl issue.

Bogdan Sandu (4):
  media: ipu3: fix alignment
  media: ipu3: use tabs
  media: ipu3: avoid ending lines with parenthesis
  media: ipu3: use BIT()

 drivers/staging/media/ipu3/ipu3-css.c  | 39 ++++++++++++--------------
 drivers/staging/media/ipu3/ipu3-mmu.c  |  2 +-
 drivers/staging/media/ipu3/ipu3-mmu.h  |  4 ++-
 drivers/staging/media/ipu3/ipu3-v4l2.c | 11 ++++----
 drivers/staging/media/ipu3/ipu3.c      |  7 ++---
 5 files changed, 30 insertions(+), 33 deletions(-)

-- 
2.51.0
{ "author": "Bogdan Sandu <bogdanelsandu2011@gmail.com>", "date": "Mon, 2 Feb 2026 19:50:29 +0200", "thread_id": "20260202175033.8640-2-bogdanelsandu2011@gmail.com.mbox.gz" }
lkml
[PATCH] Cleanup ipu3 driver
Fix alignment with parentheses. Signed-off-by: Bogdan Sandu <bogdanelsandu2011@gmail.com> --- drivers/staging/media/ipu3/ipu3-css.c | 22 +++++++++++----------- drivers/staging/media/ipu3/ipu3-v4l2.c | 11 +++++------ drivers/staging/media/ipu3/ipu3.c | 4 ++-- 3 files changed, 18 insertions(+), 19 deletions(-) diff --git a/drivers/staging/media/ipu3/ipu3-css.c b/drivers/staging/media/ipu3/ipu3-css.c index 777cac1c2..145501e90 100644 --- a/drivers/staging/media/ipu3/ipu3-css.c +++ b/drivers/staging/media/ipu3/ipu3-css.c @@ -118,7 +118,8 @@ static const struct { /* Initialize queue based on given format, adjust format as needed */ static int imgu_css_queue_init(struct imgu_css_queue *queue, - struct v4l2_pix_format_mplane *fmt, u32 flags) + struct v4l2_pix_format_mplane *fmt, + u32 flags) { struct v4l2_pix_format_mplane *const f = &queue->fmt.mpix; unsigned int i; @@ -1241,6 +1242,7 @@ static int imgu_css_binary_setup(struct imgu_css *css, unsigned int pipe) css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].height = ALIGN(css_pipe->rect[IPU3_CSS_RECT_BDS].height, IMGU_DVS_BLOCK_H) + 2 * IMGU_GDC_BUF_Y; + h = css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].height; w = ALIGN(css_pipe->rect[IPU3_CSS_RECT_BDS].width, 2 * IPU3_UAPI_ISP_VEC_ELEMS) + 2 * IMGU_GDC_BUF_X; @@ -1248,10 +1250,9 @@ static int imgu_css_binary_setup(struct imgu_css *css, unsigned int pipe) css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].bytesperpixel * w; size = w * h * BYPC + (w / 2) * (h / 2) * BYPC * 2; for (i = 0; i < IPU3_CSS_AUX_FRAMES; i++) - if (imgu_css_dma_buffer_resize( - imgu, - &css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].mem[i], - size)) + if (imgu_css_dma_buffer_resize(imgu, + &css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].mem[i], + size)) goto out_of_memory; /* TNR frames for temporal noise reduction, FRAME_FORMAT_YUV_LINE */ @@ -1269,10 +1270,9 @@ static int imgu_css_binary_setup(struct imgu_css *css, unsigned int pipe) h = css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_TNR].height; size = w * ALIGN(h 
* 3 / 2 + 3, 2); /* +3 for vf_pp prefetch */ for (i = 0; i < IPU3_CSS_AUX_FRAMES; i++) - if (imgu_css_dma_buffer_resize( - imgu, - &css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_TNR].mem[i], - size)) + if (imgu_css_dma_buffer_resize(imgu, + &css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_TNR].mem[i], + size)) goto out_of_memory; return 0; @@ -2036,7 +2036,7 @@ struct imgu_css_buffer *imgu_css_buf_dequeue(struct imgu_css *css) struct imgu_css_buffer, list); if (queue != b->queue || daddr != css_pipe->abi_buffers - [b->queue][b->queue_pos].daddr) { + [b->queue][b->queue_pos].daddr) { spin_unlock(&css_pipe->qlock); dev_err(css->dev, "dequeued bad buffer 0x%x\n", daddr); return ERR_PTR(-EIO); @@ -2169,7 +2169,7 @@ int imgu_css_set_parameters(struct imgu_css *css, unsigned int pipe, map = imgu_css_pool_last(&css_pipe->pool.acc, 1); /* user acc */ r = imgu_css_cfg_acc(css, pipe, use, acc, map->vaddr, - set_params ? &set_params->acc_param : NULL); + set_params ? &set_params->acc_param : NULL); if (r < 0) goto fail; } diff --git a/drivers/staging/media/ipu3/ipu3-v4l2.c b/drivers/staging/media/ipu3/ipu3-v4l2.c index 2f6041d34..8ebfcddab 100644 --- a/drivers/staging/media/ipu3/ipu3-v4l2.c +++ b/drivers/staging/media/ipu3/ipu3-v4l2.c @@ -245,9 +245,9 @@ static int imgu_subdev_set_selection(struct v4l2_subdev *sd, struct v4l2_rect *rect; dev_dbg(&imgu->pci_dev->dev, - "set subdev %u sel which %u target 0x%4x rect [%ux%u]", - imgu_sd->pipe, sel->which, sel->target, - sel->r.width, sel->r.height); + "set subdev %u sel which %u target 0x%4x rect [%ux%u]", + imgu_sd->pipe, sel->which, sel->target, + sel->r.width, sel->r.height); if (sel->pad != IMGU_NODE_IN) return -EINVAL; @@ -288,7 +288,7 @@ static int imgu_link_setup(struct media_entity *entity, WARN_ON(pad >= IMGU_NODE_NUM); dev_dbg(&imgu->pci_dev->dev, "pipe %u pad %u is %s", pipe, pad, - str_enabled_disabled(flags & MEDIA_LNK_FL_ENABLED)); + str_enabled_disabled(flags & MEDIA_LNK_FL_ENABLED)); imgu_pipe = &imgu->imgu_pipe[pipe]; 
imgu_pipe->nodes[pad].enabled = flags & MEDIA_LNK_FL_ENABLED; @@ -303,7 +303,7 @@ static int imgu_link_setup(struct media_entity *entity, __clear_bit(pipe, imgu->css.enabled_pipes); dev_dbg(&imgu->pci_dev->dev, "pipe %u is %s", pipe, - str_enabled_disabled(flags & MEDIA_LNK_FL_ENABLED)); + str_enabled_disabled(flags & MEDIA_LNK_FL_ENABLED)); return 0; } @@ -750,7 +750,6 @@ static int imgu_fmt(struct imgu_device *imgu, unsigned int pipe, int node, } else { fmts[i] = &imgu_pipe->nodes[inode].vdev_fmt.fmt.pix_mp; } - } if (!try) { diff --git a/drivers/staging/media/ipu3/ipu3.c b/drivers/staging/media/ipu3/ipu3.c index bdf5a4577..c33186208 100644 --- a/drivers/staging/media/ipu3/ipu3.c +++ b/drivers/staging/media/ipu3/ipu3.c @@ -151,7 +151,7 @@ static int imgu_dummybufs_init(struct imgu_device *imgu, unsigned int pipe) /* May be called from atomic context */ static struct imgu_css_buffer *imgu_dummybufs_get(struct imgu_device *imgu, - int queue, unsigned int pipe) + int queue, unsigned int pipe) { unsigned int i; struct imgu_media_pipe *imgu_pipe = &imgu->imgu_pipe[pipe]; @@ -774,7 +774,7 @@ static int __maybe_unused imgu_suspend(struct device *dev) synchronize_irq(pci_dev->irq); /* Wait until all buffers in CSS are done. */ if (!wait_event_timeout(imgu->buf_drain_wq, - imgu_css_queue_empty(&imgu->css), msecs_to_jiffies(1000))) + imgu_css_queue_empty(&imgu->css), msecs_to_jiffies(1000))) dev_err(dev, "wait buffer drain timeout.\n"); imgu_css_stop_streaming(&imgu->css); -- 2.51.0
{ "author": "Bogdan Sandu <bogdanelsandu2011@gmail.com>", "date": "Mon, 2 Feb 2026 19:50:30 +0200", "thread_id": "20260202175033.8640-2-bogdanelsandu2011@gmail.com.mbox.gz" }
lkml
[PATCH] Cleanup ipu3 driver
Use tabs instead of spaces. Signed-off-by: Bogdan Sandu <bogdanelsandu2011@gmail.com> --- drivers/staging/media/ipu3/ipu3-css.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/staging/media/ipu3/ipu3-css.c b/drivers/staging/media/ipu3/ipu3-css.c index 145501e90..e990eb5b3 100644 --- a/drivers/staging/media/ipu3/ipu3-css.c +++ b/drivers/staging/media/ipu3/ipu3-css.c @@ -1034,8 +1034,8 @@ static int imgu_css_pipeline_init(struct imgu_css *css, unsigned int pipe) 3 * cfg_dvs->num_horizontal_blocks / 2 * cfg_dvs->num_vertical_blocks) || imgu_css_pool_init(imgu, &css_pipe->pool.obgrid, - imgu_css_fw_obgrid_size( - &css->fwp->binary_header[css_pipe->bindex]))) + imgu_css_fw_obgrid_size + (&css->fwp->binary_header[css_pipe->bindex]))) goto out_of_memory; for (i = 0; i < IMGU_ABI_NUM_MEMORIES; i++) -- 2.51.0
{ "author": "Bogdan Sandu <bogdanelsandu2011@gmail.com>", "date": "Mon, 2 Feb 2026 19:50:31 +0200", "thread_id": "20260202175033.8640-2-bogdanelsandu2011@gmail.com.mbox.gz" }
lkml
[PATCH] Cleanup ipu3 driver
Clean up warnings generated by ./scripts/checkpatch.pl regarding the ipu3 driver at /drivers/staging/media/ipu3 More specifically, the following files have been affected: ipu3-css.c, ipu3-mmu.c, ipu3-mmu.h, ipu3-v4l2.c, ipu3.c, ipu3.h Signed-off-by: Bogdan Sandu <bogdanelsandu2011@gmail.com> --- drivers/staging/media/ipu3/ipu3-css.c | 39 ++++++++++++-------------- drivers/staging/media/ipu3/ipu3-mmu.c | 2 +- drivers/staging/media/ipu3/ipu3-mmu.h | 4 ++- drivers/staging/media/ipu3/ipu3-v4l2.c | 11 ++++---- drivers/staging/media/ipu3/ipu3.c | 7 ++--- 5 files changed, 30 insertions(+), 33 deletions(-) diff --git a/drivers/staging/media/ipu3/ipu3-css.c b/drivers/staging/media/ipu3/ipu3-css.c index 777cac1c2..832581547 100644 --- a/drivers/staging/media/ipu3/ipu3-css.c +++ b/drivers/staging/media/ipu3/ipu3-css.c @@ -118,7 +118,8 @@ static const struct { /* Initialize queue based on given format, adjust format as needed */ static int imgu_css_queue_init(struct imgu_css_queue *queue, - struct v4l2_pix_format_mplane *fmt, u32 flags) + struct v4l2_pix_format_mplane *fmt, + u32 flags) { struct v4l2_pix_format_mplane *const f = &queue->fmt.mpix; unsigned int i; @@ -1033,8 +1034,8 @@ static int imgu_css_pipeline_init(struct imgu_css *css, unsigned int pipe) 3 * cfg_dvs->num_horizontal_blocks / 2 * cfg_dvs->num_vertical_blocks) || imgu_css_pool_init(imgu, &css_pipe->pool.obgrid, - imgu_css_fw_obgrid_size( - &css->fwp->binary_header[css_pipe->bindex]))) + imgu_css_fw_obgrid_size + (&css->fwp->binary_header[css_pipe->bindex]))) goto out_of_memory; for (i = 0; i < IMGU_ABI_NUM_MEMORIES; i++) @@ -1225,8 +1226,7 @@ static int imgu_css_binary_setup(struct imgu_css *css, unsigned int pipe) for (j = IMGU_ABI_PARAM_CLASS_CONFIG; j < IMGU_ABI_PARAM_CLASS_NUM; j++) for (i = 0; i < IMGU_ABI_NUM_MEMORIES; i++) { - if (imgu_css_dma_buffer_resize( - imgu, + if (imgu_css_dma_buffer_resize(imgu, &css_pipe->binary_params_cs[j - 1][i], bi->info.isp.sp.mem_initializers.params[j][i].size)) goto 
out_of_memory; @@ -1241,6 +1241,7 @@ static int imgu_css_binary_setup(struct imgu_css *css, unsigned int pipe) css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].height = ALIGN(css_pipe->rect[IPU3_CSS_RECT_BDS].height, IMGU_DVS_BLOCK_H) + 2 * IMGU_GDC_BUF_Y; + h = css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].height; w = ALIGN(css_pipe->rect[IPU3_CSS_RECT_BDS].width, 2 * IPU3_UAPI_ISP_VEC_ELEMS) + 2 * IMGU_GDC_BUF_X; @@ -1248,10 +1249,9 @@ static int imgu_css_binary_setup(struct imgu_css *css, unsigned int pipe) css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].bytesperpixel * w; size = w * h * BYPC + (w / 2) * (h / 2) * BYPC * 2; for (i = 0; i < IPU3_CSS_AUX_FRAMES; i++) - if (imgu_css_dma_buffer_resize( - imgu, - &css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].mem[i], - size)) + if (imgu_css_dma_buffer_resize(imgu, + &css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].mem[i], + size)) goto out_of_memory; /* TNR frames for temporal noise reduction, FRAME_FORMAT_YUV_LINE */ @@ -1269,10 +1269,9 @@ static int imgu_css_binary_setup(struct imgu_css *css, unsigned int pipe) h = css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_TNR].height; size = w * ALIGN(h * 3 / 2 + 3, 2); /* +3 for vf_pp prefetch */ for (i = 0; i < IPU3_CSS_AUX_FRAMES; i++) - if (imgu_css_dma_buffer_resize( - imgu, - &css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_TNR].mem[i], - size)) + if (imgu_css_dma_buffer_resize(imgu, + &css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_TNR].mem[i], + size)) goto out_of_memory; return 0; @@ -2036,7 +2035,7 @@ struct imgu_css_buffer *imgu_css_buf_dequeue(struct imgu_css *css) struct imgu_css_buffer, list); if (queue != b->queue || daddr != css_pipe->abi_buffers - [b->queue][b->queue_pos].daddr) { + [b->queue][b->queue_pos].daddr) { spin_unlock(&css_pipe->qlock); dev_err(css->dev, "dequeued bad buffer 0x%x\n", daddr); return ERR_PTR(-EIO); @@ -2169,7 +2168,7 @@ int imgu_css_set_parameters(struct imgu_css *css, unsigned int pipe, map = imgu_css_pool_last(&css_pipe->pool.acc, 1); /* user acc */ r = 
imgu_css_cfg_acc(css, pipe, use, acc, map->vaddr, - set_params ? &set_params->acc_param : NULL); + set_params ? &set_params->acc_param : NULL); if (r < 0) goto fail; } @@ -2298,13 +2297,11 @@ int imgu_css_set_parameters(struct imgu_css *css, unsigned int pipe, if (obgrid) imgu_css_pool_put(&css_pipe->pool.obgrid); if (vmem0) - imgu_css_pool_put( - &css_pipe->pool.binary_params_p - [IMGU_ABI_MEM_ISP_VMEM0]); + imgu_css_pool_put(&css_pipe->pool.binary_params_p + [IMGU_ABI_MEM_ISP_VMEM0]); if (dmem0) - imgu_css_pool_put( - &css_pipe->pool.binary_params_p - [IMGU_ABI_MEM_ISP_DMEM0]); + imgu_css_pool_put(&css_pipe->pool.binary_params_p + [IMGU_ABI_MEM_ISP_DMEM0]); fail_no_put: return r; diff --git a/drivers/staging/media/ipu3/ipu3-mmu.c b/drivers/staging/media/ipu3/ipu3-mmu.c index cb9bf5fb2..95ce34ad8 100644 --- a/drivers/staging/media/ipu3/ipu3-mmu.c +++ b/drivers/staging/media/ipu3/ipu3-mmu.c @@ -21,7 +21,7 @@ #include "ipu3-mmu.h" #define IPU3_PT_BITS 10 -#define IPU3_PT_PTES (1UL << IPU3_PT_BITS) +#define IPU3_PT_PTES (BIT(IPU3_PT_BITS)) #define IPU3_PT_SIZE (IPU3_PT_PTES << 2) #define IPU3_PT_ORDER (IPU3_PT_SIZE >> PAGE_SHIFT) diff --git a/drivers/staging/media/ipu3/ipu3-mmu.h b/drivers/staging/media/ipu3/ipu3-mmu.h index a5f0bca7e..990482f10 100644 --- a/drivers/staging/media/ipu3/ipu3-mmu.h +++ b/drivers/staging/media/ipu3/ipu3-mmu.h @@ -5,8 +5,10 @@ #ifndef __IPU3_MMU_H #define __IPU3_MMU_H +#include <linux/bitops.h> + #define IPU3_PAGE_SHIFT 12 -#define IPU3_PAGE_SIZE (1UL << IPU3_PAGE_SHIFT) +#define IPU3_PAGE_SIZE (BIT(IPU3_PAGE_SHIFT)) /** * struct imgu_mmu_info - Describes mmu geometry diff --git a/drivers/staging/media/ipu3/ipu3-v4l2.c b/drivers/staging/media/ipu3/ipu3-v4l2.c index 2f6041d34..8ebfcddab 100644 --- a/drivers/staging/media/ipu3/ipu3-v4l2.c +++ b/drivers/staging/media/ipu3/ipu3-v4l2.c @@ -245,9 +245,9 @@ static int imgu_subdev_set_selection(struct v4l2_subdev *sd, struct v4l2_rect *rect; dev_dbg(&imgu->pci_dev->dev, - "set subdev %u sel which 
%u target 0x%4x rect [%ux%u]", - imgu_sd->pipe, sel->which, sel->target, - sel->r.width, sel->r.height); + "set subdev %u sel which %u target 0x%4x rect [%ux%u]", + imgu_sd->pipe, sel->which, sel->target, + sel->r.width, sel->r.height); if (sel->pad != IMGU_NODE_IN) return -EINVAL; @@ -288,7 +288,7 @@ static int imgu_link_setup(struct media_entity *entity, WARN_ON(pad >= IMGU_NODE_NUM); dev_dbg(&imgu->pci_dev->dev, "pipe %u pad %u is %s", pipe, pad, - str_enabled_disabled(flags & MEDIA_LNK_FL_ENABLED)); + str_enabled_disabled(flags & MEDIA_LNK_FL_ENABLED)); imgu_pipe = &imgu->imgu_pipe[pipe]; imgu_pipe->nodes[pad].enabled = flags & MEDIA_LNK_FL_ENABLED; @@ -303,7 +303,7 @@ static int imgu_link_setup(struct media_entity *entity, __clear_bit(pipe, imgu->css.enabled_pipes); dev_dbg(&imgu->pci_dev->dev, "pipe %u is %s", pipe, - str_enabled_disabled(flags & MEDIA_LNK_FL_ENABLED)); + str_enabled_disabled(flags & MEDIA_LNK_FL_ENABLED)); return 0; } @@ -750,7 +750,6 @@ static int imgu_fmt(struct imgu_device *imgu, unsigned int pipe, int node, } else { fmts[i] = &imgu_pipe->nodes[inode].vdev_fmt.fmt.pix_mp; } - } if (!try) { diff --git a/drivers/staging/media/ipu3/ipu3.c b/drivers/staging/media/ipu3/ipu3.c index bdf5a4577..fe343d368 100644 --- a/drivers/staging/media/ipu3/ipu3.c +++ b/drivers/staging/media/ipu3/ipu3.c @@ -151,7 +151,7 @@ static int imgu_dummybufs_init(struct imgu_device *imgu, unsigned int pipe) /* May be called from atomic context */ static struct imgu_css_buffer *imgu_dummybufs_get(struct imgu_device *imgu, - int queue, unsigned int pipe) + int queue, unsigned int pipe) { unsigned int i; struct imgu_media_pipe *imgu_pipe = &imgu->imgu_pipe[pipe]; @@ -556,8 +556,7 @@ static irqreturn_t imgu_isr_threaded(int irq, void *imgu_ptr) buf->vid_buf.vbb.vb2_buf.timestamp = ns; buf->vid_buf.vbb.field = V4L2_FIELD_NONE; buf->vid_buf.vbb.sequence = - atomic_inc_return( - &imgu_pipe->nodes[node].sequence); + atomic_inc_return(&imgu_pipe->nodes[node].sequence); 
dev_dbg(&imgu->pci_dev->dev, "vb2 buffer sequence %d", buf->vid_buf.vbb.sequence); } @@ -774,7 +773,7 @@ static int __maybe_unused imgu_suspend(struct device *dev) synchronize_irq(pci_dev->irq); /* Wait until all buffers in CSS are done. */ if (!wait_event_timeout(imgu->buf_drain_wq, - imgu_css_queue_empty(&imgu->css), msecs_to_jiffies(1000))) + imgu_css_queue_empty(&imgu->css), msecs_to_jiffies(1000))) dev_err(dev, "wait buffer drain timeout.\n"); imgu_css_stop_streaming(&imgu->css); -- 2.51.0
Don't end a line with an open parenthesis. Signed-off-by: Bogdan Sandu <bogdanelsandu2011@gmail.com> --- drivers/staging/media/ipu3/ipu3-css.c | 13 +++++-------- drivers/staging/media/ipu3/ipu3.c | 3 +-- 2 files changed, 6 insertions(+), 10 deletions(-) diff --git a/drivers/staging/media/ipu3/ipu3-css.c b/drivers/staging/media/ipu3/ipu3-css.c index e990eb5b3..832581547 100644 --- a/drivers/staging/media/ipu3/ipu3-css.c +++ b/drivers/staging/media/ipu3/ipu3-css.c @@ -1226,8 +1226,7 @@ static int imgu_css_binary_setup(struct imgu_css *css, unsigned int pipe) for (j = IMGU_ABI_PARAM_CLASS_CONFIG; j < IMGU_ABI_PARAM_CLASS_NUM; j++) for (i = 0; i < IMGU_ABI_NUM_MEMORIES; i++) { - if (imgu_css_dma_buffer_resize( - imgu, + if (imgu_css_dma_buffer_resize(imgu, &css_pipe->binary_params_cs[j - 1][i], bi->info.isp.sp.mem_initializers.params[j][i].size)) goto out_of_memory; @@ -2298,13 +2297,11 @@ int imgu_css_set_parameters(struct imgu_css *css, unsigned int pipe, if (obgrid) imgu_css_pool_put(&css_pipe->pool.obgrid); if (vmem0) - imgu_css_pool_put( - &css_pipe->pool.binary_params_p - [IMGU_ABI_MEM_ISP_VMEM0]); + imgu_css_pool_put(&css_pipe->pool.binary_params_p + [IMGU_ABI_MEM_ISP_VMEM0]); if (dmem0) - imgu_css_pool_put( - &css_pipe->pool.binary_params_p - [IMGU_ABI_MEM_ISP_DMEM0]); + imgu_css_pool_put(&css_pipe->pool.binary_params_p + [IMGU_ABI_MEM_ISP_DMEM0]); fail_no_put: return r; diff --git a/drivers/staging/media/ipu3/ipu3.c b/drivers/staging/media/ipu3/ipu3.c index c33186208..fe343d368 100644 --- a/drivers/staging/media/ipu3/ipu3.c +++ b/drivers/staging/media/ipu3/ipu3.c @@ -556,8 +556,7 @@ static irqreturn_t imgu_isr_threaded(int irq, void *imgu_ptr) buf->vid_buf.vbb.vb2_buf.timestamp = ns; buf->vid_buf.vbb.field = V4L2_FIELD_NONE; buf->vid_buf.vbb.sequence = - atomic_inc_return( - &imgu_pipe->nodes[node].sequence); + atomic_inc_return(&imgu_pipe->nodes[node].sequence); dev_dbg(&imgu->pci_dev->dev, "vb2 buffer sequence %d", buf->vid_buf.vbb.sequence); } -- 2.51.0
{ "author": "Bogdan Sandu <bogdanelsandu2011@gmail.com>", "date": "Mon, 2 Feb 2026 19:50:32 +0200", "thread_id": "20260202175033.8640-2-bogdanelsandu2011@gmail.com.mbox.gz" }
lkml
[PATCH] Cleanup ipu3 driver
Prefer BIT() macro over manual bitshift. Signed-off-by: Bogdan Sandu <bogdanelsandu2011@gmail.com> --- drivers/staging/media/ipu3/ipu3-mmu.c | 2 +- drivers/staging/media/ipu3/ipu3-mmu.h | 4 +++- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/drivers/staging/media/ipu3/ipu3-mmu.c b/drivers/staging/media/ipu3/ipu3-mmu.c index cb9bf5fb2..95ce34ad8 100644 --- a/drivers/staging/media/ipu3/ipu3-mmu.c +++ b/drivers/staging/media/ipu3/ipu3-mmu.c @@ -21,7 +21,7 @@ #include "ipu3-mmu.h" #define IPU3_PT_BITS 10 -#define IPU3_PT_PTES (1UL << IPU3_PT_BITS) +#define IPU3_PT_PTES (BIT(IPU3_PT_BITS)) #define IPU3_PT_SIZE (IPU3_PT_PTES << 2) #define IPU3_PT_ORDER (IPU3_PT_SIZE >> PAGE_SHIFT) diff --git a/drivers/staging/media/ipu3/ipu3-mmu.h b/drivers/staging/media/ipu3/ipu3-mmu.h index a5f0bca7e..990482f10 100644 --- a/drivers/staging/media/ipu3/ipu3-mmu.h +++ b/drivers/staging/media/ipu3/ipu3-mmu.h @@ -5,8 +5,10 @@ #ifndef __IPU3_MMU_H #define __IPU3_MMU_H +#include <linux/bitops.h> + #define IPU3_PAGE_SHIFT 12 -#define IPU3_PAGE_SIZE (1UL << IPU3_PAGE_SHIFT) +#define IPU3_PAGE_SIZE (BIT(IPU3_PAGE_SHIFT)) /** * struct imgu_mmu_info - Describes mmu geometry -- 2.51.0
{ "author": "Bogdan Sandu <bogdanelsandu2011@gmail.com>", "date": "Mon, 2 Feb 2026 19:50:33 +0200", "thread_id": "20260202175033.8640-2-bogdanelsandu2011@gmail.com.mbox.gz" }
lkml
[RFC 00/12] mm: PUD (1GB) THP implementation
This is an RFC series to implement 1GB PUD-level THPs, allowing applications to benefit from reduced TLB pressure without requiring hugetlbfs. The patches are based on top of f9b74c13b773b7c7e4920d7bc214ea3d5f37b422 from mm-stable (6.19-rc6). Motivation: Why 1GB THP over hugetlbfs? ======================================= While hugetlbfs provides 1GB huge pages today, it has significant limitations that make it unsuitable for many workloads: 1. Static Reservation: hugetlbfs requires pre-allocating huge pages at boot or runtime, taking memory away. This requires capacity planning, administrative overhead, and makes workload orchestration much more complex, especially when colocating with workloads that don't use hugetlbfs. 2. No Fallback: If a 1GB huge page cannot be allocated, hugetlbfs fails rather than falling back to smaller pages. This makes it fragile under memory pressure. 3. No Splitting: hugetlbfs pages cannot be split when only partial access is needed, leading to memory waste and preventing partial reclaim. 4. Memory Accounting: hugetlbfs memory is accounted separately and cannot be easily shared with regular memory pools. PUD THP solves these limitations by integrating 1GB pages into the existing THP infrastructure. Performance Results =================== Benchmark results of these patches on Intel Xeon Platinum 8321HC: Test: True Random Memory Access [1] test of 4GB memory region with pointer chasing workload (4M random pointer dereferences through memory): | Metric | PUD THP (1GB) | PMD THP (2MB) | Change | |-------------------|---------------|---------------|--------------| | Memory access | 88 ms | 134 ms | 34% faster | | Page fault time | 898 ms | 331 ms | 2.7x slower | Page faulting 1G pages is 2.7x slower (Allocating 1G pages is hard :)). For long-running workloads this will be a one-off cost, and the 34% improvement in access latency provides significant benefit. ARM with 64K PAGE_SIZE supports 512M PMD THPs.
At Meta, we have a CPU-bound workload running on a large number of ARM servers (256G). I enabled the 512M THP settings to always for 100 servers in production (didn't really have high expectations :)). The average memory used for the workload increased from 217G to 233G. The amount of memory backed by 512M pages was 68G! The dTLB misses went down by 26% and the PID multiplier increased input by 5.9% (This is a very significant improvement in workload performance). A significant number of these THPs were faulted in at application start and were present across different VMAs. Of course getting these 512M pages is easier on ARM due to the bigger PAGE_SIZE and pageblock order. I am hoping that these patches for 1G THP can be used to provide similar benefits for x86. I expect workloads to fault them in at start time when there is plenty of free memory available. Previous attempt by Zi Yan ========================== Zi Yan attempted 1G THPs [2] in kernel version 5.11. There have been significant changes in the kernel since then, including folio conversion, the mTHP framework, ptdesc, rmap changes, etc. I found it easier to use the current PMD code as reference for making 1G PUD THP work. I am hoping Zi can provide guidance on these patches! Major Design Decisions ====================== 1. No shared 1G zero page: The memory cost would be quite significant! 2. Page Table Pre-deposit Strategy PMD THP deposits a single PTE page table. PUD THP deposits 512 PTE page tables (one for each potential PMD entry after split). We allocate a PMD page table and use its pmd_huge_pte list to store the deposited PTE tables. This ensures split operations don't fail due to page table allocation failures (at the cost of 2M per PUD THP). 3. Split to Base Pages When a PUD THP must be split (COW, partial unmap, mprotect), we split directly to base pages (262,144 PTEs). The ideal thing would be to split to 2M pages and then to 4K pages if needed.
However, this would require significant rmap and mapcount tracking changes. 4. COW and fork handling via split Copy-on-write and fork for PUD THP triggers a split to base pages, then uses existing PTE-level COW infrastructure. Getting another 1G region is hard and could fail, and if only a 4K page is written, copying 1G is a waste. Probably this should only be done on CoW and not fork? 5. Migration via split Split PUD to PTEs and migrate individual pages. It is going to be difficult to find 1G of contiguous memory to migrate to. Maybe it's better to not allow migration of PUDs at all? I am more tempted to not allow migration, but have kept splitting in this RFC. Reviewers guide =============== Most of the code is written by adapting from PMD code. For example, the PUD page fault path is very similar to the PMD one. The difference is no shared zero page and the page table deposit strategy. I think the easiest way to review this series is to compare with the PMD code. Test results ============ 1..7 # Starting 7 tests from 1 test cases. # RUN pud_thp.basic_allocation ... # pud_thp_test.c:169:basic_allocation:PUD THP allocated (anon_fault_alloc: 0 -> 1) # OK pud_thp.basic_allocation ok 1 pud_thp.basic_allocation # RUN pud_thp.read_write_access ... # OK pud_thp.read_write_access ok 2 pud_thp.read_write_access # RUN pud_thp.fork_cow ... # pud_thp_test.c:236:fork_cow:Fork COW completed (thp_split_pud: 0 -> 1) # OK pud_thp.fork_cow ok 3 pud_thp.fork_cow # RUN pud_thp.partial_munmap ... # pud_thp_test.c:267:partial_munmap:Partial munmap completed (thp_split_pud: 1 -> 2) # OK pud_thp.partial_munmap ok 4 pud_thp.partial_munmap # RUN pud_thp.mprotect_split ... # pud_thp_test.c:293:mprotect_split:mprotect split completed (thp_split_pud: 2 -> 3) # OK pud_thp.mprotect_split ok 5 pud_thp.mprotect_split # RUN pud_thp.reclaim_pageout ... # pud_thp_test.c:322:reclaim_pageout:Reclaim completed (thp_split_pud: 3 -> 4) # OK pud_thp.reclaim_pageout ok 6 pud_thp.reclaim_pageout # RUN pud_thp.migration_mbind ...
# pud_thp_test.c:356:migration_mbind:Migration completed (thp_split_pud: 4 -> 5) # OK pud_thp.migration_mbind ok 7 pud_thp.migration_mbind # PASSED: 7 / 7 tests passed. # Totals: pass:7 fail:0 xfail:0 xpass:0 skip:0 error:0 [1] https://gist.github.com/uarif1/bf279b2a01a536cda945ff9f40196a26 [2] https://lore.kernel.org/linux-mm/20210224223536.803765-1-zi.yan@sent.com/ Signed-off-by: Usama Arif <usamaarif642@gmail.com> Usama Arif (12): mm: add PUD THP ptdesc and rmap support mm/thp: add mTHP stats infrastructure for PUD THP mm: thp: add PUD THP allocation and fault handling mm: thp: implement PUD THP split to PTE level mm: thp: add reclaim and migration support for PUD THP selftests/mm: add PUD THP basic allocation test selftests/mm: add PUD THP read/write access test selftests/mm: add PUD THP fork COW test selftests/mm: add PUD THP partial munmap test selftests/mm: add PUD THP mprotect split test selftests/mm: add PUD THP reclaim test selftests/mm: add PUD THP migration test include/linux/huge_mm.h | 60 ++- include/linux/mm.h | 19 + include/linux/mm_types.h | 5 +- include/linux/pgtable.h | 8 + include/linux/rmap.h | 7 +- mm/huge_memory.c | 535 +++++++++++++++++++++- mm/internal.h | 3 + mm/memory.c | 8 +- mm/migrate.c | 17 + mm/page_vma_mapped.c | 35 ++ mm/pgtable-generic.c | 83 ++++ mm/rmap.c | 96 +++- mm/vmscan.c | 2 + tools/testing/selftests/mm/Makefile | 1 + tools/testing/selftests/mm/pud_thp_test.c | 360 +++++++++++++++ 15 files changed, 1197 insertions(+), 42 deletions(-) create mode 100644 tools/testing/selftests/mm/pud_thp_test.c -- 2.47.3
Extend the mTHP (multi-size THP) statistics infrastructure to support PUD-sized transparent huge pages. The mTHP framework tracks statistics for each supported THP size through per-order counters exposed via sysfs. To add PUD THP support, PUD_ORDER must be included in the set of tracked orders. With this change, PUD THP events (allocations, faults, splits, swaps) are tracked and exposed through the existing sysfs interface at /sys/kernel/mm/transparent_hugepage/hugepages-1048576kB/stats/. This provides visibility into PUD THP behavior for debugging and performance analysis. Signed-off-by: Usama Arif <usamaarif642@gmail.com> --- include/linux/huge_mm.h | 42 +++++++++++++++++++++++++++++++++++++---- mm/huge_memory.c | 3 ++- 2 files changed, 40 insertions(+), 5 deletions(-) diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h index e672e45bb9cc7..5509ba8555b6e 100644 --- a/include/linux/huge_mm.h +++ b/include/linux/huge_mm.h @@ -76,7 +76,13 @@ extern struct kobj_attribute thpsize_shmem_enabled_attr; * and including PMD_ORDER, except order-0 (which is not "huge") and order-1 * (which is a limitation of the THP implementation). */ -#define THP_ORDERS_ALL_ANON ((BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1))) +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD +#define THP_ORDERS_ALL_ANON_PUD BIT(PUD_ORDER) +#else +#define THP_ORDERS_ALL_ANON_PUD 0 +#endif +#define THP_ORDERS_ALL_ANON (((BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1))) | \ + THP_ORDERS_ALL_ANON_PUD) /* * Mask of all large folio orders supported for file THP. 
Folios in a DAX @@ -146,18 +152,46 @@ enum mthp_stat_item { }; #if defined(CONFIG_TRANSPARENT_HUGEPAGE) && defined(CONFIG_SYSFS) + +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD +#define MTHP_STAT_COUNT (PMD_ORDER + 2) +#define MTHP_STAT_PUD_INDEX (PMD_ORDER + 1) /* PUD uses last index */ +#else +#define MTHP_STAT_COUNT (PMD_ORDER + 1) +#endif + struct mthp_stat { - unsigned long stats[ilog2(MAX_PTRS_PER_PTE) + 1][__MTHP_STAT_COUNT]; + unsigned long stats[MTHP_STAT_COUNT][__MTHP_STAT_COUNT]; }; DECLARE_PER_CPU(struct mthp_stat, mthp_stats); +static inline int mthp_stat_order_to_index(int order) +{ +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD + if (order == PUD_ORDER) + return MTHP_STAT_PUD_INDEX; +#endif + return order; +} + static inline void mod_mthp_stat(int order, enum mthp_stat_item item, int delta) { - if (order <= 0 || order > PMD_ORDER) + int index; + + if (order <= 0) + return; + +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD + if (order != PUD_ORDER && order > PMD_ORDER) return; +#else + if (order > PMD_ORDER) + return; +#endif - this_cpu_add(mthp_stats.stats[order][item], delta); + index = mthp_stat_order_to_index(order); + this_cpu_add(mthp_stats.stats[index][item], delta); } static inline void count_mthp_stat(int order, enum mthp_stat_item item) diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 3128b3beedb0a..d033624d7e1f2 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -598,11 +598,12 @@ static unsigned long sum_mthp_stat(int order, enum mthp_stat_item item) { unsigned long sum = 0; int cpu; + int index = mthp_stat_order_to_index(order); for_each_possible_cpu(cpu) { struct mthp_stat *this = &per_cpu(mthp_stats, cpu); - sum += this->stats[order][item]; + sum += this->stats[index][item]; } return sum; -- 2.47.3
{ "author": "Usama Arif <usamaarif642@gmail.com>", "date": "Sun, 1 Feb 2026 16:50:19 -0800", "thread_id": "3561FD10-664D-42AA-8351-DE7D8D49D42E@nvidia.com.mbox.gz" }
lkml
[RFC 00/12] mm: PUD (1GB) THP implementation
This is an RFC series to implement 1GB PUD-level THPs, allowing applications
to benefit from reduced TLB pressure without requiring hugetlbfs. The patches
are based on top of f9b74c13b773b7c7e4920d7bc214ea3d5f37b422 from mm-stable
(6.19-rc6).

Motivation: Why 1GB THP over hugetlbfs?
=======================================

While hugetlbfs provides 1GB huge pages today, it has significant limitations
that make it unsuitable for many workloads:

1. Static Reservation: hugetlbfs requires pre-allocating huge pages at boot
   or runtime, taking memory away from the rest of the system. This requires
   capacity planning and administrative overhead, and makes workload
   orchestration much more complex, especially when colocating with workloads
   that don't use hugetlbfs.

2. No Fallback: If a 1GB huge page cannot be allocated, hugetlbfs fails
   rather than falling back to smaller pages. This makes it fragile under
   memory pressure.

3. No Splitting: hugetlbfs pages cannot be split when only partial access is
   needed, leading to memory waste and preventing partial reclaim.

4. Memory Accounting: hugetlbfs memory is accounted separately and cannot be
   easily shared with regular memory pools.

PUD THP solves these limitations by integrating 1GB pages into the existing
THP infrastructure.

Performance Results
===================

Benchmark results of these patches on an Intel Xeon Platinum 8321HC:

Test: True Random Memory Access [1] test of a 4GB memory region with a
pointer-chasing workload (4M random pointer dereferences through memory):

| Metric            | PUD THP (1GB) | PMD THP (2MB) | Change       |
|-------------------|---------------|---------------|--------------|
| Memory access     | 88 ms         | 134 ms        | 34% faster   |
| Page fault time   | 898 ms        | 331 ms        | 2.7x slower  |

Page faulting 1G pages is 2.7x slower (allocating 1G pages is hard :)). For
long-running workloads this will be a one-off cost, and the 34% improvement
in access latency provides significant benefit.

ARM with 64K PAGE_SIZE supports 512M PMD THPs.
At Meta, we have a CPU-bound workload running on a large number of ARM
servers (256G). I enabled the 512M THP setting to "always" for 100 servers
in production (didn't really have high expectations :)). The average memory
used for the workload increased from 217G to 233G. The amount of memory
backed by 512M pages was 68G! The dTLB misses went down by 26% and the PID
multiplier increased input by 5.9% (this is a very significant improvement
in workload performance). A significant number of these THPs were faulted in
at application start and were present across different VMAs. Of course,
getting these 512M pages is easier on ARM due to the bigger PAGE_SIZE and
pageblock order.

I am hoping that these patches for 1G THP can be used to provide similar
benefits for x86. I expect workloads to fault them in at start time when
there is plenty of free memory available.

Previous attempt by Zi Yan
==========================

Zi Yan attempted 1G THPs [2] in kernel version 5.11. There have been
significant changes in the kernel since then, including the folio
conversion, the mTHP framework, ptdesc, rmap changes, etc. I found it easier
to use the current PMD code as a reference for making 1G PUD THP work. I am
hoping Zi can provide guidance on these patches!

Major Design Decisions
======================

1. No shared 1G zero page: The memory cost would be quite significant!

2. Page Table Pre-deposit Strategy

   PMD THP deposits a single PTE page table. PUD THP deposits 512 PTE page
   tables (one for each potential PMD entry after split). We allocate a PMD
   page table and use its pmd_huge_pte list to store the deposited PTE
   tables. This ensures split operations don't fail due to page table
   allocation failures (at the cost of 2M per PUD THP).

3. Split to Base Pages

   When a PUD THP must be split (COW, partial unmap, mprotect), we split
   directly to base pages (262,144 PTEs). The ideal thing would be to split
   to 2M pages and then to 4K pages if needed.
   However, this would require significant rmap and mapcount tracking
   changes.

4. COW and fork handling via split

   Copy-on-write and fork for PUD THP trigger a split to base pages, then
   use the existing PTE-level COW infrastructure. Getting another 1G region
   is hard and could fail. If only a single 4K page is written, copying 1G
   is a waste. Probably this should only be done on CoW and not fork?

5. Migration via split

   Split the PUD to PTEs and migrate individual pages. It is going to be
   difficult to find 1G of contiguous memory to migrate to. Maybe it's
   better to not allow migration of PUDs at all? I am more tempted to not
   allow migration, but have kept splitting in this RFC.

Reviewers guide
===============

Most of the code is written by adapting from the PMD code. For example, the
PUD page fault path is very similar to the PMD one. The differences are the
lack of a shared zero page and the page table deposit strategy. I think the
easiest way to review this series is to compare with the PMD code.

Test results
============

1..7
# Starting 7 tests from 1 test cases.
# RUN pud_thp.basic_allocation ...
# pud_thp_test.c:169:basic_allocation:PUD THP allocated (anon_fault_alloc: 0 -> 1)
# OK pud_thp.basic_allocation
ok 1 pud_thp.basic_allocation
# RUN pud_thp.read_write_access ...
# OK pud_thp.read_write_access
ok 2 pud_thp.read_write_access
# RUN pud_thp.fork_cow ...
# pud_thp_test.c:236:fork_cow:Fork COW completed (thp_split_pud: 0 -> 1)
# OK pud_thp.fork_cow
ok 3 pud_thp.fork_cow
# RUN pud_thp.partial_munmap ...
# pud_thp_test.c:267:partial_munmap:Partial munmap completed (thp_split_pud: 1 -> 2)
# OK pud_thp.partial_munmap
ok 4 pud_thp.partial_munmap
# RUN pud_thp.mprotect_split ...
# pud_thp_test.c:293:mprotect_split:mprotect split completed (thp_split_pud: 2 -> 3)
# OK pud_thp.mprotect_split
ok 5 pud_thp.mprotect_split
# RUN pud_thp.reclaim_pageout ...
# pud_thp_test.c:322:reclaim_pageout:Reclaim completed (thp_split_pud: 3 -> 4)
# OK pud_thp.reclaim_pageout
ok 6 pud_thp.reclaim_pageout
# RUN pud_thp.migration_mbind ...
# pud_thp_test.c:356:migration_mbind:Migration completed (thp_split_pud: 4 -> 5)
# OK pud_thp.migration_mbind
ok 7 pud_thp.migration_mbind
# PASSED: 7 / 7 tests passed.
# Totals: pass:7 fail:0 xfail:0 xpass:0 skip:0 error:0

[1] https://gist.github.com/uarif1/bf279b2a01a536cda945ff9f40196a26
[2] https://lore.kernel.org/linux-mm/20210224223536.803765-1-zi.yan@sent.com/

Signed-off-by: Usama Arif <usamaarif642@gmail.com>

Usama Arif (12):
  mm: add PUD THP ptdesc and rmap support
  mm/thp: add mTHP stats infrastructure for PUD THP
  mm: thp: add PUD THP allocation and fault handling
  mm: thp: implement PUD THP split to PTE level
  mm: thp: add reclaim and migration support for PUD THP
  selftests/mm: add PUD THP basic allocation test
  selftests/mm: add PUD THP read/write access test
  selftests/mm: add PUD THP fork COW test
  selftests/mm: add PUD THP partial munmap test
  selftests/mm: add PUD THP mprotect split test
  selftests/mm: add PUD THP reclaim test
  selftests/mm: add PUD THP migration test

 include/linux/huge_mm.h                   |  60 ++-
 include/linux/mm.h                        |  19 +
 include/linux/mm_types.h                  |   5 +-
 include/linux/pgtable.h                   |   8 +
 include/linux/rmap.h                      |   7 +-
 mm/huge_memory.c                          | 535 +++++++++++++++++++++-
 mm/internal.h                             |   3 +
 mm/memory.c                               |   8 +-
 mm/migrate.c                              |  17 +
 mm/page_vma_mapped.c                      |  35 ++
 mm/pgtable-generic.c                      |  83 ++++
 mm/rmap.c                                 |  96 +++-
 mm/vmscan.c                               |   2 +
 tools/testing/selftests/mm/Makefile       |   1 +
 tools/testing/selftests/mm/pud_thp_test.c | 360 +++++++++++++++
 15 files changed, 1197 insertions(+), 42 deletions(-)
 create mode 100644 tools/testing/selftests/mm/pud_thp_test.c
--
2.47.3
For page table management, PUD THPs need to pre-deposit page tables that
will be used when the huge page is later split. When a PUD THP is allocated,
we cannot know in advance when or why it might need to be split (COW,
partial unmap, reclaim), but we need page tables ready for that eventuality.
Similar to how PMD THPs deposit a single PTE table, PUD THPs deposit a PMD
table which itself contains deposited PTE tables - a two-level deposit. This
commit adds the deposit/withdraw infrastructure and a new pud_huge_pmd field
in ptdesc to store the deposited PMD.

The deposited PMD tables are stored as a singly-linked stack using only
page->lru.next as the link pointer. A doubly-linked list using the standard
list_head mechanism would cause memory corruption: list_del() poisons both
lru.next (offset 8) and lru.prev (offset 16), but lru.prev overlaps with
ptdesc->pmd_huge_pte at offset 16. Since deposited PMD tables have their own
deposited PTE tables stored in pmd_huge_pte, poisoning lru.prev would
corrupt the PTE table list and cause crashes when withdrawing PTE tables
during split. PMD THPs don't have this problem because their deposited PTE
tables don't have sub-deposits. Using only lru.next avoids the overlap
entirely.

For reverse mapping, PUD THPs need the same rmap support that PMD THPs have.
The page_vma_mapped_walk() function is extended to recognize and handle
PUD-mapped folios during rmap traversal. A new TTU_SPLIT_HUGE_PUD flag tells
the unmap path to split PUD THPs before proceeding, since there is no
PUD-level migration entry format - the split converts the single PUD mapping
into individual PTE mappings that can be migrated or swapped normally.
Signed-off-by: Usama Arif <usamaarif642@gmail.com> --- include/linux/huge_mm.h | 5 +++ include/linux/mm.h | 19 ++++++++ include/linux/mm_types.h | 5 ++- include/linux/pgtable.h | 8 ++++ include/linux/rmap.h | 7 ++- mm/huge_memory.c | 8 ++++ mm/internal.h | 3 ++ mm/page_vma_mapped.c | 35 +++++++++++++++ mm/pgtable-generic.c | 83 ++++++++++++++++++++++++++++++++++ mm/rmap.c | 96 +++++++++++++++++++++++++++++++++++++--- 10 files changed, 260 insertions(+), 9 deletions(-) diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h index a4d9f964dfdea..e672e45bb9cc7 100644 --- a/include/linux/huge_mm.h +++ b/include/linux/huge_mm.h @@ -463,10 +463,15 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud, unsigned long address); #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD +void split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud, + unsigned long address); int change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma, pud_t *pudp, unsigned long addr, pgprot_t newprot, unsigned long cp_flags); #else +static inline void +split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud, + unsigned long address) {} static inline int change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma, pud_t *pudp, unsigned long addr, pgprot_t newprot, diff --git a/include/linux/mm.h b/include/linux/mm.h index ab2e7e30aef96..a15e18df0f771 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -3455,6 +3455,22 @@ static inline bool pagetable_pmd_ctor(struct mm_struct *mm, * considered ready to switch to split PUD locks yet; there may be places * which need to be converted from page_table_lock. 
*/ +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD +static inline struct page *pud_pgtable_page(pud_t *pud) +{ + unsigned long mask = ~(PTRS_PER_PUD * sizeof(pud_t) - 1); + + return virt_to_page((void *)((unsigned long)pud & mask)); +} + +static inline struct ptdesc *pud_ptdesc(pud_t *pud) +{ + return page_ptdesc(pud_pgtable_page(pud)); +} + +#define pud_huge_pmd(pud) (pud_ptdesc(pud)->pud_huge_pmd) +#endif + static inline spinlock_t *pud_lockptr(struct mm_struct *mm, pud_t *pud) { return &mm->page_table_lock; @@ -3471,6 +3487,9 @@ static inline spinlock_t *pud_lock(struct mm_struct *mm, pud_t *pud) static inline void pagetable_pud_ctor(struct ptdesc *ptdesc) { __pagetable_ctor(ptdesc); +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD + ptdesc->pud_huge_pmd = NULL; +#endif } static inline void pagetable_p4d_ctor(struct ptdesc *ptdesc) diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 78950eb8926dc..26a38490ae2e1 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -577,7 +577,10 @@ struct ptdesc { struct list_head pt_list; struct { unsigned long _pt_pad_1; - pgtable_t pmd_huge_pte; + union { + pgtable_t pmd_huge_pte; /* For PMD tables: deposited PTE */ + pgtable_t pud_huge_pmd; /* For PUD tables: deposited PMD list */ + }; }; }; unsigned long __page_mapping; diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h index 2f0dd3a4ace1a..3ce733c1d71a2 100644 --- a/include/linux/pgtable.h +++ b/include/linux/pgtable.h @@ -1168,6 +1168,14 @@ extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp); #define arch_needs_pgtable_deposit() (false) #endif +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD +extern void pgtable_trans_huge_pud_deposit(struct mm_struct *mm, pud_t *pudp, + pmd_t *pmd_table); +extern pmd_t *pgtable_trans_huge_pud_withdraw(struct mm_struct *mm, pud_t *pudp); +extern void pud_deposit_pte(pmd_t *pmd_table, pgtable_t pgtable); +extern pgtable_t pud_withdraw_pte(pmd_t *pmd_table); 
+#endif + #ifdef CONFIG_TRANSPARENT_HUGEPAGE /* * This is an implementation of pmdp_establish() that is only suitable for an diff --git a/include/linux/rmap.h b/include/linux/rmap.h index daa92a58585d9..08cd0a0eb8763 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -101,6 +101,7 @@ enum ttu_flags { * do a final flush if necessary */ TTU_RMAP_LOCKED = 0x80, /* do not grab rmap lock: * caller holds it */ + TTU_SPLIT_HUGE_PUD = 0x100, /* split huge PUD if any */ }; #ifdef CONFIG_MMU @@ -473,6 +474,8 @@ void folio_add_anon_rmap_ptes(struct folio *, struct page *, int nr_pages, folio_add_anon_rmap_ptes(folio, page, 1, vma, address, flags) void folio_add_anon_rmap_pmd(struct folio *, struct page *, struct vm_area_struct *, unsigned long address, rmap_t flags); +void folio_add_anon_rmap_pud(struct folio *, struct page *, + struct vm_area_struct *, unsigned long address, rmap_t flags); void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *, unsigned long address, rmap_t flags); void folio_add_file_rmap_ptes(struct folio *, struct page *, int nr_pages, @@ -933,6 +936,7 @@ struct page_vma_mapped_walk { pgoff_t pgoff; struct vm_area_struct *vma; unsigned long address; + pud_t *pud; pmd_t *pmd; pte_t *pte; spinlock_t *ptl; @@ -970,7 +974,7 @@ static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw) static inline void page_vma_mapped_walk_restart(struct page_vma_mapped_walk *pvmw) { - WARN_ON_ONCE(!pvmw->pmd && !pvmw->pte); + WARN_ON_ONCE(!pvmw->pud && !pvmw->pmd && !pvmw->pte); if (likely(pvmw->ptl)) spin_unlock(pvmw->ptl); @@ -978,6 +982,7 @@ page_vma_mapped_walk_restart(struct page_vma_mapped_walk *pvmw) WARN_ON_ONCE(1); pvmw->ptl = NULL; + pvmw->pud = NULL; pvmw->pmd = NULL; pvmw->pte = NULL; } diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 40cf59301c21a..3128b3beedb0a 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2933,6 +2933,14 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud, 
spin_unlock(ptl); mmu_notifier_invalidate_range_end(&range); } + +void split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud, + unsigned long address) +{ + VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PUD_SIZE)); + if (pud_trans_huge(*pud)) + __split_huge_pud_locked(vma, pud, address); +} #else void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud, unsigned long address) diff --git a/mm/internal.h b/mm/internal.h index 9ee336aa03656..21d5c00f638dc 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -545,6 +545,9 @@ int user_proactive_reclaim(char *buf, * in mm/rmap.c: */ pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address); +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD +pud_t *mm_find_pud(struct mm_struct *mm, unsigned long address); +#endif /* * in mm/page_alloc.c diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c index b38a1d00c971b..d31eafba38041 100644 --- a/mm/page_vma_mapped.c +++ b/mm/page_vma_mapped.c @@ -146,6 +146,18 @@ static bool check_pmd(unsigned long pfn, struct page_vma_mapped_walk *pvmw) return true; } +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD +/* Returns true if the two ranges overlap. Careful to not overflow. 
*/ +static bool check_pud(unsigned long pfn, struct page_vma_mapped_walk *pvmw) +{ + if ((pfn + HPAGE_PUD_NR - 1) < pvmw->pfn) + return false; + if (pfn > pvmw->pfn + pvmw->nr_pages - 1) + return false; + return true; +} +#endif + static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size) { pvmw->address = (pvmw->address + size) & ~(size - 1); @@ -188,6 +200,10 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw) pud_t *pud; pmd_t pmde; + /* The only possible pud mapping has been handled on last iteration */ + if (pvmw->pud && !pvmw->pmd) + return not_found(pvmw); + /* The only possible pmd mapping has been handled on last iteration */ if (pvmw->pmd && !pvmw->pte) return not_found(pvmw); @@ -234,6 +250,25 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw) continue; } +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD + /* Check for PUD-mapped THP */ + if (pud_trans_huge(*pud)) { + pvmw->pud = pud; + pvmw->ptl = pud_lock(mm, pud); + if (likely(pud_trans_huge(*pud))) { + if (pvmw->flags & PVMW_MIGRATION) + return not_found(pvmw); + if (!check_pud(pud_pfn(*pud), pvmw)) + return not_found(pvmw); + return true; + } + /* PUD was split under us, retry at PMD level */ + spin_unlock(pvmw->ptl); + pvmw->ptl = NULL; + pvmw->pud = NULL; + } +#endif + pvmw->pmd = pmd_offset(pud, pvmw->address); /* * Make sure the pmd value isn't cached in a register by the diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c index d3aec7a9926ad..2047558ddcd79 100644 --- a/mm/pgtable-generic.c +++ b/mm/pgtable-generic.c @@ -195,6 +195,89 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp) } #endif +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD +/* + * Deposit page tables for PUD THP. + * Called with PUD lock held. Stores PMD tables in a singly-linked stack + * via pud_huge_pmd, using only pmd_page->lru.next as the link pointer. + * + * IMPORTANT: We use only lru.next (offset 8) for linking, NOT the full + * list_head. 
This is because lru.prev (offset 16) overlaps with + * ptdesc->pmd_huge_pte, which stores the PMD table's deposited PTE tables. + * Using list_del() would corrupt pmd_huge_pte with LIST_POISON2. + * + * PTE tables should be deposited into the PMD using pud_deposit_pte(). + */ +void pgtable_trans_huge_pud_deposit(struct mm_struct *mm, pud_t *pudp, + pmd_t *pmd_table) +{ + pgtable_t pmd_page = virt_to_page(pmd_table); + + assert_spin_locked(pud_lockptr(mm, pudp)); + + /* Push onto stack using only lru.next as the link */ + pmd_page->lru.next = (struct list_head *)pud_huge_pmd(pudp); + pud_huge_pmd(pudp) = pmd_page; +} + +/* + * Withdraw the deposited PMD table for PUD THP split or zap. + * Called with PUD lock held. + * Returns NULL if no more PMD tables are deposited. + */ +pmd_t *pgtable_trans_huge_pud_withdraw(struct mm_struct *mm, pud_t *pudp) +{ + pgtable_t pmd_page; + + assert_spin_locked(pud_lockptr(mm, pudp)); + + pmd_page = pud_huge_pmd(pudp); + if (!pmd_page) + return NULL; + + /* Pop from stack - lru.next points to next PMD page (or NULL) */ + pud_huge_pmd(pudp) = (pgtable_t)pmd_page->lru.next; + + return page_address(pmd_page); +} + +/* + * Deposit a PTE table into a standalone PMD table (not yet in page table hierarchy). + * Used for PUD THP pre-deposit. The PMD table's pmd_huge_pte stores a linked list. + * No lock assertion since the PMD isn't visible yet. + */ +void pud_deposit_pte(pmd_t *pmd_table, pgtable_t pgtable) +{ + struct ptdesc *ptdesc = virt_to_ptdesc(pmd_table); + + /* FIFO - add to front of list */ + if (!ptdesc->pmd_huge_pte) + INIT_LIST_HEAD(&pgtable->lru); + else + list_add(&pgtable->lru, &ptdesc->pmd_huge_pte->lru); + ptdesc->pmd_huge_pte = pgtable; +} + +/* + * Withdraw a PTE table from a standalone PMD table. + * Returns NULL if no more PTE tables are deposited. 
+ */ +pgtable_t pud_withdraw_pte(pmd_t *pmd_table) +{ + struct ptdesc *ptdesc = virt_to_ptdesc(pmd_table); + pgtable_t pgtable; + + pgtable = ptdesc->pmd_huge_pte; + if (!pgtable) + return NULL; + ptdesc->pmd_huge_pte = list_first_entry_or_null(&pgtable->lru, + struct page, lru); + if (ptdesc->pmd_huge_pte) + list_del(&pgtable->lru); + return pgtable; +} +#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */ + #ifndef __HAVE_ARCH_PMDP_INVALIDATE pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address, pmd_t *pmdp) diff --git a/mm/rmap.c b/mm/rmap.c index 7b9879ef442d9..69acabd763da4 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -811,6 +811,32 @@ pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address) return pmd; } +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD +/* + * Returns the actual pud_t* where we expect 'address' to be mapped from, or + * NULL if it doesn't exist. No guarantees / checks on what the pud_t* + * represents. + */ +pud_t *mm_find_pud(struct mm_struct *mm, unsigned long address) +{ + pgd_t *pgd; + p4d_t *p4d; + pud_t *pud = NULL; + + pgd = pgd_offset(mm, address); + if (!pgd_present(*pgd)) + goto out; + + p4d = p4d_offset(pgd, address); + if (!p4d_present(*p4d)) + goto out; + + pud = pud_offset(p4d, address); +out: + return pud; +} +#endif + struct folio_referenced_arg { int mapcount; int referenced; @@ -1415,11 +1441,7 @@ static __always_inline void __folio_add_anon_rmap(struct folio *folio, SetPageAnonExclusive(page); break; case PGTABLE_LEVEL_PUD: - /* - * Keep the compiler happy, we don't support anonymous - * PUD mappings. 
- */ - WARN_ON_ONCE(1); + SetPageAnonExclusive(page); break; default: BUILD_BUG(); @@ -1503,6 +1525,31 @@ void folio_add_anon_rmap_pmd(struct folio *folio, struct page *page, #endif } +/** + * folio_add_anon_rmap_pud - add a PUD mapping to a page range of an anon folio + * @folio: The folio to add the mapping to + * @page: The first page to add + * @vma: The vm area in which the mapping is added + * @address: The user virtual address of the first page to map + * @flags: The rmap flags + * + * The page range of folio is defined by [first_page, first_page + HPAGE_PUD_NR) + * + * The caller needs to hold the page table lock, and the page must be locked in + * the anon_vma case: to serialize mapping,index checking after setting. + */ +void folio_add_anon_rmap_pud(struct folio *folio, struct page *page, + struct vm_area_struct *vma, unsigned long address, rmap_t flags) +{ +#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && \ + defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) + __folio_add_anon_rmap(folio, page, HPAGE_PUD_NR, vma, address, flags, + PGTABLE_LEVEL_PUD); +#else + WARN_ON_ONCE(true); +#endif +} + /** * folio_add_new_anon_rmap - Add mapping to a new anonymous folio. * @folio: The folio to add the mapping to. @@ -1934,6 +1981,20 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma, } if (!pvmw.pte) { + /* + * Check for PUD-mapped THP first. + * If we have a PUD mapping and TTU_SPLIT_HUGE_PUD is set, + * split the PUD to PMD level and restart the walk. 
+ */ + if (pvmw.pud && pud_trans_huge(*pvmw.pud)) { + if (flags & TTU_SPLIT_HUGE_PUD) { + split_huge_pud_locked(vma, pvmw.pud, pvmw.address); + flags &= ~TTU_SPLIT_HUGE_PUD; + page_vma_mapped_walk_restart(&pvmw); + continue; + } + } + if (folio_test_anon(folio) && !folio_test_swapbacked(folio)) { if (unmap_huge_pmd_locked(vma, pvmw.address, pvmw.pmd, folio)) goto walk_done; @@ -2325,6 +2386,27 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma, mmu_notifier_invalidate_range_start(&range); while (page_vma_mapped_walk(&pvmw)) { + /* Handle PUD-mapped THP first */ + if (!pvmw.pte && !pvmw.pmd) { +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD + /* + * PUD-mapped THP: skip migration to preserve the huge + * page. Splitting would defeat the purpose of PUD THPs. + * Return false to indicate migration failure, which + * will cause alloc_contig_range() to try a different + * memory region. + */ + if (pvmw.pud && pud_trans_huge(*pvmw.pud)) { + page_vma_mapped_walk_done(&pvmw); + ret = false; + break; + } +#endif + /* Unexpected state: !pte && !pmd but not a PUD THP */ + page_vma_mapped_walk_done(&pvmw); + break; + } + /* PMD-mapped THP migration entry */ if (!pvmw.pte) { __maybe_unused unsigned long pfn; @@ -2607,10 +2689,10 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags) /* * Migration always ignores mlock and only supports TTU_RMAP_LOCKED and - * TTU_SPLIT_HUGE_PMD, TTU_SYNC, and TTU_BATCH_FLUSH flags. + * TTU_SPLIT_HUGE_PMD, TTU_SPLIT_HUGE_PUD, TTU_SYNC, and TTU_BATCH_FLUSH flags. */ if (WARN_ON_ONCE(flags & ~(TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD | - TTU_SYNC | TTU_BATCH_FLUSH))) + TTU_SPLIT_HUGE_PUD | TTU_SYNC | TTU_BATCH_FLUSH))) return; if (folio_is_zone_device(folio) && -- 2.47.3
{ "author": "Usama Arif <usamaarif642@gmail.com>", "date": "Sun, 1 Feb 2026 16:50:18 -0800", "thread_id": "3561FD10-664D-42AA-8351-DE7D8D49D42E@nvidia.com.mbox.gz" }
lkml
[RFC 00/12] mm: PUD (1GB) THP implementation
This is an RFC series to implement 1GB PUD-level THPs, allowing applications to benefit from reduced TLB pressure without requiring hugetlbfs. The patches are based on top of f9b74c13b773b7c7e4920d7bc214ea3d5f37b422 from mm-stable (6.19-rc6). Motivation: Why 1GB THP over hugetlbfs? ======================================= While hugetlbfs provides 1GB huge pages today, it has significant limitations that make it unsuitable for many workloads: 1. Static Reservation: hugetlbfs requires pre-allocating huge pages at boot or runtime, taking memory away. This requires capacity planning, administrative overhead, and makes workload orchastration much much more complex, especially colocating with workloads that don't use hugetlbfs. 4. No Fallback: If a 1GB huge page cannot be allocated, hugetlbfs fails rather than falling back to smaller pages. This makes it fragile under memory pressure. 4. No Splitting: hugetlbfs pages cannot be split when only partial access is needed, leading to memory waste and preventing partial reclaim. 5. Memory Accounting: hugetlbfs memory is accounted separately and cannot be easily shared with regular memory pools. PUD THP solves these limitations by integrating 1GB pages into the existing THP infrastructure. Performance Results =================== Benchmark results of these patches on Intel Xeon Platinum 8321HC: Test: True Random Memory Access [1] test of 4GB memory region with pointer chasing workload (4M random pointer dereferences through memory): | Metric | PUD THP (1GB) | PMD THP (2MB) | Change | |-------------------|---------------|---------------|--------------| | Memory access | 88 ms | 134 ms | 34% faster | | Page fault time | 898 ms | 331 ms | 2.7x slower | Page faulting 1G pages is 2.7x slower (Allocating 1G pages is hard :)). For long-running workloads this will be a one-off cost, and the 34% improvement in access latency provides significant benefit. ARM with 64K PAGE_SZIE supports 512M PMD THPs. 
In meta, we have a CPU bound workload running on a large number of ARM servers (256G). I enabled the 512M THP settings to always for a 100 servers in production (didn't really have high expectations :)). The average memory used for the workload increased from 217G to 233G. The amount of memory backed by 512M pages was 68G! The dTLB misses went down by 26% and the PID multiplier increased input by 5.9% (This is a very significant improvment in workload performance). A significant number of these THPs were faulted in at application start when were present across different VMAs. Ofcourse getting these 512M pages is easier on ARM due to bigger PAGE_SIZE and pageblock order. I am hoping that these patches for 1G THP can be used to provide similar benefits for x86. I expect workloads to fault them in at start time when there is plenty of free memory available. Previous attempt by Zi Yan ========================== Zi Yan attempted 1G THPs [2] in kernel version 5.11. There have been significant changes in kernel since then, including folio conversion, mTHP framework, ptdesc, rmap changes, etc. I found it easier to use the current PMD code as reference for making 1G PUD THP work. I am hoping Zi can provide guidance on these patches! Major Design Decisions ====================== 1. No shared 1G zero page: The memory cost would be quite significant! 2. Page Table Pre-deposit Strategy PMD THP deposits a single PTE page table. PUD THP deposits 512 PTE page tables (one for each potential PMD entry after split). We allocate a PMD page table and use its pmd_huge_pte list to store the deposited PTE tables. This ensures split operations don't fail due to page table allocation failures (at the cost of 2M per PUD THP) 3. Split to Base Pages When a PUD THP must be split (COW, partial unmap, mprotect), we split directly to base pages (262,144 PTEs). The ideal thing would be to split to 2M pages and then to 4K pages if needed. 
However, this would require significant rmap and mapcount tracking changes. 4. COW and fork handling via split Copy-on-write and fork for PUD THP triggers a split to base pages, then uses existing PTE-level COW infrastructure. Getting another 1G region is hard and could fail. If only a 4K is written, copying 1G is a waste. Probably this should only be done on CoW and not fork? 5. Migration via split Split PUD to PTEs and migrate individual pages. It is going to be difficult to find a 1G continguous memory to migrate to. Maybe its better to not allow migration of PUDs at all? I am more tempted to not allow migration, but have kept splitting in this RFC. Reviewers guide =============== Most of the code is written by adapting from PMD code. For e.g. the PUD page fault path is very similar to PMD. The difference is no shared zero page and the page table deposit strategy. I think the easiest way to review this series is to compare with PMD code. Test results ============ 1..7 # Starting 7 tests from 1 test cases. # RUN pud_thp.basic_allocation ... # pud_thp_test.c:169:basic_allocation:PUD THP allocated (anon_fault_alloc: 0 -> 1) # OK pud_thp.basic_allocation ok 1 pud_thp.basic_allocation # RUN pud_thp.read_write_access ... # OK pud_thp.read_write_access ok 2 pud_thp.read_write_access # RUN pud_thp.fork_cow ... # pud_thp_test.c:236:fork_cow:Fork COW completed (thp_split_pud: 0 -> 1) # OK pud_thp.fork_cow ok 3 pud_thp.fork_cow # RUN pud_thp.partial_munmap ... # pud_thp_test.c:267:partial_munmap:Partial munmap completed (thp_split_pud: 1 -> 2) # OK pud_thp.partial_munmap ok 4 pud_thp.partial_munmap # RUN pud_thp.mprotect_split ... # pud_thp_test.c:293:mprotect_split:mprotect split completed (thp_split_pud: 2 -> 3) # OK pud_thp.mprotect_split ok 5 pud_thp.mprotect_split # RUN pud_thp.reclaim_pageout ... # pud_thp_test.c:322:reclaim_pageout:Reclaim completed (thp_split_pud: 3 -> 4) # OK pud_thp.reclaim_pageout ok 6 pud_thp.reclaim_pageout # RUN pud_thp.migration_mbind ... 
# pud_thp_test.c:356:migration_mbind:Migration completed (thp_split_pud: 4 -> 5)
# OK pud_thp.migration_mbind
ok 7 pud_thp.migration_mbind
# PASSED: 7 / 7 tests passed.
# Totals: pass:7 fail:0 xfail:0 xpass:0 skip:0 error:0

[1] https://gist.github.com/uarif1/bf279b2a01a536cda945ff9f40196a26
[2] https://lore.kernel.org/linux-mm/20210224223536.803765-1-zi.yan@sent.com/

Signed-off-by: Usama Arif <usamaarif642@gmail.com>

Usama Arif (12):
  mm: add PUD THP ptdesc and rmap support
  mm/thp: add mTHP stats infrastructure for PUD THP
  mm: thp: add PUD THP allocation and fault handling
  mm: thp: implement PUD THP split to PTE level
  mm: thp: add reclaim and migration support for PUD THP
  selftests/mm: add PUD THP basic allocation test
  selftests/mm: add PUD THP read/write access test
  selftests/mm: add PUD THP fork COW test
  selftests/mm: add PUD THP partial munmap test
  selftests/mm: add PUD THP mprotect split test
  selftests/mm: add PUD THP reclaim test
  selftests/mm: add PUD THP migration test

 include/linux/huge_mm.h                   |  60 ++-
 include/linux/mm.h                        |  19 +
 include/linux/mm_types.h                  |   5 +-
 include/linux/pgtable.h                   |   8 +
 include/linux/rmap.h                      |   7 +-
 mm/huge_memory.c                          | 535 +++++++++++++++++++++-
 mm/internal.h                             |   3 +
 mm/memory.c                               |   8 +-
 mm/migrate.c                              |  17 +
 mm/page_vma_mapped.c                      |  35 ++
 mm/pgtable-generic.c                      |  83 ++++
 mm/rmap.c                                 |  96 +++-
 mm/vmscan.c                               |   2 +
 tools/testing/selftests/mm/Makefile       |   1 +
 tools/testing/selftests/mm/pud_thp_test.c | 360 +++++++++++++++
 15 files changed, 1197 insertions(+), 42 deletions(-)
 create mode 100644 tools/testing/selftests/mm/pud_thp_test.c
-- 
2.47.3
Implement the split operation that converts a PUD THP mapping into individual PTE mappings.

A PUD THP maps 1GB of memory with a single page table entry. When the mapping needs to be broken - for COW, partial unmap, permission changes, or reclaim - it must be split into smaller mappings. Unlike PMD THPs, which split into 512 PTEs in a single level, PUD THPs require a two-level split: the single PUD entry becomes 512 PMD entries, each pointing to a PTE table containing 512 PTEs, for a total of 262,144 page table entries.

The split uses page tables that were pre-deposited when the PUD THP was first allocated. This guarantees the split cannot fail due to memory allocation failure, which is critical since splits often happen under memory pressure during reclaim. The deposited PMD table is installed in the PUD entry, and each PMD slot receives one of the 512 deposited PTE tables.

Each PTE is populated to map one 4KB page of the original 1GB folio. Entry bits from the original PUD (dirty, accessed, writable, soft-dirty) are propagated to each PTE so that no information is lost. The rmap is updated to remove the single PUD-level mapping entry and add 262,144 PTE-level mapping entries.

The split goes directly to PTE level rather than stopping at PMD level. This is because the kernel's rmap infrastructure assumes that PMD-level mappings are for PMD-sized folios. If we mapped a PUD-sized folio at PMD level (512 PMD entries for one folio), the rmap accounting would break, since it would see 512 "large" mappings of a single folio, which the mapcount tracking cannot represent. Going to PTE level avoids this problem entirely.
Signed-off-by: Usama Arif <usamaarif642@gmail.com> --- mm/huge_memory.c | 181 ++++++++++++++++++++++++++++++++++++++++++++--- 1 file changed, 173 insertions(+), 8 deletions(-) diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 7613caf1e7c30..39b8212b5abd4 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -3129,12 +3129,82 @@ int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma, return 1; } +/* + * Structure to hold page tables for PUD split. + * Tables are withdrawn from the pre-deposit made at fault time. + */ +struct pud_split_ptables { + pmd_t *pmd_table; + pgtable_t *pte_tables; /* Array of 512 PTE tables */ + int nr_pte_tables; /* Number of PTE tables in array */ +}; + +/* + * Withdraw pre-deposited page tables from PUD THP. + * Tables are always deposited at fault time in do_huge_pud_anonymous_page(). + * Returns true if successful, false if no tables deposited. + */ +static bool withdraw_pud_split_ptables(struct mm_struct *mm, pud_t *pud, + struct pud_split_ptables *tables) +{ + pmd_t *pmd_table; + pgtable_t pte_table; + int i; + + tables->pmd_table = NULL; + tables->pte_tables = NULL; + tables->nr_pte_tables = 0; + + /* Try to withdraw the deposited PMD table */ + pmd_table = pgtable_trans_huge_pud_withdraw(mm, pud); + if (!pmd_table) + return false; + + tables->pmd_table = pmd_table; + + /* Allocate array to hold PTE table pointers */ + tables->pte_tables = kmalloc_array(NR_PTE_TABLES_FOR_PUD, + sizeof(pgtable_t), GFP_ATOMIC); + if (!tables->pte_tables) + goto fail; + + /* Withdraw PTE tables from the PMD table */ + for (i = 0; i < NR_PTE_TABLES_FOR_PUD; i++) { + pte_table = pud_withdraw_pte(pmd_table); + if (!pte_table) + goto fail; + tables->pte_tables[i] = pte_table; + tables->nr_pte_tables++; + } + + return true; + +fail: + /* Put back any tables we withdrew */ + for (i = 0; i < tables->nr_pte_tables; i++) + pud_deposit_pte(pmd_table, tables->pte_tables[i]); + kfree(tables->pte_tables); + pgtable_trans_huge_pud_deposit(mm, pud, 
pmd_table); + tables->pmd_table = NULL; + tables->pte_tables = NULL; + tables->nr_pte_tables = 0; + return false; +} + static void __split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud, unsigned long haddr) { + bool dirty = false, young = false, write = false; + struct pud_split_ptables tables = { 0 }; + struct mm_struct *mm = vma->vm_mm; + rmap_t rmap_flags = RMAP_NONE; + bool anon_exclusive = false; + bool soft_dirty = false; struct folio *folio; + unsigned long addr; struct page *page; pud_t old_pud; + int i, j; VM_BUG_ON(haddr & ~HPAGE_PUD_MASK); VM_BUG_ON_VMA(vma->vm_start > haddr, vma); @@ -3145,20 +3215,115 @@ static void __split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud, old_pud = pudp_huge_clear_flush(vma, haddr, pud); - if (!vma_is_dax(vma)) + if (!vma_is_anonymous(vma)) { + if (!vma_is_dax(vma)) + return; + + page = pud_page(old_pud); + folio = page_folio(page); + + if (!folio_test_dirty(folio) && pud_dirty(old_pud)) + folio_mark_dirty(folio); + if (!folio_test_referenced(folio) && pud_young(old_pud)) + folio_set_referenced(folio); + folio_remove_rmap_pud(folio, page, vma); + folio_put(folio); + add_mm_counter(mm, mm_counter_file(folio), -HPAGE_PUD_NR); return; + } + + /* + * Anonymous PUD split: split directly to PTE level. + * + * We cannot create PMD huge entries pointing to portions of a larger + * folio because the kernel's rmap infrastructure assumes PMD mappings + * are for PMD-sized folios only (see __folio_rmap_sanity_checks). + * Instead, we create a PMD table with 512 entries, each pointing to + * a PTE table with 512 PTEs. + * + * Tables are always deposited at fault time in do_huge_pud_anonymous_page(). 
+ */ + if (!withdraw_pud_split_ptables(mm, pud, &tables)) { + WARN_ON_ONCE(1); + return; + } page = pud_page(old_pud); folio = page_folio(page); - if (!folio_test_dirty(folio) && pud_dirty(old_pud)) - folio_mark_dirty(folio); - if (!folio_test_referenced(folio) && pud_young(old_pud)) - folio_set_referenced(folio); + dirty = pud_dirty(old_pud); + write = pud_write(old_pud); + young = pud_young(old_pud); + soft_dirty = pud_soft_dirty(old_pud); + anon_exclusive = PageAnonExclusive(page); + + if (dirty) + folio_set_dirty(folio); + + /* + * Add references for each page that will have its own PTE. + * Original folio has 1 reference. After split, each of 262144 PTEs + * will eventually be unmapped, each calling folio_put(). + */ + folio_ref_add(folio, HPAGE_PUD_NR - 1); + + /* + * Add PTE-level rmap for all pages at once. + */ + if (anon_exclusive) + rmap_flags |= RMAP_EXCLUSIVE; + folio_add_anon_rmap_ptes(folio, page, HPAGE_PUD_NR, + vma, haddr, rmap_flags); + + /* Remove PUD-level rmap */ folio_remove_rmap_pud(folio, page, vma); - folio_put(folio); - add_mm_counter(vma->vm_mm, mm_counter_file(folio), - -HPAGE_PUD_NR); + + /* + * Create 512 PMD entries, each pointing to a PTE table. + * Each PTE table has 512 PTEs pointing to individual pages. 
+ */ + addr = haddr; + for (i = 0; i < (HPAGE_PUD_NR / HPAGE_PMD_NR); i++) { + pmd_t *pmd_entry = tables.pmd_table + i; + pgtable_t pte_table = tables.pte_tables[i]; + pte_t *pte; + struct page *subpage_base = page + i * HPAGE_PMD_NR; + + /* Populate the PTE table */ + pte = page_address(pte_table); + for (j = 0; j < HPAGE_PMD_NR; j++) { + struct page *subpage = subpage_base + j; + pte_t entry; + + entry = mk_pte(subpage, vma->vm_page_prot); + if (write) + entry = pte_mkwrite(entry, vma); + if (dirty) + entry = pte_mkdirty(entry); + if (young) + entry = pte_mkyoung(entry); + if (soft_dirty) + entry = pte_mksoft_dirty(entry); + + set_pte_at(mm, addr + j * PAGE_SIZE, pte + j, entry); + } + + /* Set PMD to point to PTE table */ + pmd_populate(mm, pmd_entry, pte_table); + addr += HPAGE_PMD_SIZE; + } + + /* + * Memory barrier ensures all PMD entries are visible before + * installing the PMD table in the PUD. + */ + smp_wmb(); + + /* Install the PMD table in the PUD */ + pud_populate(mm, pud, tables.pmd_table); + + /* Free the temporary array holding PTE table pointers */ + kfree(tables.pte_tables); } void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud, -- 2.47.3
{ "author": "Usama Arif <usamaarif642@gmail.com>", "date": "Sun, 1 Feb 2026 16:50:21 -0800", "thread_id": "3561FD10-664D-42AA-8351-DE7D8D49D42E@nvidia.com.mbox.gz" }
lkml
[RFC 00/12] mm: PUD (1GB) THP implementation
This is an RFC series to implement 1GB PUD-level THPs, allowing applications to benefit from reduced TLB pressure without requiring hugetlbfs. The patches are based on top of f9b74c13b773b7c7e4920d7bc214ea3d5f37b422 from mm-stable (6.19-rc6).

Motivation: Why 1GB THP over hugetlbfs?
=======================================

While hugetlbfs provides 1GB huge pages today, it has significant limitations that make it unsuitable for many workloads:

1. Static Reservation: hugetlbfs requires pre-allocating huge pages at boot or runtime, taking memory away from the rest of the system. This requires capacity planning and administrative overhead, and makes workload orchestration much more complex, especially when colocating with workloads that don't use hugetlbfs.

2. No Fallback: If a 1GB huge page cannot be allocated, hugetlbfs fails rather than falling back to smaller pages. This makes it fragile under memory pressure.

3. No Splitting: hugetlbfs pages cannot be split when only partial access is needed, leading to memory waste and preventing partial reclaim.

4. Memory Accounting: hugetlbfs memory is accounted separately and cannot be easily shared with regular memory pools.

PUD THP solves these limitations by integrating 1GB pages into the existing THP infrastructure.

Performance Results
===================

Benchmark results of these patches on an Intel Xeon Platinum 8321HC:

Test: True Random Memory Access [1] test of a 4GB memory region with a pointer-chasing workload (4M random pointer dereferences through memory):

| Metric          | PUD THP (1GB) | PMD THP (2MB) | Change      |
|-----------------|---------------|---------------|-------------|
| Memory access   | 88 ms         | 134 ms        | 34% faster  |
| Page fault time | 898 ms        | 331 ms        | 2.7x slower |

Page faulting 1G pages is 2.7x slower (allocating 1G pages is hard :)). For long-running workloads this will be a one-off cost, and the 34% improvement in access latency provides significant benefit. ARM with 64K PAGE_SIZE supports 512M PMD THPs.
Add the page fault handling path for anonymous PUD THPs, following the same design as the existing PMD THP fault handlers.

When a process accesses memory in an anonymous VMA that is PUD-aligned and large enough, the fault handler checks if PUD THP is enabled and attempts to allocate a 1GB folio. The allocation uses folio_alloc_gigantic. If allocation succeeds, the folio is mapped at the faulting PUD entry.

Before installing the PUD mapping, page tables are pre-deposited for future use. A PUD THP will eventually need to be split - whether due to copy-on-write after fork, partial munmap, mprotect on a subregion, or memory reclaim. At split time, we need 512 PTE tables (one for each PMD entry) plus the PMD table itself. Allocating 513 page tables during split could fail, leaving the system unable to proceed. By depositing them at fault time when memory pressure is typically lower, we guarantee the split will always succeed.

The write-protect fault handler triggers when a process tries to write to a PUD THP that is mapped read-only (typically after fork). Rather than implementing PUD-level COW, which would require copying 1GB of data, the handler splits the PUD to PTE level and retries the fault. The retry then handles COW at PTE level, copying only the single 4KB page being written.
Signed-off-by: Usama Arif <usamaarif642@gmail.com> --- include/linux/huge_mm.h | 2 + mm/huge_memory.c | 260 ++++++++++++++++++++++++++++++++++++++-- mm/memory.c | 8 +- 3 files changed, 258 insertions(+), 12 deletions(-) diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h index 5509ba8555b6e..a292035c0270f 100644 --- a/include/linux/huge_mm.h +++ b/include/linux/huge_mm.h @@ -8,6 +8,7 @@ #include <linux/kobject.h> vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf); +vm_fault_t do_huge_pud_anonymous_page(struct vm_fault *vmf); int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm, pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr, struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma); @@ -25,6 +26,7 @@ static inline void huge_pud_set_accessed(struct vm_fault *vmf, pud_t orig_pud) #endif vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf); +vm_fault_t do_huge_pud_wp_page(struct vm_fault *vmf); bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr, unsigned long next); int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, pmd_t *pmd, diff --git a/mm/huge_memory.c b/mm/huge_memory.c index d033624d7e1f2..7613caf1e7c30 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1294,6 +1294,70 @@ static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct *vma, return folio; } +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD +static struct folio *vma_alloc_anon_folio_pud(struct vm_area_struct *vma, + unsigned long addr) +{ + gfp_t gfp = vma_thp_gfp_mask(vma); + const int order = HPAGE_PUD_ORDER; + struct folio *folio = NULL; + /* + * Contiguous allocation via alloc_contig_range() migrates existing + * pages out of the target range. __GFP_NOMEMALLOC would allow using + * memory reserves for migration destination pages, but THP is an + * optional performance optimization and should not deplete reserves + * that may be needed for critical allocations. 
Remove it. + * alloc_contig_range_noprof (__alloc_contig_verify_gfp_mask) will + * cause this to fail without it. + */ + gfp_t contig_gfp = gfp & ~__GFP_NOMEMALLOC; + + folio = folio_alloc_gigantic(order, contig_gfp, numa_node_id(), NULL); + + if (unlikely(!folio)) { + count_vm_event(THP_FAULT_FALLBACK); + count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK); + return NULL; + } + + VM_BUG_ON_FOLIO(!folio_test_large(folio), folio); + if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) { + folio_put(folio); + count_vm_event(THP_FAULT_FALLBACK); + count_vm_event(THP_FAULT_FALLBACK_CHARGE); + count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK); + count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE); + return NULL; + } + folio_throttle_swaprate(folio, gfp); + + /* + * When a folio is not zeroed during allocation (__GFP_ZERO not used) + * or user folios require special handling, folio_zero_user() is used to + * make sure that the page corresponding to the faulting address will be + * hot in the cache after zeroing. + */ + if (user_alloc_needs_zeroing()) + folio_zero_user(folio, addr); + /* + * The memory barrier inside __folio_mark_uptodate makes sure that + * folio_zero_user writes become visible before the set_pud_at() + * write. + */ + __folio_mark_uptodate(folio); + + /* + * Set the large_rmappable flag so that the folio can be properly + * removed from the deferred_split list when freed. + * folio_alloc_gigantic() doesn't set this flag (unlike __folio_alloc), + * so we must set it explicitly. 
+ */ + folio_set_large_rmappable(folio); + + return folio; +} +#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */ + void map_anon_folio_pmd_nopf(struct folio *folio, pmd_t *pmd, struct vm_area_struct *vma, unsigned long haddr) { @@ -1318,6 +1382,40 @@ static void map_anon_folio_pmd_pf(struct folio *folio, pmd_t *pmd, count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC); } +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD +static pud_t maybe_pud_mkwrite(pud_t pud, struct vm_area_struct *vma) +{ + if (likely(vma->vm_flags & VM_WRITE)) + pud = pud_mkwrite(pud); + return pud; +} + +static void map_anon_folio_pud_nopf(struct folio *folio, pud_t *pud, + struct vm_area_struct *vma, unsigned long haddr) +{ + pud_t entry; + + entry = folio_mk_pud(folio, vma->vm_page_prot); + entry = maybe_pud_mkwrite(pud_mkdirty(entry), vma); + folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE); + folio_add_lru_vma(folio, vma); + set_pud_at(vma->vm_mm, haddr, pud, entry); + update_mmu_cache_pud(vma, haddr, pud); + deferred_split_folio(folio, false); +} + + +static void map_anon_folio_pud_pf(struct folio *folio, pud_t *pud, + struct vm_area_struct *vma, unsigned long haddr) +{ + map_anon_folio_pud_nopf(folio, pud, vma, haddr); + add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PUD_NR); + count_vm_event(THP_FAULT_ALLOC); + count_mthp_stat(HPAGE_PUD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC); + count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC); +} +#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */ + static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf) { unsigned long haddr = vmf->address & HPAGE_PMD_MASK; @@ -1513,6 +1611,161 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf) return __do_huge_pmd_anonymous_page(vmf); } +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD +/* Number of PTE tables needed for PUD THP split: 512 */ +#define NR_PTE_TABLES_FOR_PUD (HPAGE_PUD_NR / HPAGE_PMD_NR) + +/* + * Allocate page tables for PUD THP pre-deposit. 
+ */ +static bool alloc_pud_predeposit_ptables(struct mm_struct *mm, + unsigned long haddr, + pmd_t **pmd_table_out, + int *nr_pte_deposited) +{ + pmd_t *pmd_table; + pgtable_t pte_table; + struct ptdesc *pmd_ptdesc; + int i; + + *pmd_table_out = NULL; + *nr_pte_deposited = 0; + + pmd_table = pmd_alloc_one(mm, haddr); + if (!pmd_table) + return false; + + /* Initialize the pmd_huge_pte field for PTE table storage */ + pmd_ptdesc = virt_to_ptdesc(pmd_table); + pmd_ptdesc->pmd_huge_pte = NULL; + + /* Allocate and deposit 512 PTE tables into the PMD table */ + for (i = 0; i < NR_PTE_TABLES_FOR_PUD; i++) { + pte_table = pte_alloc_one(mm); + if (!pte_table) + goto fail; + pud_deposit_pte(pmd_table, pte_table); + (*nr_pte_deposited)++; + } + + *pmd_table_out = pmd_table; + return true; + +fail: + /* Free any PTE tables we deposited */ + while ((pte_table = pud_withdraw_pte(pmd_table)) != NULL) + pte_free(mm, pte_table); + pmd_free(mm, pmd_table); + return false; +} + +/* + * Free pre-allocated page tables if the PUD THP fault fails. + */ +static void free_pud_predeposit_ptables(struct mm_struct *mm, + pmd_t *pmd_table) +{ + pgtable_t pte_table; + + if (!pmd_table) + return; + + while ((pte_table = pud_withdraw_pte(pmd_table)) != NULL) + pte_free(mm, pte_table); + pmd_free(mm, pmd_table); +} + +vm_fault_t do_huge_pud_anonymous_page(struct vm_fault *vmf) +{ + struct vm_area_struct *vma = vmf->vma; + unsigned long haddr = vmf->address & HPAGE_PUD_MASK; + struct folio *folio; + pmd_t *pmd_table = NULL; + int nr_pte_deposited = 0; + vm_fault_t ret = 0; + int i; + + /* Check VMA bounds and alignment */ + if (!thp_vma_suitable_order(vma, haddr, PUD_ORDER)) + return VM_FAULT_FALLBACK; + + ret = vmf_anon_prepare(vmf); + if (ret) + return ret; + + folio = vma_alloc_anon_folio_pud(vma, vmf->address); + if (unlikely(!folio)) + return VM_FAULT_FALLBACK; + + /* + * Pre-allocate page tables for future PUD split. + * We need 1 PMD table and 512 PTE tables. 
+ */ + if (!alloc_pud_predeposit_ptables(vma->vm_mm, haddr, + &pmd_table, &nr_pte_deposited)) { + folio_put(folio); + return VM_FAULT_FALLBACK; + } + + vmf->ptl = pud_lock(vma->vm_mm, vmf->pud); + if (unlikely(!pud_none(*vmf->pud))) + goto release; + + ret = check_stable_address_space(vma->vm_mm); + if (ret) + goto release; + + /* Deliver the page fault to userland */ + if (userfaultfd_missing(vma)) { + spin_unlock(vmf->ptl); + folio_put(folio); + free_pud_predeposit_ptables(vma->vm_mm, pmd_table); + ret = handle_userfault(vmf, VM_UFFD_MISSING); + VM_BUG_ON(ret & VM_FAULT_FALLBACK); + return ret; + } + + /* Deposit page tables for future PUD split */ + pgtable_trans_huge_pud_deposit(vma->vm_mm, vmf->pud, pmd_table); + map_anon_folio_pud_pf(folio, vmf->pud, vma, haddr); + mm_inc_nr_pmds(vma->vm_mm); + for (i = 0; i < nr_pte_deposited; i++) + mm_inc_nr_ptes(vma->vm_mm); + spin_unlock(vmf->ptl); + + return 0; +release: + spin_unlock(vmf->ptl); + folio_put(folio); + free_pud_predeposit_ptables(vma->vm_mm, pmd_table); + return ret; +} +#else +vm_fault_t do_huge_pud_anonymous_page(struct vm_fault *vmf) +{ + return VM_FAULT_FALLBACK; +} +#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */ + +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD +vm_fault_t do_huge_pud_wp_page(struct vm_fault *vmf) +{ + struct vm_area_struct *vma = vmf->vma; + + /* + * For now, split PUD to PTE level on write fault. + * This is the simplest approach for COW handling. 
+ */ + __split_huge_pud(vma, vmf->pud, vmf->address); + return VM_FAULT_FALLBACK; +} +#else +vm_fault_t do_huge_pud_wp_page(struct vm_fault *vmf) +{ + return VM_FAULT_FALLBACK; +} +#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */ + struct folio_or_pfn { union { struct folio *folio; @@ -1646,13 +1899,6 @@ vm_fault_t vmf_insert_folio_pmd(struct vm_fault *vmf, struct folio *folio, EXPORT_SYMBOL_GPL(vmf_insert_folio_pmd); #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD -static pud_t maybe_pud_mkwrite(pud_t pud, struct vm_area_struct *vma) -{ - if (likely(vma->vm_flags & VM_WRITE)) - pud = pud_mkwrite(pud); - return pud; -} - static vm_fault_t insert_pud(struct vm_area_struct *vma, unsigned long addr, pud_t *pud, struct folio_or_pfn fop, pgprot_t prot, bool write) { diff --git a/mm/memory.c b/mm/memory.c index 87cf4e1a6f866..e5f86c1d2aded 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -6142,9 +6142,9 @@ static vm_fault_t create_huge_pud(struct vm_fault *vmf) #if defined(CONFIG_TRANSPARENT_HUGEPAGE) && \ defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) struct vm_area_struct *vma = vmf->vma; - /* No support for anonymous transparent PUD pages yet */ + if (vma_is_anonymous(vma)) - return VM_FAULT_FALLBACK; + return do_huge_pud_anonymous_page(vmf); if (vma->vm_ops->huge_fault) return vma->vm_ops->huge_fault(vmf, PUD_ORDER); #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ @@ -6158,9 +6158,8 @@ static vm_fault_t wp_huge_pud(struct vm_fault *vmf, pud_t orig_pud) struct vm_area_struct *vma = vmf->vma; vm_fault_t ret; - /* No support for anonymous transparent PUD pages yet */ if (vma_is_anonymous(vma)) - goto split; + return do_huge_pud_wp_page(vmf); if (vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) { if (vma->vm_ops->huge_fault) { ret = vma->vm_ops->huge_fault(vmf, PUD_ORDER); @@ -6168,7 +6167,6 @@ static vm_fault_t wp_huge_pud(struct vm_fault *vmf, pud_t orig_pud) return ret; } } -split: /* COW or write-notify not handled on PUD level: split pud.*/ __split_huge_pud(vma, 
vmf->pud, vmf->address); #endif /* CONFIG_TRANSPARENT_HUGEPAGE && CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */ -- 2.47.3
{ "author": "Usama Arif <usamaarif642@gmail.com>", "date": "Sun, 1 Feb 2026 16:50:20 -0800", "thread_id": "3561FD10-664D-42AA-8351-DE7D8D49D42E@nvidia.com.mbox.gz" }
lkml
[RFC 00/12] mm: PUD (1GB) THP implementation
This is an RFC series to implement 1GB PUD-level THPs, allowing applications to benefit from reduced TLB pressure without requiring hugetlbfs. The patches are based on top of f9b74c13b773b7c7e4920d7bc214ea3d5f37b422 from mm-stable (6.19-rc6). Motivation: Why 1GB THP over hugetlbfs? ======================================= While hugetlbfs provides 1GB huge pages today, it has significant limitations that make it unsuitable for many workloads: 1. Static Reservation: hugetlbfs requires pre-allocating huge pages at boot or runtime, taking memory away. This requires capacity planning, administrative overhead, and makes workload orchastration much much more complex, especially colocating with workloads that don't use hugetlbfs. 4. No Fallback: If a 1GB huge page cannot be allocated, hugetlbfs fails rather than falling back to smaller pages. This makes it fragile under memory pressure. 4. No Splitting: hugetlbfs pages cannot be split when only partial access is needed, leading to memory waste and preventing partial reclaim. 5. Memory Accounting: hugetlbfs memory is accounted separately and cannot be easily shared with regular memory pools. PUD THP solves these limitations by integrating 1GB pages into the existing THP infrastructure. Performance Results =================== Benchmark results of these patches on Intel Xeon Platinum 8321HC: Test: True Random Memory Access [1] test of 4GB memory region with pointer chasing workload (4M random pointer dereferences through memory): | Metric | PUD THP (1GB) | PMD THP (2MB) | Change | |-------------------|---------------|---------------|--------------| | Memory access | 88 ms | 134 ms | 34% faster | | Page fault time | 898 ms | 331 ms | 2.7x slower | Page faulting 1G pages is 2.7x slower (Allocating 1G pages is hard :)). For long-running workloads this will be a one-off cost, and the 34% improvement in access latency provides significant benefit. ARM with 64K PAGE_SZIE supports 512M PMD THPs. 
In meta, we have a CPU bound workload running on a large number of ARM servers (256G). I enabled the 512M THP settings to always for a 100 servers in production (didn't really have high expectations :)). The average memory used for the workload increased from 217G to 233G. The amount of memory backed by 512M pages was 68G! The dTLB misses went down by 26% and the PID multiplier increased input by 5.9% (This is a very significant improvment in workload performance). A significant number of these THPs were faulted in at application start when were present across different VMAs. Ofcourse getting these 512M pages is easier on ARM due to bigger PAGE_SIZE and pageblock order. I am hoping that these patches for 1G THP can be used to provide similar benefits for x86. I expect workloads to fault them in at start time when there is plenty of free memory available. Previous attempt by Zi Yan ========================== Zi Yan attempted 1G THPs [2] in kernel version 5.11. There have been significant changes in kernel since then, including folio conversion, mTHP framework, ptdesc, rmap changes, etc. I found it easier to use the current PMD code as reference for making 1G PUD THP work. I am hoping Zi can provide guidance on these patches! Major Design Decisions ====================== 1. No shared 1G zero page: The memory cost would be quite significant! 2. Page Table Pre-deposit Strategy PMD THP deposits a single PTE page table. PUD THP deposits 512 PTE page tables (one for each potential PMD entry after split). We allocate a PMD page table and use its pmd_huge_pte list to store the deposited PTE tables. This ensures split operations don't fail due to page table allocation failures (at the cost of 2M per PUD THP) 3. Split to Base Pages When a PUD THP must be split (COW, partial unmap, mprotect), we split directly to base pages (262,144 PTEs). The ideal thing would be to split to 2M pages and then to 4K pages if needed. 
However, this would require significant rmap and mapcount tracking
changes.

4. COW and fork handling via split

Copy-on-write and fork for PUD THP triggers a split to base pages, then
uses the existing PTE-level COW infrastructure. Getting another 1G
region is hard and could fail. If only a 4K page is written, copying 1G
is a waste. Probably this should only be done on CoW and not fork?

5. Migration via split

Split the PUD to PTEs and migrate individual pages. It is going to be
difficult to find 1G of contiguous memory to migrate to. Maybe it's
better to not allow migration of PUDs at all? I am more tempted to not
allow migration, but have kept splitting in this RFC.

Reviewers guide
===============

Most of the code is written by adapting from the PMD code. For example,
the PUD page fault path is very similar to PMD. The difference is no
shared zero page and the page table deposit strategy. I think the
easiest way to review this series is to compare with the PMD code.

Test results
============

1..7
# Starting 7 tests from 1 test cases.
# RUN pud_thp.basic_allocation ...
# pud_thp_test.c:169:basic_allocation:PUD THP allocated (anon_fault_alloc: 0 -> 1)
# OK pud_thp.basic_allocation
ok 1 pud_thp.basic_allocation
# RUN pud_thp.read_write_access ...
# OK pud_thp.read_write_access
ok 2 pud_thp.read_write_access
# RUN pud_thp.fork_cow ...
# pud_thp_test.c:236:fork_cow:Fork COW completed (thp_split_pud: 0 -> 1)
# OK pud_thp.fork_cow
ok 3 pud_thp.fork_cow
# RUN pud_thp.partial_munmap ...
# pud_thp_test.c:267:partial_munmap:Partial munmap completed (thp_split_pud: 1 -> 2)
# OK pud_thp.partial_munmap
ok 4 pud_thp.partial_munmap
# RUN pud_thp.mprotect_split ...
# pud_thp_test.c:293:mprotect_split:mprotect split completed (thp_split_pud: 2 -> 3)
# OK pud_thp.mprotect_split
ok 5 pud_thp.mprotect_split
# RUN pud_thp.reclaim_pageout ...
# pud_thp_test.c:322:reclaim_pageout:Reclaim completed (thp_split_pud: 3 -> 4)
# OK pud_thp.reclaim_pageout
ok 6 pud_thp.reclaim_pageout
# RUN pud_thp.migration_mbind ...
# pud_thp_test.c:356:migration_mbind:Migration completed (thp_split_pud: 4 -> 5)
# OK pud_thp.migration_mbind
ok 7 pud_thp.migration_mbind
# PASSED: 7 / 7 tests passed.
# Totals: pass:7 fail:0 xfail:0 xpass:0 skip:0 error:0

[1] https://gist.github.com/uarif1/bf279b2a01a536cda945ff9f40196a26
[2] https://lore.kernel.org/linux-mm/20210224223536.803765-1-zi.yan@sent.com/

Signed-off-by: Usama Arif <usamaarif642@gmail.com>

Usama Arif (12):
  mm: add PUD THP ptdesc and rmap support
  mm/thp: add mTHP stats infrastructure for PUD THP
  mm: thp: add PUD THP allocation and fault handling
  mm: thp: implement PUD THP split to PTE level
  mm: thp: add reclaim and migration support for PUD THP
  selftests/mm: add PUD THP basic allocation test
  selftests/mm: add PUD THP read/write access test
  selftests/mm: add PUD THP fork COW test
  selftests/mm: add PUD THP partial munmap test
  selftests/mm: add PUD THP mprotect split test
  selftests/mm: add PUD THP reclaim test
  selftests/mm: add PUD THP migration test

 include/linux/huge_mm.h                   |  60 ++-
 include/linux/mm.h                        |  19 +
 include/linux/mm_types.h                  |   5 +-
 include/linux/pgtable.h                   |   8 +
 include/linux/rmap.h                      |   7 +-
 mm/huge_memory.c                          | 535 +++++++++++++++++++++-
 mm/internal.h                             |   3 +
 mm/memory.c                               |   8 +-
 mm/migrate.c                              |  17 +
 mm/page_vma_mapped.c                      |  35 ++
 mm/pgtable-generic.c                      |  83 ++++
 mm/rmap.c                                 |  96 +++-
 mm/vmscan.c                               |   2 +
 tools/testing/selftests/mm/Makefile       |   1 +
 tools/testing/selftests/mm/pud_thp_test.c | 360 +++++++++++++++
 15 files changed, 1197 insertions(+), 42 deletions(-)
 create mode 100644 tools/testing/selftests/mm/pud_thp_test.c

-- 
2.47.3
Enable the memory reclaim and migration paths to handle PUD THPs
correctly by splitting them before proceeding.

Memory reclaim needs to unmap pages before they can be reclaimed. For
PUD THPs, the unmap path now passes TTU_SPLIT_HUGE_PUD when unmapping
PUD-sized folios. This triggers the PUD split during the unmap phase,
converting the single PUD mapping into 262144 PTE mappings. Reclaim
then proceeds normally with the individual pages. This follows the same
pattern used for PMD THPs with TTU_SPLIT_HUGE_PMD.

When migration encounters a PUD-sized folio, it now splits the folio
first using the standard folio split mechanism. The resulting smaller
folios (or individual pages) can then be migrated normally. This
matches how PMD THPs are handled when PMD migration is not supported on
a given architecture.

The split-before-migrate approach means PUD THPs will be broken up
during NUMA balancing or memory compaction. While this loses the TLB
benefit of the large mapping, it allows these memory management
operations to proceed. Future work could add PUD-level migration
entries to preserve the mapping through migration.

Signed-off-by: Usama Arif <usamaarif642@gmail.com>
---
 include/linux/huge_mm.h | 11 ++++++
 mm/huge_memory.c        | 83 +++++++++++++++++++++++++++++++++++++----
 mm/migrate.c            | 17 +++++++++
 mm/vmscan.c             |  2 +
 4 files changed, 105 insertions(+), 8 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index a292035c0270f..8b2bffda4b4f3 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -559,6 +559,17 @@ static inline bool folio_test_pmd_mappable(struct folio *folio)
 	return folio_order(folio) >= HPAGE_PMD_ORDER;
 }
 
+/**
+ * folio_test_pud_mappable - Can we map this folio with a PUD?
+ * @folio: The folio to test
+ *
+ * Return: true - @folio can be PUD-mapped, false - @folio cannot be PUD-mapped.
+ */
+static inline bool folio_test_pud_mappable(struct folio *folio)
+{
+	return folio_order(folio) >= HPAGE_PUD_ORDER;
+}
+
 vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf);
 vm_fault_t do_huge_pmd_device_private(struct vm_fault *vmf);
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 39b8212b5abd4..87b2c21df4a49 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2228,9 +2228,17 @@ int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		goto out_unlock;
 
 	/*
-	 * TODO: once we support anonymous pages, use
-	 * folio_try_dup_anon_rmap_*() and split if duplicating fails.
+	 * For anonymous pages, split to PTE level.
+	 * This simplifies fork handling - we don't need to duplicate
+	 * the complex anon rmap at PUD level.
 	 */
+	if (vma_is_anonymous(vma)) {
+		spin_unlock(src_ptl);
+		spin_unlock(dst_ptl);
+		__split_huge_pud(vma, src_pud, addr);
+		return -EAGAIN;
+	}
+
 	if (is_cow_mapping(vma->vm_flags) && pud_write(pud)) {
 		pudp_set_wrprotect(src_mm, addr, src_pud);
 		pud = pud_wrprotect(pud);
@@ -3099,11 +3107,29 @@ int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
 {
 	spinlock_t *ptl;
 	pud_t orig_pud;
+	pmd_t *pmd_table;
+	pgtable_t pte_table;
+	int nr_pte_tables = 0;
 
 	ptl = __pud_trans_huge_lock(pud, vma);
 	if (!ptl)
 		return 0;
 
+	/*
+	 * Withdraw any deposited page tables before clearing the PUD.
+	 * These need to be freed and their counters decremented.
+	 */
+	pmd_table = pgtable_trans_huge_pud_withdraw(tlb->mm, pud);
+	if (pmd_table) {
+		while ((pte_table = pud_withdraw_pte(pmd_table)) != NULL) {
+			pte_free(tlb->mm, pte_table);
+			mm_dec_nr_ptes(tlb->mm);
+			nr_pte_tables++;
+		}
+		pmd_free(tlb->mm, pmd_table);
+		mm_dec_nr_pmds(tlb->mm);
+	}
+
 	orig_pud = pudp_huge_get_and_clear_full(vma, addr, pud, tlb->fullmm);
 	arch_check_zapped_pud(vma, orig_pud);
 	tlb_remove_pud_tlb_entry(tlb, pud, addr);
@@ -3114,14 +3140,15 @@ int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		struct page *page = NULL;
 		struct folio *folio;
 
-		/* No support for anonymous PUD pages or migration yet */
-		VM_WARN_ON_ONCE(vma_is_anonymous(vma) ||
-				!pud_present(orig_pud));
+		VM_WARN_ON_ONCE(!pud_present(orig_pud));
 
 		page = pud_page(orig_pud);
 		folio = page_folio(page);
 		folio_remove_rmap_pud(folio, page, vma);
-		add_mm_counter(tlb->mm, mm_counter_file(folio), -HPAGE_PUD_NR);
+		if (vma_is_anonymous(vma))
+			add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PUD_NR);
+		else
+			add_mm_counter(tlb->mm, mm_counter_file(folio), -HPAGE_PUD_NR);
 
 		spin_unlock(ptl);
 		tlb_remove_page_size(tlb, page, HPAGE_PUD_SIZE);
@@ -3729,15 +3756,53 @@ static inline void split_huge_pmd_if_needed(struct vm_area_struct *vma, unsigned
 		split_huge_pmd_address(vma, address, false);
 }
 
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+static void split_huge_pud_address(struct vm_area_struct *vma, unsigned long address)
+{
+	pud_t *pud = mm_find_pud(vma->vm_mm, address);
+
+	if (!pud)
+		return;
+
+	__split_huge_pud(vma, pud, address);
+}
+
+static inline void split_huge_pud_if_needed(struct vm_area_struct *vma, unsigned long address)
+{
+	/*
+	 * If the new address isn't PUD-aligned and it could previously
+	 * contain a PUD huge page: check if we need to split it.
+	 */
+	if (!IS_ALIGNED(address, HPAGE_PUD_SIZE) &&
+	    range_in_vma(vma, ALIGN_DOWN(address, HPAGE_PUD_SIZE),
+			 ALIGN(address, HPAGE_PUD_SIZE)))
+		split_huge_pud_address(vma, address);
+}
+#else
+static inline void split_huge_pud_if_needed(struct vm_area_struct *vma, unsigned long address)
+{
+}
+#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
+
 void vma_adjust_trans_huge(struct vm_area_struct *vma, unsigned long start,
 			   unsigned long end, struct vm_area_struct *next)
 {
-	/* Check if we need to split start first. */
+	/* Check if we need to split PUD THP at start first. */
+	split_huge_pud_if_needed(vma, start);
+
+	/* Check if we need to split PUD THP at end. */
+	split_huge_pud_if_needed(vma, end);
+
+	/* If we're incrementing next->vm_start, we might need to split it. */
+	if (next)
+		split_huge_pud_if_needed(next, end);
+
+	/* Check if we need to split PMD THP at start. */
 	split_huge_pmd_if_needed(vma, start);
 
-	/* Check if we need to split end next. */
+	/* Check if we need to split PMD THP at end. */
 	split_huge_pmd_if_needed(vma, end);
 
 	/* If we're incrementing next->vm_start, we might need to split it. */
@@ -3752,6 +3817,8 @@ static void unmap_folio(struct folio *folio)
 
 	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
 
+	if (folio_test_pud_mappable(folio))
+		ttu_flags |= TTU_SPLIT_HUGE_PUD;
 	if (folio_test_pmd_mappable(folio))
 		ttu_flags |= TTU_SPLIT_HUGE_PMD;
 
diff --git a/mm/migrate.c b/mm/migrate.c
index 4688b9e38cd2f..2d3d2f5585d14 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1859,6 +1859,23 @@ static int migrate_pages_batch(struct list_head *from,
 			 * we will migrate them after the rest of the
 			 * list is processed.
 			 */
+			/*
+			 * PUD-sized folios cannot be migrated directly,
+			 * but can be split. Try to split them first and
+			 * migrate the resulting smaller folios.
+			 */
+			if (folio_test_pud_mappable(folio)) {
+				nr_failed++;
+				stats->nr_thp_failed++;
+				if (!try_split_folio(folio, split_folios, mode)) {
+					stats->nr_thp_split++;
+					stats->nr_split++;
+					continue;
+				}
+				stats->nr_failed_pages += nr_pages;
+				list_move_tail(&folio->lru, ret_folios);
+				continue;
+			}
 			if (!thp_migration_supported() && is_thp) {
 				nr_failed++;
 				stats->nr_thp_failed++;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 619691aa43938..868514a770bf2 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1348,6 +1348,8 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 			enum ttu_flags flags = TTU_BATCH_FLUSH;
 			bool was_swapbacked = folio_test_swapbacked(folio);
 
+			if (folio_test_pud_mappable(folio))
+				flags |= TTU_SPLIT_HUGE_PUD;
 			if (folio_test_pmd_mappable(folio))
 				flags |= TTU_SPLIT_HUGE_PMD;
 			/*
-- 
2.47.3
{ "author": "Usama Arif <usamaarif642@gmail.com>", "date": "Sun, 1 Feb 2026 16:50:22 -0800", "thread_id": "3561FD10-664D-42AA-8351-DE7D8D49D42E@nvidia.com.mbox.gz" }
lkml
[RFC 00/12] mm: PUD (1GB) THP implementation
Add a test that allocates a PUD THP, forks a child process, and has the
child write to the shared memory. This triggers the copy-on-write path
which must split the PUD THP. The test verifies that both parent and
child see correct data after the split.

Signed-off-by: Usama Arif <usamaarif642@gmail.com>
---
 tools/testing/selftests/mm/pud_thp_test.c | 44 +++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/tools/testing/selftests/mm/pud_thp_test.c b/tools/testing/selftests/mm/pud_thp_test.c
index 7a1f0b0f81468..27a509cd477d5 100644
--- a/tools/testing/selftests/mm/pud_thp_test.c
+++ b/tools/testing/selftests/mm/pud_thp_test.c
@@ -181,4 +181,48 @@ TEST_F(pud_thp, read_write_access)
 	ASSERT_EQ(errors, 0);
 }
 
+/*
+ * Test: Fork and copy-on-write
+ * Verifies that COW correctly splits the PUD THP and isolates parent/child
+ */
+TEST_F(pud_thp, fork_cow)
+{
+	unsigned long *ptr = (unsigned long *)self->aligned;
+	unsigned char *bytes = (unsigned char *)self->aligned;
+	pid_t pid;
+	int status;
+	unsigned long split_after;
+
+	/* Initialize memory with known pattern */
+	memset(self->aligned, 0xCC, PUD_SIZE);
+
+	pid = fork();
+	ASSERT_GE(pid, 0);
+
+	if (pid == 0) {
+		/* Child: write to trigger COW */
+		ptr[0] = 0x12345678UL;
+
+		/* Verify write succeeded and rest of memory unchanged */
+		if (ptr[0] != 0x12345678UL)
+			_exit(1);
+		if (bytes[PAGE_SIZE] != 0xCC)
+			_exit(2);
+
+		_exit(0);
+	}
+
+	/* Parent: wait for child */
+	waitpid(pid, &status, 0);
+	ASSERT_TRUE(WIFEXITED(status));
+	ASSERT_EQ(WEXITSTATUS(status), 0);
+
+	/* Verify parent memory unchanged (COW should have given child a copy) */
+	ASSERT_EQ(bytes[0], 0xCC);
+
+	split_after = read_vmstat("thp_split_pud");
+	TH_LOG("Fork COW completed (thp_split_pud: %lu -> %lu)",
+	       self->split_before, split_after);
+}
+
 TEST_HARNESS_MAIN
-- 
2.47.3
{ "author": "Usama Arif <usamaarif642@gmail.com>", "date": "Sun, 1 Feb 2026 16:50:25 -0800", "thread_id": "3561FD10-664D-42AA-8351-DE7D8D49D42E@nvidia.com.mbox.gz" }
Add a selftest for PUD-level THPs (1GB THPs) with test infrastructure and a basic allocation test. The test uses the kselftest harness FIXTURE/TEST_F framework. A shared fixture allocates a 2GB anonymous mapping and computes a PUD-aligned address within it. Helper functions read THP counters from /proc/vmstat and mTHP statistics from sysfs. The basic allocation test verifies the fundamental PUD THP allocation path by touching a PUD-aligned region and checking that the mTHP anon_fault_alloc counter increments, confirming a 1GB folio was allocated. Signed-off-by: Usama Arif <usamaarif642@gmail.com> --- tools/testing/selftests/mm/Makefile | 1 + tools/testing/selftests/mm/pud_thp_test.c | 161 ++++++++++++++++++++++ 2 files changed, 162 insertions(+) create mode 100644 tools/testing/selftests/mm/pud_thp_test.c diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile index eaf9312097f7b..ab79f1693941a 100644 --- a/tools/testing/selftests/mm/Makefile +++ b/tools/testing/selftests/mm/Makefile @@ -88,6 +88,7 @@ TEST_GEN_FILES += pagemap_ioctl TEST_GEN_FILES += pfnmap TEST_GEN_FILES += process_madv TEST_GEN_FILES += prctl_thp_disable +TEST_GEN_FILES += pud_thp_test TEST_GEN_FILES += thuge-gen TEST_GEN_FILES += transhuge-stress TEST_GEN_FILES += uffd-stress diff --git a/tools/testing/selftests/mm/pud_thp_test.c b/tools/testing/selftests/mm/pud_thp_test.c new file mode 100644 index 0000000000000..6f0c02c6afd3a --- /dev/null +++ b/tools/testing/selftests/mm/pud_thp_test.c @@ -0,0 +1,161 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Test program for PUD-level Transparent Huge Pages (1GB anonymous THP) + * + * Prerequisites: + * - Kernel with PUD THP support (CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) + * - THP enabled: echo always > /sys/kernel/mm/transparent_hugepage/enabled + * - PUD THP enabled: echo always > /sys/kernel/mm/transparent_hugepage/hugepages-1048576kB/enabled + */ + +#define _GNU_SOURCE +#include <stdio.h> +#include <stdlib.h> 
+#include <string.h> +#include <unistd.h> +#include <sys/mman.h> +#include <sys/wait.h> +#include <fcntl.h> +#include <errno.h> +#include <stdint.h> +#include <sys/syscall.h> + +#include "kselftest_harness.h" + +#define PUD_SIZE (1UL << 30) /* 1GB */ +#define PMD_SIZE (1UL << 21) /* 2MB */ +#define PAGE_SIZE (1UL << 12) /* 4KB */ + +#define TEST_REGION_SIZE (2 * PUD_SIZE) /* 2GB to ensure PUD alignment */ + +/* Get PUD-aligned address within a region */ +static inline void *pud_align(void *addr) +{ + return (void *)(((unsigned long)addr + PUD_SIZE - 1) & ~(PUD_SIZE - 1)); +} + +/* Read vmstat counter */ +static unsigned long read_vmstat(const char *name) +{ + FILE *fp; + char line[256]; + unsigned long value = 0; + + fp = fopen("/proc/vmstat", "r"); + if (!fp) + return 0; + + while (fgets(line, sizeof(line), fp)) { + if (strncmp(line, name, strlen(name)) == 0 && + line[strlen(name)] == ' ') { + sscanf(line + strlen(name), " %lu", &value); + break; + } + } + fclose(fp); + return value; +} + +/* Read mTHP stats for PUD order (1GB = 1048576kB) */ +static unsigned long read_mthp_stat(const char *stat_name) +{ + char path[256]; + char buf[64]; + int fd; + ssize_t ret; + unsigned long value = 0; + + snprintf(path, sizeof(path), + "/sys/kernel/mm/transparent_hugepage/hugepages-1048576kB/stats/%s", + stat_name); + fd = open(path, O_RDONLY); + if (fd < 0) + return 0; + ret = read(fd, buf, sizeof(buf) - 1); + close(fd); + if (ret <= 0) + return 0; + buf[ret] = '\0'; + sscanf(buf, "%lu", &value); + return value; +} + +/* Check if PUD THP is enabled */ +static int pud_thp_enabled(void) +{ + char buf[64]; + int fd; + ssize_t ret; + + fd = open("/sys/kernel/mm/transparent_hugepage/hugepages-1048576kB/enabled", O_RDONLY); + if (fd < 0) + return 0; + ret = read(fd, buf, sizeof(buf) - 1); + close(fd); + if (ret <= 0) + return 0; + buf[ret] = '\0'; + + /* Check if [always] or [madvise] is set */ + if (strstr(buf, "[always]") || strstr(buf, "[madvise]")) + return 1; + return 0; +} + 
+/*
+ * Main fixture for PUD THP tests
+ * Allocates a 2GB region and provides a PUD-aligned pointer within it
+ */
+FIXTURE(pud_thp)
+{
+	void *mem;	/* Base mmap allocation */
+	void *aligned;	/* PUD-aligned pointer within mem */
+	unsigned long mthp_alloc_before;
+	unsigned long split_before;
+};
+
+FIXTURE_SETUP(pud_thp)
+{
+	if (!pud_thp_enabled())
+		SKIP(return, "PUD THP not enabled in sysfs");
+
+	self->mthp_alloc_before = read_mthp_stat("anon_fault_alloc");
+	self->split_before = read_vmstat("thp_split_pud");
+
+	self->mem = mmap(NULL, TEST_REGION_SIZE, PROT_READ | PROT_WRITE,
+			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+	ASSERT_NE(self->mem, MAP_FAILED);
+
+	self->aligned = pud_align(self->mem);
+}
+
+FIXTURE_TEARDOWN(pud_thp)
+{
+	if (self->mem && self->mem != MAP_FAILED)
+		munmap(self->mem, TEST_REGION_SIZE);
+}
+
+/*
+ * Test: Basic PUD THP allocation
+ * Verifies that touching a PUD-aligned region allocates a PUD THP
+ */
+TEST_F(pud_thp, basic_allocation)
+{
+	unsigned long mthp_alloc_after;
+
+	/* Touch memory to trigger page fault and PUD THP allocation */
+	memset(self->aligned, 0xAB, PUD_SIZE);
+
+	mthp_alloc_after = read_mthp_stat("anon_fault_alloc");
+
+	/*
+	 * If mTHP allocation counter increased, a PUD THP was allocated.
+	 */
+	if (mthp_alloc_after <= self->mthp_alloc_before)
+		SKIP(return, "PUD THP not allocated");
+
+	TH_LOG("PUD THP allocated (anon_fault_alloc: %lu -> %lu)",
+	       self->mthp_alloc_before, mthp_alloc_after);
+}
+
+TEST_HARNESS_MAIN
-- 
2.47.3
{ "author": "Usama Arif <usamaarif642@gmail.com>", "date": "Sun, 1 Feb 2026 16:50:23 -0800", "thread_id": "3561FD10-664D-42AA-8351-DE7D8D49D42E@nvidia.com.mbox.gz" }
lkml
[RFC 00/12] mm: PUD (1GB) THP implementation
This is an RFC series to implement 1GB PUD-level THPs, allowing applications to benefit from reduced TLB pressure without requiring hugetlbfs. The patches are based on top of f9b74c13b773b7c7e4920d7bc214ea3d5f37b422 from mm-stable (6.19-rc6).

Motivation: Why 1GB THP over hugetlbfs?
=======================================

While hugetlbfs provides 1GB huge pages today, it has significant limitations that make it unsuitable for many workloads:

1. Static Reservation: hugetlbfs requires pre-allocating huge pages at boot or runtime, taking memory away. This requires capacity planning and administrative overhead, and makes workload orchestration much more complex, especially when colocating with workloads that don't use hugetlbfs.

2. No Fallback: If a 1GB huge page cannot be allocated, hugetlbfs fails rather than falling back to smaller pages. This makes it fragile under memory pressure.

3. No Splitting: hugetlbfs pages cannot be split when only partial access is needed, leading to memory waste and preventing partial reclaim.

4. Memory Accounting: hugetlbfs memory is accounted separately and cannot be easily shared with regular memory pools.

PUD THP solves these limitations by integrating 1GB pages into the existing THP infrastructure.

Performance Results
===================

Benchmark results of these patches on an Intel Xeon Platinum 8321HC:

Test: True Random Memory Access [1] test of a 4GB memory region with a pointer-chasing workload (4M random pointer dereferences through memory):

| Metric            | PUD THP (1GB) | PMD THP (2MB) | Change       |
|-------------------|---------------|---------------|--------------|
| Memory access     | 88 ms         | 134 ms        | 34% faster   |
| Page fault time   | 898 ms        | 331 ms        | 2.7x slower  |

Page faulting 1G pages is 2.7x slower (allocating 1G pages is hard :)). For long-running workloads this is a one-off cost, and the 34% improvement in access latency provides a significant benefit.

ARM with 64K PAGE_SIZE supports 512M PMD THPs.
At Meta, we have a CPU-bound workload running on a large number of ARM servers (256G). I enabled the 512M THP setting to always on 100 servers in production (didn't really have high expectations :)). The average memory used for the workload increased from 217G to 233G. The amount of memory backed by 512M pages was 68G! The dTLB misses went down by 26% and the PID multiplier increased input by 5.9% (this is a very significant improvement in workload performance). A significant number of these THPs were faulted in at application start and were present across different VMAs. Of course, getting these 512M pages is easier on ARM due to the bigger PAGE_SIZE and pageblock order. I am hoping that these patches for 1G THP can be used to provide similar benefits for x86. I expect workloads to fault them in at start time when there is plenty of free memory available.

Previous attempt by Zi Yan
==========================

Zi Yan attempted 1G THPs [2] in kernel version 5.11. There have been significant changes in the kernel since then, including the folio conversion, the mTHP framework, ptdesc, rmap changes, etc. I found it easier to use the current PMD code as a reference for making 1G PUD THP work. I am hoping Zi can provide guidance on these patches!

Major Design Decisions
======================

1. No shared 1G zero page: the memory cost would be quite significant!

2. Page Table Pre-deposit Strategy

PMD THP deposits a single PTE page table. PUD THP deposits 512 PTE page tables (one for each potential PMD entry after a split). We allocate a PMD page table and use its pmd_huge_pte list to store the deposited PTE tables. This ensures split operations don't fail due to page table allocation failures (at the cost of 2M per PUD THP).

3. Split to Base Pages

When a PUD THP must be split (COW, partial unmap, mprotect), we split directly to base pages (262,144 PTEs). The ideal thing would be to split to 2M pages first and then to 4K pages if needed.
However, this would require significant rmap and mapcount tracking changes.

4. COW and fork handling via split

Copy-on-write and fork for a PUD THP trigger a split to base pages, which then uses the existing PTE-level COW infrastructure. Getting another 1G region is hard and could fail, and if only a 4K page is written, copying 1G is a waste. Probably this should only be done on CoW and not fork?

5. Migration via split

Split the PUD to PTEs and migrate individual pages. It is going to be difficult to find 1G of contiguous memory to migrate to. Maybe it's better to not allow migration of PUDs at all? I am more tempted to not allow migration, but have kept splitting in this RFC.

Reviewers guide
===============

Most of the code is written by adapting the PMD code. For example, the PUD page fault path is very similar to PMD; the differences are the lack of a shared zero page and the page table deposit strategy. I think the easiest way to review this series is to compare with the PMD code.

Test results
============

1..7
# Starting 7 tests from 1 test cases.
#  RUN           pud_thp.basic_allocation ...
# pud_thp_test.c:169:basic_allocation:PUD THP allocated (anon_fault_alloc: 0 -> 1)
#            OK  pud_thp.basic_allocation
ok 1 pud_thp.basic_allocation
#  RUN           pud_thp.read_write_access ...
#            OK  pud_thp.read_write_access
ok 2 pud_thp.read_write_access
#  RUN           pud_thp.fork_cow ...
# pud_thp_test.c:236:fork_cow:Fork COW completed (thp_split_pud: 0 -> 1)
#            OK  pud_thp.fork_cow
ok 3 pud_thp.fork_cow
#  RUN           pud_thp.partial_munmap ...
# pud_thp_test.c:267:partial_munmap:Partial munmap completed (thp_split_pud: 1 -> 2)
#            OK  pud_thp.partial_munmap
ok 4 pud_thp.partial_munmap
#  RUN           pud_thp.mprotect_split ...
# pud_thp_test.c:293:mprotect_split:mprotect split completed (thp_split_pud: 2 -> 3)
#            OK  pud_thp.mprotect_split
ok 5 pud_thp.mprotect_split
#  RUN           pud_thp.reclaim_pageout ...
# pud_thp_test.c:322:reclaim_pageout:Reclaim completed (thp_split_pud: 3 -> 4)
#            OK  pud_thp.reclaim_pageout
ok 6 pud_thp.reclaim_pageout
#  RUN           pud_thp.migration_mbind ...
# pud_thp_test.c:356:migration_mbind:Migration completed (thp_split_pud: 4 -> 5)
#            OK  pud_thp.migration_mbind
ok 7 pud_thp.migration_mbind
# PASSED: 7 / 7 tests passed.
# Totals: pass:7 fail:0 xfail:0 xpass:0 skip:0 error:0

[1] https://gist.github.com/uarif1/bf279b2a01a536cda945ff9f40196a26
[2] https://lore.kernel.org/linux-mm/20210224223536.803765-1-zi.yan@sent.com/

Signed-off-by: Usama Arif <usamaarif642@gmail.com>

Usama Arif (12):
  mm: add PUD THP ptdesc and rmap support
  mm/thp: add mTHP stats infrastructure for PUD THP
  mm: thp: add PUD THP allocation and fault handling
  mm: thp: implement PUD THP split to PTE level
  mm: thp: add reclaim and migration support for PUD THP
  selftests/mm: add PUD THP basic allocation test
  selftests/mm: add PUD THP read/write access test
  selftests/mm: add PUD THP fork COW test
  selftests/mm: add PUD THP partial munmap test
  selftests/mm: add PUD THP mprotect split test
  selftests/mm: add PUD THP reclaim test
  selftests/mm: add PUD THP migration test

 include/linux/huge_mm.h                   |  60 ++-
 include/linux/mm.h                        |  19 +
 include/linux/mm_types.h                  |   5 +-
 include/linux/pgtable.h                   |   8 +
 include/linux/rmap.h                      |   7 +-
 mm/huge_memory.c                          | 535 +++++++++++++++++++++-
 mm/internal.h                             |   3 +
 mm/memory.c                               |   8 +-
 mm/migrate.c                              |  17 +
 mm/page_vma_mapped.c                      |  35 ++
 mm/pgtable-generic.c                      |  83 ++++
 mm/rmap.c                                 |  96 +++-
 mm/vmscan.c                               |   2 +
 tools/testing/selftests/mm/Makefile       |   1 +
 tools/testing/selftests/mm/pud_thp_test.c | 360 +++++++++++++++
 15 files changed, 1197 insertions(+), 42 deletions(-)
 create mode 100644 tools/testing/selftests/mm/pud_thp_test.c

-- 
2.47.3
Add a test that changes permissions on a portion of a PUD THP using mprotect. Since different parts now have different permissions, the PUD must be split. The test verifies correct behavior after the permission change.

Signed-off-by: Usama Arif <usamaarif642@gmail.com>
---
 tools/testing/selftests/mm/pud_thp_test.c | 26 +++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/tools/testing/selftests/mm/pud_thp_test.c b/tools/testing/selftests/mm/pud_thp_test.c
index 8d4cb0e60f7f7..b59eb470adbba 100644
--- a/tools/testing/selftests/mm/pud_thp_test.c
+++ b/tools/testing/selftests/mm/pud_thp_test.c
@@ -256,4 +256,30 @@ TEST_F(pud_thp, partial_munmap)
 		self->split_before, split_after);
 }
 
+/*
+ * Test: mprotect triggers split
+ * Verifies that changing protection on part of a PUD THP splits it
+ */
+TEST_F(pud_thp, mprotect_split)
+{
+	volatile unsigned char *p = (unsigned char *)self->aligned;
+	unsigned long split_after;
+	int ret;
+
+	/* Touch memory to allocate PUD THP */
+	memset(self->aligned, 0xEE, PUD_SIZE);
+
+	/* Change protection on a 2MB region - should trigger PUD split */
+	ret = mprotect((char *)self->aligned + PMD_SIZE, PMD_SIZE, PROT_READ);
+	ASSERT_EQ(ret, 0);
+
+	split_after = read_vmstat("thp_split_pud");
+
+	/* Verify memory still readable */
+	ASSERT_EQ(*p, 0xEE);
+
+	TH_LOG("mprotect split completed (thp_split_pud: %lu -> %lu)",
+	       self->split_before, split_after);
+}
+
 TEST_HARNESS_MAIN
-- 
2.47.3
{ "author": "Usama Arif <usamaarif642@gmail.com>", "date": "Sun, 1 Feb 2026 16:50:27 -0800", "thread_id": "3561FD10-664D-42AA-8351-DE7D8D49D42E@nvidia.com.mbox.gz" }
Add a test that uses MADV_PAGEOUT to advise the kernel to page out the PUD THP memory. This exercises the reclaim path, which must split the PUD THP before reclaiming the individual pages.

Signed-off-by: Usama Arif <usamaarif642@gmail.com>
---
 tools/testing/selftests/mm/pud_thp_test.c | 33 +++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/tools/testing/selftests/mm/pud_thp_test.c b/tools/testing/selftests/mm/pud_thp_test.c
index b59eb470adbba..961fdc489d8a2 100644
--- a/tools/testing/selftests/mm/pud_thp_test.c
+++ b/tools/testing/selftests/mm/pud_thp_test.c
@@ -28,6 +28,10 @@
 
 #define TEST_REGION_SIZE	(2 * PUD_SIZE)	/* 2GB to ensure PUD alignment */
 
+#ifndef MADV_PAGEOUT
+#define MADV_PAGEOUT	21
+#endif
+
 /* Get PUD-aligned address within a region */
 static inline void *pud_align(void *addr)
 {
@@ -282,4 +286,33 @@ TEST_F(pud_thp, mprotect_split)
 		self->split_before, split_after);
 }
 
+/*
+ * Test: Reclaim via MADV_PAGEOUT
+ * Verifies that reclaim path correctly handles PUD THPs
+ */
+TEST_F(pud_thp, reclaim_pageout)
+{
+	volatile unsigned char *p;
+	unsigned long split_after;
+	int ret;
+
+	/* Touch memory to allocate PUD THP */
+	memset(self->aligned, 0xAA, PUD_SIZE);
+
+	/* Try to reclaim the pages */
+	ret = madvise(self->aligned, PUD_SIZE, MADV_PAGEOUT);
+	if (ret < 0 && errno == EINVAL)
+		SKIP(return, "MADV_PAGEOUT not supported");
+	ASSERT_EQ(ret, 0);
+
+	split_after = read_vmstat("thp_split_pud");
+
+	/* Touch memory again to verify it's still accessible */
+	p = (unsigned char *)self->aligned;
+	(void)*p;	/* Read to bring pages back if swapped */
+
+	TH_LOG("Reclaim completed (thp_split_pud: %lu -> %lu)",
+	       self->split_before, split_after);
+}
+
 TEST_HARNESS_MAIN
-- 
2.47.3
{ "author": "Usama Arif <usamaarif642@gmail.com>", "date": "Sun, 1 Feb 2026 16:50:28 -0800", "thread_id": "3561FD10-664D-42AA-8351-DE7D8D49D42E@nvidia.com.mbox.gz" }
Add a test that verifies data integrity across a 1GB PUD THP region by writing patterns at page boundaries and reading them back.

Signed-off-by: Usama Arif <usamaarif642@gmail.com>
---
 tools/testing/selftests/mm/pud_thp_test.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/tools/testing/selftests/mm/pud_thp_test.c b/tools/testing/selftests/mm/pud_thp_test.c
index 6f0c02c6afd3a..7a1f0b0f81468 100644
--- a/tools/testing/selftests/mm/pud_thp_test.c
+++ b/tools/testing/selftests/mm/pud_thp_test.c
@@ -158,4 +158,27 @@ TEST_F(pud_thp, basic_allocation)
 		self->mthp_alloc_before, mthp_alloc_after);
 }
 
+/*
+ * Test: Read/write access patterns
+ * Verifies data integrity across the entire 1GB region
+ */
+TEST_F(pud_thp, read_write_access)
+{
+	unsigned long *ptr = (unsigned long *)self->aligned;
+	size_t i;
+	int errors = 0;
+
+	/* Write pattern - sample every page to reduce test time */
+	for (i = 0; i < PUD_SIZE / sizeof(unsigned long); i += PAGE_SIZE / sizeof(unsigned long))
+		ptr[i] = i ^ 0xDEADBEEFUL;
+
+	/* Verify pattern */
+	for (i = 0; i < PUD_SIZE / sizeof(unsigned long); i += PAGE_SIZE / sizeof(unsigned long)) {
+		if (ptr[i] != (i ^ 0xDEADBEEFUL))
+			errors++;
+	}
+
+	ASSERT_EQ(errors, 0);
+}
+
 TEST_HARNESS_MAIN
-- 
2.47.3
{ "author": "Usama Arif <usamaarif642@gmail.com>", "date": "Sun, 1 Feb 2026 16:50:24 -0800", "thread_id": "3561FD10-664D-42AA-8351-DE7D8D49D42E@nvidia.com.mbox.gz" }
lkml
[RFC 00/12] mm: PUD (1GB) THP implementation
This is an RFC series to implement 1GB PUD-level THPs, allowing applications to benefit from reduced TLB pressure without requiring hugetlbfs. The patches are based on top of f9b74c13b773b7c7e4920d7bc214ea3d5f37b422 from mm-stable (6.19-rc6). Motivation: Why 1GB THP over hugetlbfs? ======================================= While hugetlbfs provides 1GB huge pages today, it has significant limitations that make it unsuitable for many workloads: 1. Static Reservation: hugetlbfs requires pre-allocating huge pages at boot or runtime, taking memory away. This requires capacity planning, administrative overhead, and makes workload orchastration much much more complex, especially colocating with workloads that don't use hugetlbfs. 4. No Fallback: If a 1GB huge page cannot be allocated, hugetlbfs fails rather than falling back to smaller pages. This makes it fragile under memory pressure. 4. No Splitting: hugetlbfs pages cannot be split when only partial access is needed, leading to memory waste and preventing partial reclaim. 5. Memory Accounting: hugetlbfs memory is accounted separately and cannot be easily shared with regular memory pools. PUD THP solves these limitations by integrating 1GB pages into the existing THP infrastructure. Performance Results =================== Benchmark results of these patches on Intel Xeon Platinum 8321HC: Test: True Random Memory Access [1] test of 4GB memory region with pointer chasing workload (4M random pointer dereferences through memory): | Metric | PUD THP (1GB) | PMD THP (2MB) | Change | |-------------------|---------------|---------------|--------------| | Memory access | 88 ms | 134 ms | 34% faster | | Page fault time | 898 ms | 331 ms | 2.7x slower | Page faulting 1G pages is 2.7x slower (Allocating 1G pages is hard :)). For long-running workloads this will be a one-off cost, and the 34% improvement in access latency provides significant benefit. ARM with 64K PAGE_SZIE supports 512M PMD THPs. 
In meta, we have a CPU bound workload running on a large number of ARM servers (256G). I enabled the 512M THP settings to always for a 100 servers in production (didn't really have high expectations :)). The average memory used for the workload increased from 217G to 233G. The amount of memory backed by 512M pages was 68G! The dTLB misses went down by 26% and the PID multiplier increased input by 5.9% (This is a very significant improvment in workload performance). A significant number of these THPs were faulted in at application start when were present across different VMAs. Ofcourse getting these 512M pages is easier on ARM due to bigger PAGE_SIZE and pageblock order. I am hoping that these patches for 1G THP can be used to provide similar benefits for x86. I expect workloads to fault them in at start time when there is plenty of free memory available. Previous attempt by Zi Yan ========================== Zi Yan attempted 1G THPs [2] in kernel version 5.11. There have been significant changes in kernel since then, including folio conversion, mTHP framework, ptdesc, rmap changes, etc. I found it easier to use the current PMD code as reference for making 1G PUD THP work. I am hoping Zi can provide guidance on these patches! Major Design Decisions ====================== 1. No shared 1G zero page: The memory cost would be quite significant! 2. Page Table Pre-deposit Strategy PMD THP deposits a single PTE page table. PUD THP deposits 512 PTE page tables (one for each potential PMD entry after split). We allocate a PMD page table and use its pmd_huge_pte list to store the deposited PTE tables. This ensures split operations don't fail due to page table allocation failures (at the cost of 2M per PUD THP) 3. Split to Base Pages When a PUD THP must be split (COW, partial unmap, mprotect), we split directly to base pages (262,144 PTEs). The ideal thing would be to split to 2M pages and then to 4K pages if needed. 
However, this would require significant rmap and mapcount tracking changes.

4. COW and fork handling via split

Copy-on-write and fork for a PUD THP trigger a split to base pages, then use the existing PTE-level COW infrastructure. Getting another 1G region is hard and could fail, and if only a 4K page is written, copying 1G is a waste. Probably this should only be done on CoW and not fork?

5. Migration via split

Split the PUD to PTEs and migrate individual pages. It is going to be difficult to find 1G of contiguous memory to migrate to. Maybe it's better to not allow migration of PUDs at all? I am more tempted to not allow migration, but have kept splitting in this RFC.

Reviewers guide
===============

Most of the code is written by adapting from the PMD code. For example, the PUD page fault path is very similar to PMD; the differences are no shared zero page and the page table deposit strategy. I think the easiest way to review this series is to compare with the PMD code.

Test results
============

1..7
# Starting 7 tests from 1 test cases.
# RUN pud_thp.basic_allocation ...
# pud_thp_test.c:169:basic_allocation:PUD THP allocated (anon_fault_alloc: 0 -> 1)
# OK pud_thp.basic_allocation
ok 1 pud_thp.basic_allocation
# RUN pud_thp.read_write_access ...
# OK pud_thp.read_write_access
ok 2 pud_thp.read_write_access
# RUN pud_thp.fork_cow ...
# pud_thp_test.c:236:fork_cow:Fork COW completed (thp_split_pud: 0 -> 1)
# OK pud_thp.fork_cow
ok 3 pud_thp.fork_cow
# RUN pud_thp.partial_munmap ...
# pud_thp_test.c:267:partial_munmap:Partial munmap completed (thp_split_pud: 1 -> 2)
# OK pud_thp.partial_munmap
ok 4 pud_thp.partial_munmap
# RUN pud_thp.mprotect_split ...
# pud_thp_test.c:293:mprotect_split:mprotect split completed (thp_split_pud: 2 -> 3)
# OK pud_thp.mprotect_split
ok 5 pud_thp.mprotect_split
# RUN pud_thp.reclaim_pageout ...
# pud_thp_test.c:322:reclaim_pageout:Reclaim completed (thp_split_pud: 3 -> 4)
# OK pud_thp.reclaim_pageout
ok 6 pud_thp.reclaim_pageout
# RUN pud_thp.migration_mbind ...
# pud_thp_test.c:356:migration_mbind:Migration completed (thp_split_pud: 4 -> 5)
# OK pud_thp.migration_mbind
ok 7 pud_thp.migration_mbind
# PASSED: 7 / 7 tests passed.
# Totals: pass:7 fail:0 xfail:0 xpass:0 skip:0 error:0

[1] https://gist.github.com/uarif1/bf279b2a01a536cda945ff9f40196a26
[2] https://lore.kernel.org/linux-mm/20210224223536.803765-1-zi.yan@sent.com/

Signed-off-by: Usama Arif <usamaarif642@gmail.com>

Usama Arif (12):
  mm: add PUD THP ptdesc and rmap support
  mm/thp: add mTHP stats infrastructure for PUD THP
  mm: thp: add PUD THP allocation and fault handling
  mm: thp: implement PUD THP split to PTE level
  mm: thp: add reclaim and migration support for PUD THP
  selftests/mm: add PUD THP basic allocation test
  selftests/mm: add PUD THP read/write access test
  selftests/mm: add PUD THP fork COW test
  selftests/mm: add PUD THP partial munmap test
  selftests/mm: add PUD THP mprotect split test
  selftests/mm: add PUD THP reclaim test
  selftests/mm: add PUD THP migration test

 include/linux/huge_mm.h                   |  60 ++-
 include/linux/mm.h                        |  19 +
 include/linux/mm_types.h                  |   5 +-
 include/linux/pgtable.h                   |   8 +
 include/linux/rmap.h                      |   7 +-
 mm/huge_memory.c                          | 535 +++++++++++++++++++++-
 mm/internal.h                             |   3 +
 mm/memory.c                               |   8 +-
 mm/migrate.c                              |  17 +
 mm/page_vma_mapped.c                      |  35 ++
 mm/pgtable-generic.c                      |  83 ++++
 mm/rmap.c                                 |  96 +++-
 mm/vmscan.c                               |   2 +
 tools/testing/selftests/mm/Makefile       |   1 +
 tools/testing/selftests/mm/pud_thp_test.c | 360 +++++++++++++++
 15 files changed, 1197 insertions(+), 42 deletions(-)
 create mode 100644 tools/testing/selftests/mm/pud_thp_test.c

--
2.47.3
Add a test that allocates a PUD THP and unmaps a 2MB region from the middle. Since the PUD can no longer cover the entire region, it must be split. The test verifies that memory before and after the hole remains accessible with correct data.

Signed-off-by: Usama Arif <usamaarif642@gmail.com>
---
 tools/testing/selftests/mm/pud_thp_test.c | 31 +++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/tools/testing/selftests/mm/pud_thp_test.c b/tools/testing/selftests/mm/pud_thp_test.c
index 27a509cd477d5..8d4cb0e60f7f7 100644
--- a/tools/testing/selftests/mm/pud_thp_test.c
+++ b/tools/testing/selftests/mm/pud_thp_test.c
@@ -225,4 +225,35 @@ TEST_F(pud_thp, fork_cow)
 		self->split_before, split_after);
 }
 
+/*
+ * Test: Partial munmap triggers split
+ * Verifies that unmapping part of a PUD THP splits it correctly
+ */
+TEST_F(pud_thp, partial_munmap)
+{
+	unsigned long *ptr = (unsigned long *)self->aligned;
+	unsigned long *after_hole;
+	unsigned long split_after;
+	int ret;
+
+	/* Touch memory to allocate PUD THP */
+	memset(self->aligned, 0xDD, PUD_SIZE);
+
+	/* Unmap a 2MB region in the middle - should trigger PUD split */
+	ret = munmap((char *)self->aligned + PUD_SIZE / 2, PMD_SIZE);
+	ASSERT_EQ(ret, 0);
+
+	split_after = read_vmstat("thp_split_pud");
+
+	/* Verify memory before the hole is still accessible and correct */
+	ASSERT_EQ(ptr[0], 0xDDDDDDDDDDDDDDDDUL);
+
+	/* Verify memory after the hole is still accessible and correct */
+	after_hole = (unsigned long *)((char *)self->aligned + PUD_SIZE / 2 + PMD_SIZE);
+	ASSERT_EQ(*after_hole, 0xDDDDDDDDDDDDDDDDUL);
+
+	TH_LOG("Partial munmap completed (thp_split_pud: %lu -> %lu)",
+		self->split_before, split_after);
+}
+
 TEST_HARNESS_MAIN
--
2.47.3
{ "author": "Usama Arif <usamaarif642@gmail.com>", "date": "Sun, 1 Feb 2026 16:50:26 -0800", "thread_id": "3561FD10-664D-42AA-8351-DE7D8D49D42E@nvidia.com.mbox.gz" }
lkml
[RFC 00/12] mm: PUD (1GB) THP implementation
Add a test that uses mbind() to change the NUMA memory policy, which triggers migration. The kernel must split PUD THPs before migration since there is no PUD-level migration entry support. The test verifies data integrity after the migration attempt.

Signed-off-by: Usama Arif <usamaarif642@gmail.com>
---
 tools/testing/selftests/mm/pud_thp_test.c | 42 +++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/tools/testing/selftests/mm/pud_thp_test.c b/tools/testing/selftests/mm/pud_thp_test.c
index 961fdc489d8a2..7e227f29e69fb 100644
--- a/tools/testing/selftests/mm/pud_thp_test.c
+++ b/tools/testing/selftests/mm/pud_thp_test.c
@@ -32,6 +32,14 @@
 #define MADV_PAGEOUT 21
 #endif
 
+#ifndef MPOL_BIND
+#define MPOL_BIND 2
+#endif
+
+#ifndef MPOL_MF_MOVE
+#define MPOL_MF_MOVE (1 << 1)
+#endif
+
 /* Get PUD-aligned address within a region */
 static inline void *pud_align(void *addr)
 {
@@ -315,4 +323,38 @@ TEST_F(pud_thp, reclaim_pageout)
 		self->split_before, split_after);
 }
 
+/*
+ * Test: Migration via mbind
+ * Verifies that migration path correctly handles PUD THPs by splitting
+ */
+TEST_F(pud_thp, migration_mbind)
+{
+	unsigned char *bytes = (unsigned char *)self->aligned;
+	unsigned long nodemask = 1UL;	/* Node 0 */
+	unsigned long split_after;
+	int ret;
+
+	/* Touch memory to allocate PUD THP */
+	memset(self->aligned, 0xBB, PUD_SIZE);
+
+	/* Try to migrate by changing NUMA policy */
+	ret = syscall(__NR_mbind, self->aligned, PUD_SIZE, MPOL_BIND, &nodemask,
+		      sizeof(nodemask) * 8, MPOL_MF_MOVE);
+	/*
+	 * mbind may fail with EINVAL (single node) or EIO (migration failed),
+	 * which is acceptable - we just want to exercise the migration path.
+	 */
+	if (ret < 0 && errno != EINVAL && errno != EIO)
+		TH_LOG("mbind returned unexpected error: %s", strerror(errno));
+
+	split_after = read_vmstat("thp_split_pud");
+
+	/* Verify data integrity */
+	ASSERT_EQ(bytes[0], 0xBB);
+	ASSERT_EQ(bytes[PUD_SIZE - 1], 0xBB);
+
+	TH_LOG("Migration completed (thp_split_pud: %lu -> %lu)",
+		self->split_before, split_after);
+}
+
 TEST_HARNESS_MAIN
--
2.47.3
{ "author": "Usama Arif <usamaarif642@gmail.com>", "date": "Sun, 1 Feb 2026 16:50:29 -0800", "thread_id": "3561FD10-664D-42AA-8351-DE7D8D49D42E@nvidia.com.mbox.gz" }
On Sun, 2026-02-01 at 16:50 -0800, Usama Arif wrote:

To address the obvious objection "but how could we possibly allocate 1GB huge pages while the workload is running?", I am planning to pick up the CMA balancing patch series (thank you, Frank) and get it into upstream-ready shape soon.

https://lkml.org/2025/9/15/1735

That patch set looks like another case where no amount of internal testing will find every single corner case, and we'll probably just want to merge it upstream, deploy it experimentally, and aggressively deal with anything that might pop up.

With CMA balancing, it would be possible to have half (or even more) of system memory reserved for movable allocations only, which would make it possible to allocate 1GB huge pages dynamically.

-- 
All Rights Reversed.
{ "author": "Rik van Riel <riel@surriel.com>", "date": "Sun, 01 Feb 2026 21:44:12 -0500", "thread_id": "3561FD10-664D-42AA-8351-DE7D8D49D42E@nvidia.com.mbox.gz" }