Dataset schema:
  data_type: large_string (3 classes)
  source:    large_string (29 classes)
  code:      large_string (lengths 98 to 49.4M)
  filepath:  large_string (lengths 5 to 161)
  message:   large_string (234 classes)
  commit:    large_string (234 classes)
  subject:   large_string (418 classes)
  critique:  large_string (lengths 101 to 1.26M)
  metadata:  dict

data_type: lkml_critique
source: lkml
Hi,

This is based on today's linux.git. A git branch with this (plus a fix
for a Clippy warning on a core Rust for Linux issue which I suspect
others have already found and fixed) is here:

https://github.com/johnhubbard/linux/tree/nova-core-blackwell-v5

This is quite a large overhaul: multiple passes to fix up a lot of
issues found during review, and then I found more while doing the fixes.

Patch 1 is going to be merged separately, but is included here in order
to allow people to apply the series. Patch 2 is going to come from Gary
Guo, not here, but is included for the same reason.

The last two patches, 37 and 38, do not need to be part of this series,
but are best applied *after* the series, in order to catch all the
cases. There are also a few rust/ patches that might need/want to get
merged separately.

It's been tested on Ampere and Blackwell, one each:

  NovaCore 0000:e1:00.0: GPU name: NVIDIA RTX A4000
  NovaCore 0000:01:00.0: GPU name: NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition

Changes in v5 (in highly condensed and summarized form):

* Rebased onto linux.git master.

* Split MCTP protocol into its own module and file.

* Many Rust-based improvements: more use of types, especially. Also
  used Result and Option more.

* Lots of cleanup of comments and print output and error handling.

* Added const_align_up() to rust/ and used it in nova-core. This
  required enabling a Rust feature: inline_const, as recommended by
  Miguel Ojeda.

* Refactoring various things, such as Gpu::new() to own Spec creation,
  and several more such things.

* Fixed three Delta::ZERO busy-polls (patches 21, 24, 31) to use
  non-zero sleep intervals (after realizing that it was a bad choice to
  have zero in there).

* Reduced GH100/GB100 HAL duplication. Made FSP_PKEY_SIZE/FSP_SIG_SIZE
  consistent across patches. Replaced fragile architecture checks with
  chipset.arch(). Renamed LIBOS_BLACKWELL.

* Narrowed the scope of some of the #![expect(dead_code)] cases,
  although that really only matters within the series, not once it is
  fully applied.

John Hubbard (38):
  gpu: nova-core: fix aux device registration for multi-GPU systems
  gpu: nova-core: pass pdev directly to dev_* logging macros
  gpu: nova-core: print FB sizes, along with ranges
  gpu: nova-core: add FbRange.len() and use it in boot.rs
  gpu: nova-core: Hopper/Blackwell: basic GPU identification
  gpu: nova-core: factor .fwsignature* selection into a new find_gsp_sigs_section()
  gpu: nova-core: use GPU Architecture to simplify HAL selections
  gpu: nova-core: apply the one "use" item per line policy to commands.rs
  gpu: nova-core: move GPU init and DMA mask setup into Gpu::new()
  gpu: nova-core: set DMA mask width based on GPU architecture
  gpu: nova-core: Hopper/Blackwell: skip GFW boot waiting
  gpu: nova-core: move firmware image parsing code to firmware.rs
  gpu: nova-core: factor out an elf_str() function
  gpu: nova-core: don't assume 64-bit firmware images
  gpu: nova-core: add support for 32-bit firmware images
  gpu: nova-core: add auto-detection of 32-bit, 64-bit firmware images
  gpu: nova-core: Hopper/Blackwell: add FMC firmware image, in support of FSP
  gpu: nova-core: Hopper/Blackwell: add FSP falcon engine stub
  gpu: nova-core: Hopper/Blackwell: add FSP falcon EMEM operations
  gpu: nova-core: Hopper/Blackwell: add FSP message infrastructure
  rust: ptr: add const_align_up() and enable inline_const feature
  gpu: nova-core: Hopper/Blackwell: calculate reserved FB heap size
  gpu: nova-core: add MCTP/NVDM protocol types for firmware communication
  gpu: nova-core: Hopper/Blackwell: add FSP secure boot completion waiting
  gpu: nova-core: Hopper/Blackwell: add FSP message structures
  gpu: nova-core: Hopper/Blackwell: add FMC signature extraction
  gpu: nova-core: Hopper/Blackwell: add FSP send/receive messaging
  gpu: nova-core: Hopper/Blackwell: add FspCotVersion type
  gpu: nova-core: Hopper/Blackwell: larger non-WPR heap
  gpu: nova-core: Hopper/Blackwell: add FSP Chain of Trust boot
  gpu: nova-core: Blackwell: use correct sysmem flush registers
  gpu: nova-core: Hopper/Blackwell: larger WPR2 (GSP) heap
  gpu: nova-core: refactor SEC2 booter loading into BooterFirmware::run()
  gpu: nova-core: Hopper/Blackwell: add GSP lockdown release polling
  gpu: nova-core: Hopper/Blackwell: new location for PCI config mirror
  gpu: nova-core: Hopper/Blackwell: integrate FSP boot path into boot()
  rust: sizes: add u64 variants of SZ_* constants
  gpu: nova-core: use SZ_*_U64 constants from kernel::sizes

 drivers/gpu/nova-core/driver.rs          |  32 +-
 drivers/gpu/nova-core/falcon.rs          |   1 +
 drivers/gpu/nova-core/falcon/fsp.rs      | 222 ++++++++++
 drivers/gpu/nova-core/falcon/hal.rs      |  20 +-
 drivers/gpu/nova-core/fb.rs              | 123 ++++--
 drivers/gpu/nova-core/fb/hal.rs          |  38 +-
 drivers/gpu/nova-core/fb/hal/ga102.rs    |   2 +-
 drivers/gpu/nova-core/fb/hal/gb100.rs    |  75 ++++
 drivers/gpu/nova-core/fb/hal/gb202.rs    |  62 +++
 drivers/gpu/nova-core/fb/hal/gh100.rs    |  38 ++
 drivers/gpu/nova-core/firmware.rs        | 186 ++++++++
 drivers/gpu/nova-core/firmware/booter.rs |  35 +-
 drivers/gpu/nova-core/firmware/fsp.rs    |  46 ++
 drivers/gpu/nova-core/firmware/gsp.rs    | 140 ++----
 drivers/gpu/nova-core/fsp.rs             | 525 +++++++++++++++++++++++
 drivers/gpu/nova-core/gpu.rs             | 119 ++++-
 drivers/gpu/nova-core/gsp/boot.rs        | 318 ++++++++++----
 drivers/gpu/nova-core/gsp/commands.rs    |   8 +-
 drivers/gpu/nova-core/gsp/fw.rs          |  95 ++--
 drivers/gpu/nova-core/gsp/fw/commands.rs |  32 +-
 drivers/gpu/nova-core/mctp.rs            | 105 +++++
 drivers/gpu/nova-core/nova_core.rs       |   2 +
 drivers/gpu/nova-core/regs.rs            | 103 ++++-
 rust/kernel/ptr.rs                       |  27 ++
 rust/kernel/sizes.rs                     |  51 +++
 scripts/Makefile.build                   |   2 +-
 26 files changed, 2098 insertions(+), 309 deletions(-)
 create mode 100644 drivers/gpu/nova-core/falcon/fsp.rs
 create mode 100644 drivers/gpu/nova-core/fb/hal/gb100.rs
 create mode 100644 drivers/gpu/nova-core/fb/hal/gb202.rs
 create mode 100644 drivers/gpu/nova-core/fb/hal/gh100.rs
 create mode 100644 drivers/gpu/nova-core/firmware/fsp.rs
 create mode 100644 drivers/gpu/nova-core/fsp.rs
 create mode 100644 drivers/gpu/nova-core/mctp.rs

base-commit: a95f71ad3e2e224277508e006580c333d0a5fe36
prerequisite-patch-id: 1ec0faa352dab8fa7c0f209474b75cd21931340d
-- 
2.53.0
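The cover letter mentions adding const_align_up() to rust/ and using it in nova-core. As a rough standalone sketch (not the actual rust/kernel/ptr.rs implementation, whose exact signature and error handling may differ), a const-evaluable align-up for power-of-two alignments looks like:

```rust
// Hypothetical sketch of a const align-up helper. Rounds `value` up to
// the next multiple of `align`, where `align` must be a power of two.
// The real kernel helper may differ in name, signature, and overflow
// handling.
pub const fn const_align_up(value: usize, align: usize) -> usize {
    assert!(align.is_power_of_two());
    (value + align - 1) & !(align - 1)
}

fn main() {
    // 0x1001 rounded up to a 4 KiB boundary is 0x2000.
    assert_eq!(const_align_up(0x1001, 0x1000), 0x2000);
    // Already-aligned values are unchanged.
    assert_eq!(const_align_up(0x1000, 0x1000), 0x1000);
    println!("ok");
}
```

Because the function is `const fn`, it can size arrays and initialize constants at compile time, which is what motivates the inline_const feature mentioned above.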
[PATCH v5 00/38] gpu: nova-core: firmware: Hopper/Blackwell support
Add the FSP boot path for Hopper and Blackwell GPUs. These architectures
use FSP with FMC firmware for Chain of Trust boot, rather than SEC2.
boot() now dispatches to boot_via_sec2() or boot_via_fsp() based on
architecture. The SEC2 path keeps its original command ordering. The
FSP path sends SetSystemInfo/SetRegistry after GSP becomes active. The
GSP sequencer only runs for SEC2-based architectures.

Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 drivers/gpu/nova-core/firmware/fsp.rs |   2 -
 drivers/gpu/nova-core/fsp.rs          |   5 -
 drivers/gpu/nova-core/gsp/boot.rs     | 190 +++++++++++++++++++-------
 3 files changed, 144 insertions(+), 53 deletions(-)

diff --git a/drivers/gpu/nova-core/firmware/fsp.rs b/drivers/gpu/nova-core/firmware/fsp.rs
index bb35f363b998..0e72f1378ef0 100644
--- a/drivers/gpu/nova-core/firmware/fsp.rs
+++ b/drivers/gpu/nova-core/firmware/fsp.rs
@@ -13,7 +13,6 @@
     gpu::Chipset, //
 };
 
-#[expect(dead_code)]
 pub(crate) struct FspFirmware {
     /// FMC firmware image data (only the "image" ELF section).
     pub(crate) fmc_image: DmaObject,
@@ -22,7 +21,6 @@ pub(crate) struct FspFirmware {
 }
 
 impl FspFirmware {
-    #[expect(dead_code)]
     pub(crate) fn new(
         dev: &device::Device<device::Bound>,
         chipset: Chipset,
diff --git a/drivers/gpu/nova-core/fsp.rs b/drivers/gpu/nova-core/fsp.rs
index c66ad0a102a6..3749b5e3a677 100644
--- a/drivers/gpu/nova-core/fsp.rs
+++ b/drivers/gpu/nova-core/fsp.rs
@@ -238,7 +238,6 @@ pub(crate) struct FmcBootArgs<'a> {
 impl<'a> FmcBootArgs<'a> {
     /// Build FMC boot arguments, allocating the DMA-coherent boot parameter
     /// structure that FSP will read.
-    #[expect(dead_code)]
     #[allow(clippy::too_many_arguments)]
     pub(crate) fn new(
         dev: &device::Device<device::Bound>,
@@ -287,7 +286,6 @@ pub(crate) fn new(
     /// DMA address of the FMC boot parameters, needed after boot for lockdown
     /// release polling.
-    #[expect(dead_code)]
     pub(crate) fn boot_params_dma_handle(&self) -> u64 {
         self.fmc_boot_params.dma_handle()
     }
@@ -301,7 +299,6 @@ impl Fsp {
     ///
     /// Polls the thermal scratch register until FSP signals boot completion
     /// or timeout occurs.
-    #[expect(dead_code)]
     pub(crate) fn wait_secure_boot(
         dev: &device::Device<device::Bound>,
         bar: &crate::driver::Bar0,
@@ -331,7 +328,6 @@ pub(crate) fn wait_secure_boot(
     ///
     /// Extracts real cryptographic signatures from FMC ELF32 firmware sections.
     /// Returns signatures in a heap-allocated structure to prevent stack overflow.
-    #[expect(dead_code)]
     pub(crate) fn extract_fmc_signatures(
         dev: &device::Device<device::Bound>,
         fmc_fw_data: &[u8],
@@ -391,7 +387,6 @@ pub(crate) fn extract_fmc_signatures(
     ///
     /// Builds the COT message from the pre-configured [`FmcBootArgs`], sends it
     /// to FSP, and waits for the response.
-    #[expect(dead_code)]
     pub(crate) fn boot_fmc(
         dev: &device::Device<device::Bound>,
         bar: &crate::driver::Bar0,
diff --git a/drivers/gpu/nova-core/gsp/boot.rs b/drivers/gpu/nova-core/gsp/boot.rs
index 0db2c58e0765..1fdcb72ce163 100644
--- a/drivers/gpu/nova-core/gsp/boot.rs
+++ b/drivers/gpu/nova-core/gsp/boot.rs
@@ -13,6 +13,7 @@ use crate::{
     driver::Bar0,
     falcon::{
+        fsp::Fsp as FspEngine,
         gsp::Gsp,
         sec2::Sec2,
         Falcon,
@@ -24,6 +25,7 @@
         BooterFirmware,
         BooterKind, //
     },
+    fsp::FspFirmware,
     fwsec::{
         FwsecCommand,
         FwsecFirmware, //
@@ -31,9 +33,17 @@
         gsp::GspFirmware,
         FIRMWARE_VERSION, //
     },
-    gpu::Chipset,
+    fsp::{
+        FmcBootArgs,
+        Fsp, //
+    },
+    gpu::{
+        Architecture,
+        Chipset, //
+    },
     gsp::{
         commands,
+        fw::LibosMemoryRegionInitArgument,
         sequencer::{
             GspSequencer,
             GspSequencerParams, //
@@ -188,8 +198,83 @@ fn run_booter(
         booter.run(dev, bar, sec2_falcon, wpr_meta)
     }
 
+    /// Boot GSP via SEC2 booter firmware (Turing/Ampere/Ada path).
+    ///
+    /// This path uses FWSEC-FRTS to set up WPR2, then boots GSP directly,
+    /// then uses SEC2 to run the booter firmware.
+    #[allow(clippy::too_many_arguments)]
+    fn boot_via_sec2(
+        dev: &device::Device<device::Bound>,
+        bar: &Bar0,
+        chipset: Chipset,
+        gsp_falcon: &Falcon<Gsp>,
+        sec2_falcon: &Falcon<Sec2>,
+        fb_layout: &FbLayout,
+        libos: &CoherentAllocation<LibosMemoryRegionInitArgument>,
+        wpr_meta: &CoherentAllocation<GspFwWprMeta>,
+    ) -> Result {
+        // Run FWSEC-FRTS to set up the WPR2 region
+        let bios = Vbios::new(dev, bar)?;
+        Self::run_fwsec_frts(dev, gsp_falcon, bar, &bios, fb_layout)?;
+
+        // Reset and boot GSP before SEC2
+        gsp_falcon.reset(bar)?;
+        let libos_handle = libos.dma_handle();
+        let (mbox0, mbox1) = gsp_falcon.boot(
+            bar,
+            Some(libos_handle as u32),
+            Some((libos_handle >> 32) as u32),
+        )?;
+        dev_dbg!(dev, "GSP MBOX0: {:#x}, MBOX1: {:#x}\n", mbox0, mbox1);
+        dev_dbg!(
+            dev,
+            "Using SEC2 to load and run the booter_load firmware...\n"
+        );
+
+        // Run booter via SEC2
+        Self::run_booter(dev, bar, chipset, sec2_falcon, wpr_meta)
+    }
+
+    /// Boot GSP via FSP Chain of Trust (Hopper/Blackwell+ path).
+    ///
+    /// This path uses FSP to establish a chain of trust and boot GSP-FMC. FSP handles
+    /// the GSP boot internally - no manual GSP reset/boot is needed.
+    fn boot_via_fsp(
+        dev: &device::Device<device::Bound>,
+        bar: &Bar0,
+        chipset: Chipset,
+        gsp_falcon: &Falcon<Gsp>,
+        wpr_meta: &CoherentAllocation<GspFwWprMeta>,
+        libos: &CoherentAllocation<LibosMemoryRegionInitArgument>,
+    ) -> Result {
+        let fsp_falcon = Falcon::<FspEngine>::new(dev, chipset)?;
+
+        Fsp::wait_secure_boot(dev, bar, chipset.arch())?;
+
+        let fsp_fw = FspFirmware::new(dev, chipset, FIRMWARE_VERSION)?;
+
+        let signatures = Fsp::extract_fmc_signatures(dev, &fsp_fw.fmc_full)?;
+
+        let args = FmcBootArgs::new(
+            dev,
+            chipset,
+            &fsp_fw.fmc_image,
+            wpr_meta.dma_handle(),
+            core::mem::size_of::<GspFwWprMeta>() as u32,
+            libos.dma_handle(),
+            false,
+            &signatures,
+        )?;
+
+        Fsp::boot_fmc(dev, bar, &fsp_falcon, &args)?;
+
+        let fmc_boot_params_addr = args.boot_params_dma_handle();
+        Self::wait_for_gsp_lockdown_release(dev, bar, gsp_falcon, fmc_boot_params_addr)?;
+
+        Ok(())
+    }
+
     /// Wait for GSP lockdown to be released after FSP Chain of Trust.
-    #[expect(dead_code)]
     fn wait_for_gsp_lockdown_release(
         dev: &device::Device<device::Bound>,
         bar: &Bar0,
@@ -233,45 +318,49 @@ pub(crate) fn boot(
         sec2_falcon: &Falcon<Sec2>,
     ) -> Result {
         let dev = pdev.as_ref();
-
-        let bios = Vbios::new(dev, bar)?;
+        let uses_sec2 = matches!(
+            chipset.arch(),
+            Architecture::Turing | Architecture::Ampere | Architecture::Ada
+        );
 
         let gsp_fw = KBox::pin_init(GspFirmware::new(dev, chipset, FIRMWARE_VERSION), GFP_KERNEL)?;
 
         let fb_layout = FbLayout::new(chipset, bar, &gsp_fw)?;
         dev_dbg!(dev, "{:#x?}\n", fb_layout);
 
-        Self::run_fwsec_frts(dev, gsp_falcon, bar, &bios, &fb_layout)?;
-
         let wpr_meta =
             CoherentAllocation::<GspFwWprMeta>::alloc_coherent(dev, 1, GFP_KERNEL | __GFP_ZERO)?;
         dma_write!(wpr_meta[0] = GspFwWprMeta::new(&gsp_fw, &fb_layout))?;
 
-        self.cmdq
-            .send_command(bar, commands::SetSystemInfo::new(pdev, chipset))?;
-        self.cmdq.send_command(bar, commands::SetRegistry::new())?;
-
-        gsp_falcon.reset(bar)?;
-        let libos_handle = self.libos.dma_handle();
-        let (mbox0, mbox1) = gsp_falcon.boot(
-            bar,
-            Some(libos_handle as u32),
-            Some((libos_handle >> 32) as u32),
-        )?;
-        dev_dbg!(
-            pdev,
-            "GSP MBOX0: {:#x}, MBOX1: {:#x}\n",
-            mbox0,
-            mbox1
-        );
-
-        dev_dbg!(
-            pdev,
-            "Using SEC2 to load and run the booter_load firmware...\n"
-        );
+        // Architecture-specific boot path
+        if uses_sec2 {
+            // SEC2 path: send commands before GSP reset/boot (original order).
+            self.cmdq
+                .send_command(bar, commands::SetSystemInfo::new(pdev, chipset))?;
+            self.cmdq.send_command(bar, commands::SetRegistry::new())?;
 
-        Self::run_booter(dev, bar, chipset, sec2_falcon, &wpr_meta)?;
+            Self::boot_via_sec2(
+                dev,
+                bar,
+                chipset,
+                gsp_falcon,
+                sec2_falcon,
+                &fb_layout,
+                &self.libos,
+                &wpr_meta,
+            )?;
+        } else {
+            Self::boot_via_fsp(
+                dev,
+                bar,
+                chipset,
+                gsp_falcon,
+                &wpr_meta,
+                &self.libos,
+            )?;
+        }
 
+        // Common post-boot initialization
         gsp_falcon.write_os_version(bar, gsp_fw.bootloader.app_version);
 
         // Poll for RISC-V to become active before running sequencer
@@ -282,22 +371,31 @@ pub(crate) fn boot(
             Delta::from_secs(5),
         )?;
 
-        dev_dbg!(
-            pdev,
-            "RISC-V active? {}\n",
-            gsp_falcon.is_riscv_active(bar),
-        );
+        dev_dbg!(dev, "RISC-V active? {}\n", gsp_falcon.is_riscv_active(bar));
 
-        // Create and run the GSP sequencer.
-        let seq_params = GspSequencerParams {
-            bootloader_app_version: gsp_fw.bootloader.app_version,
-            libos_dma_handle: libos_handle,
-            gsp_falcon,
-            sec2_falcon,
-            dev: pdev.as_ref().into(),
-            bar,
-        };
-        GspSequencer::run(&mut self.cmdq, seq_params)?;
+        // For FSP path, send commands after GSP becomes active.
+        if matches!(
+            chipset.arch(),
+            Architecture::Hopper | Architecture::Blackwell
+        ) {
+            self.cmdq
+                .send_command(bar, commands::SetSystemInfo::new(pdev, chipset))?;
+            self.cmdq.send_command(bar, commands::SetRegistry::new())?;
+        }
+
+        // SEC2-based architectures need to run the GSP sequencer
+        if uses_sec2 {
+            let libos_handle = self.libos.dma_handle();
+            let seq_params = GspSequencerParams {
+                bootloader_app_version: gsp_fw.bootloader.app_version,
+                libos_dma_handle: libos_handle,
+                gsp_falcon,
+                sec2_falcon,
+                dev: dev.into(),
+                bar,
+            };
+            GspSequencer::run(&mut self.cmdq, seq_params)?;
+        }
 
         // Wait until GSP is fully initialized.
         commands::wait_gsp_init_done(&mut self.cmdq)?;
@@ -305,8 +403,8 @@ pub(crate) fn boot(
         // Obtain and display basic GPU information.
         let info = commands::get_gsp_info(&mut self.cmdq, bar)?;
         match info.gpu_name() {
-            Ok(name) => dev_info!(pdev, "GPU name: {}\n", name),
-            Err(e) => dev_warn!(pdev, "GPU name unavailable: {:?}\n", e),
+            Ok(name) => dev_info!(dev, "GPU name: {}\n", name),
+            Err(e) => dev_warn!(dev, "GPU name unavailable: {:?}\n", e),
         }
 
         Ok(())
-- 
2.53.0
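The architecture dispatch in boot() above can be reduced to a small standalone sketch. Types are collapsed to a plain enum here for illustration; the real code threads the device, BAR, and firmware state through each path:

```rust
// Simplified model of the boot() dispatch: Turing/Ampere/Ada use the
// SEC2 booter path, Hopper/Blackwell and later use the FSP Chain of
// Trust path. The enum and helper names are illustrative only.
#[derive(Clone, Copy)]
enum Architecture {
    Turing,
    Ampere,
    Ada,
    Hopper,
    Blackwell,
}

// Mirrors the `uses_sec2` matches! check in the patch.
fn uses_sec2(arch: Architecture) -> bool {
    matches!(
        arch,
        Architecture::Turing | Architecture::Ampere | Architecture::Ada
    )
}

// Stand-in for the boot_via_sec2()/boot_via_fsp() branch.
fn boot_path(arch: Architecture) -> &'static str {
    if uses_sec2(arch) {
        "sec2"
    } else {
        "fsp"
    }
}

fn main() {
    assert_eq!(boot_path(Architecture::Ampere), "sec2");
    assert_eq!(boot_path(Architecture::Blackwell), "fsp");
    println!("ok");
}
```

Keeping the predicate in one place, as the patch does with `uses_sec2`, means the pre-boot command ordering, the boot path, and the sequencer decision all stay consistent with each other.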
{ "author": "John Hubbard <jhubbard@nvidia.com>", "date": "Fri, 20 Feb 2026 18:09:50 -0800", "is_openbsd": false, "thread_id": "DGPUWPJCFPZH.4NVAAQS1I6HR@garyguo.net.mbox.gz" }
Hopper, Blackwell and later GPUs require a larger heap for WPR2.

Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 drivers/gpu/nova-core/fb.rs     |  2 +-
 drivers/gpu/nova-core/gsp/fw.rs | 74 ++++++++++++++++++++++++---------
 2 files changed, 55 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/nova-core/fb.rs b/drivers/gpu/nova-core/fb.rs
index 8b3ba9c9f464..08e6dd815352 100644
--- a/drivers/gpu/nova-core/fb.rs
+++ b/drivers/gpu/nova-core/fb.rs
@@ -247,7 +247,7 @@ pub(crate) fn new(chipset: Chipset, bar: &Bar0, gsp_fw: &GspFirmware) -> Result<
         let wpr2_heap = {
             const WPR2_HEAP_DOWN_ALIGN: Alignment = Alignment::new::<SZ_1M>();
             let wpr2_heap_size =
-                gsp::LibosParams::from_chipset(chipset).wpr_heap_size(chipset, fb.end);
+                gsp::LibosParams::from_chipset(chipset).wpr_heap_size(chipset, fb.end)?;
             let wpr2_heap_addr = (elf.start - wpr2_heap_size).align_down(WPR2_HEAP_DOWN_ALIGN);
 
             FbRange(wpr2_heap_addr..(elf.start).align_down(WPR2_HEAP_DOWN_ALIGN))
diff --git a/drivers/gpu/nova-core/gsp/fw.rs b/drivers/gpu/nova-core/gsp/fw.rs
index 086153edfa86..7fa9d3b1a592 100644
--- a/drivers/gpu/nova-core/gsp/fw.rs
+++ b/drivers/gpu/nova-core/gsp/fw.rs
@@ -49,32 +49,52 @@ enum GspFwHeapParams {}
 /// Minimum required alignment for the GSP heap.
 const GSP_HEAP_ALIGNMENT: Alignment = Alignment::new::<{ 1 << 20 }>();
 
+// These constants override the generated bindings for architecture-specific heap sizing.
+// See Open RM: kgspCalculateGspFwHeapSize and related functions.
+//
+// 14MB for Hopper/Blackwell+.
+const GSP_FW_HEAP_PARAM_BASE_RM_SIZE_GH100: u64 = 14 * num::usize_as_u64(SZ_1M);
+// 142MB client alloc for ~188MB total.
+const GSP_FW_HEAP_PARAM_CLIENT_ALLOC_SIZE_GH100: u64 = 142 * num::usize_as_u64(SZ_1M);
+// Hopper/Blackwell+ minimum heap size: 170MB (88 + 12 + 70).
+// See Open RM: GSP_FW_HEAP_SIZE_OVERRIDE_LIBOS3_BAREMETAL_MIN_MB for the base 88MB,
+// plus Hopper+ additions in kgspCalculateGspFwHeapSize_GH100.
+const GSP_FW_HEAP_SIZE_OVERRIDE_LIBOS3_BAREMETAL_MIN_MB_HOPPER: u64 = 170;
+
 impl GspFwHeapParams {
     /// Returns the amount of GSP-RM heap memory used during GSP-RM boot and initialization (up to
     /// and including the first client subdevice allocation).
-    fn base_rm_size(_chipset: Chipset) -> u64 {
-        // TODO: this needs to be updated to return the correct value for Hopper+ once support for
-        // them is added:
-        // u64::from(bindings::GSP_FW_HEAP_PARAM_BASE_RM_SIZE_GH100)
-        u64::from(bindings::GSP_FW_HEAP_PARAM_BASE_RM_SIZE_TU10X)
+    fn base_rm_size(chipset: Chipset) -> u64 {
+        use crate::gpu::Architecture;
+        match chipset.arch() {
+            Architecture::Hopper | Architecture::Blackwell => {
+                GSP_FW_HEAP_PARAM_BASE_RM_SIZE_GH100
+            }
+            _ => u64::from(bindings::GSP_FW_HEAP_PARAM_BASE_RM_SIZE_TU10X),
+        }
     }
 
     /// Returns the amount of heap memory required to support a single channel allocation.
-    fn client_alloc_size() -> u64 {
-        u64::from(bindings::GSP_FW_HEAP_PARAM_CLIENT_ALLOC_SIZE)
-            .align_up(GSP_HEAP_ALIGNMENT)
-            .unwrap_or(u64::MAX)
+    fn client_alloc_size(chipset: Chipset) -> Result<u64> {
+        use crate::gpu::Architecture;
+        let size = match chipset.arch() {
+            Architecture::Hopper | Architecture::Blackwell => {
+                GSP_FW_HEAP_PARAM_CLIENT_ALLOC_SIZE_GH100
+            }
+            _ => u64::from(bindings::GSP_FW_HEAP_PARAM_CLIENT_ALLOC_SIZE),
+        };
+        size.align_up(GSP_HEAP_ALIGNMENT).ok_or(EINVAL)
     }
 
     /// Returns the amount of memory to reserve for management purposes for a framebuffer of size
     /// `fb_size`.
-    fn management_overhead(fb_size: u64) -> u64 {
+    fn management_overhead(fb_size: u64) -> Result<u64> {
         let fb_size_gb = fb_size.div_ceil(u64::from_safe_cast(kernel::sizes::SZ_1G));
 
         u64::from(bindings::GSP_FW_HEAP_PARAM_SIZE_PER_GB_FB)
             .saturating_mul(fb_size_gb)
             .align_up(GSP_HEAP_ALIGNMENT)
-            .unwrap_or(u64::MAX)
+            .ok_or(EINVAL)
     }
 }
 
@@ -106,29 +126,43 @@ impl LibosParams {
             * num::usize_as_u64(SZ_1M),
     };
 
+    /// Hopper/Blackwell+ GPUs need a larger minimum heap size than the bindings specify.
+    /// The r570 bindings set LIBOS3_BAREMETAL_MIN_MB to 88MB, but Hopper/Blackwell+ actually
+    /// requires 170MB (88 + 12 + 70).
+    const LIBOS_HOPPER: LibosParams = LibosParams {
+        carveout_size: num::u32_as_u64(bindings::GSP_FW_HEAP_PARAM_OS_SIZE_LIBOS3_BAREMETAL),
+        allowed_heap_size: GSP_FW_HEAP_SIZE_OVERRIDE_LIBOS3_BAREMETAL_MIN_MB_HOPPER
+            * num::usize_as_u64(SZ_1M)
+            ..num::u32_as_u64(bindings::GSP_FW_HEAP_SIZE_OVERRIDE_LIBOS3_BAREMETAL_MAX_MB)
+                * num::usize_as_u64(SZ_1M),
+    };
+
     /// Returns the libos parameters corresponding to `chipset`.
     pub(crate) fn from_chipset(chipset: Chipset) -> &'static LibosParams {
-        if chipset < Chipset::GA102 {
-            &Self::LIBOS2
-        } else {
-            &Self::LIBOS3
+        use crate::gpu::Architecture;
+        match chipset.arch() {
+            Architecture::Turing => &Self::LIBOS2,
+            Architecture::Ampere if chipset == Chipset::GA100 => &Self::LIBOS2,
+            Architecture::Ampere | Architecture::Ada => &Self::LIBOS3,
+            Architecture::Hopper | Architecture::Blackwell => &Self::LIBOS_HOPPER,
         }
     }
 
     /// Returns the amount of memory (in bytes) to allocate for the WPR heap for a framebuffer size
    /// of `fb_size` (in bytes) for `chipset`.
-    pub(crate) fn wpr_heap_size(&self, chipset: Chipset, fb_size: u64) -> u64 {
+    pub(crate) fn wpr_heap_size(&self, chipset: Chipset, fb_size: u64) -> Result<u64> {
         // The WPR heap will contain the following:
         // LIBOS carveout,
-        self.carveout_size
+        Ok(self
+            .carveout_size
             // RM boot working memory,
             .saturating_add(GspFwHeapParams::base_rm_size(chipset))
             // One RM client,
-            .saturating_add(GspFwHeapParams::client_alloc_size())
+            .saturating_add(GspFwHeapParams::client_alloc_size(chipset)?)
             // Overhead for memory management.
-            .saturating_add(GspFwHeapParams::management_overhead(fb_size))
+            .saturating_add(GspFwHeapParams::management_overhead(fb_size)?)
             // Clamp to the supported heap sizes.
-            .clamp(self.allowed_heap_size.start, self.allowed_heap_size.end - 1)
+            .clamp(self.allowed_heap_size.start, self.allowed_heap_size.end - 1))
     }
 }
-- 
2.53.0
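The heap sizing in wpr_heap_size() is plain arithmetic (carveout + base RM + client alloc + per-framebuffer overhead, clamped to an allowed range) and can be checked in a standalone sketch. The per-GB overhead value below is illustrative only (the real one comes from the generated GSP_FW_HEAP_PARAM_SIZE_PER_GB_FB binding), and the 1MB align-up of individual terms is omitted since every input here is already MB-aligned:

```rust
// Back-of-envelope model of the WPR2 heap sizing above. All parameters
// are hypothetical stand-ins for the kernel's bindings and constants.
const MB: u64 = 1 << 20;
const GB: u64 = 1 << 30;

fn wpr_heap_size(
    carveout: u64,        // LIBOS carveout
    base_rm: u64,         // RM boot working memory
    client_alloc: u64,    // one RM client
    per_gb_overhead: u64, // memory-management overhead per GB of FB
    fb_size: u64,
    min: u64,             // allowed_heap_size.start
    max: u64,             // allowed_heap_size.end (exclusive)
) -> u64 {
    let overhead = per_gb_overhead * fb_size.div_ceil(GB);
    (carveout + base_rm + client_alloc + overhead).clamp(min, max - 1)
}

fn main() {
    // Hopper/Blackwell-style inputs: 14MB base RM + 142MB client alloc,
    // 1MB/GB overhead on a 24GB framebuffer, clamped to at least 170MB.
    let size = wpr_heap_size(0, 14 * MB, 142 * MB, MB, 24 * GB, 170 * MB, 256 * MB);
    // 14 + 142 + 24 = 180MB, inside the [170MB, 256MB) window.
    assert_eq!(size, 180 * MB);
    println!("{} MB", size / MB);
}
```

The clamp is what the raised Hopper/Blackwell minimum acts on: with a small framebuffer the summed terms fall below 170MB and the minimum wins, which is exactly why the patch overrides the 88MB value from the r570 bindings.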
{ "author": "John Hubbard <jhubbard@nvidia.com>", "date": "Fri, 20 Feb 2026 18:09:46 -0800", "is_openbsd": false, "thread_id": "DGPUWPJCFPZH.4NVAAQS1I6HR@garyguo.net.mbox.gz" }
drivers/gpu/nova-core/firmware/fsp.rs create mode 100644 drivers/gpu/nova-core/fsp.rs create mode 100644 drivers/gpu/nova-core/mctp.rs base-commit: a95f71ad3e2e224277508e006580c333d0a5fe36 prerequisite-patch-id: 1ec0faa352dab8fa7c0f209474b75cd21931340d -- 2.53.0
[PATCH v5 00/38] gpu: nova-core: firmware: Hopper/Blackwell support
On Hopper and Blackwell, FSP boots GSP with hardware lockdown enabled.
After FSP Chain of Trust completes, the driver must poll for lockdown
release before proceeding with GSP initialization. Add the register bit
and helper functions needed for this polling.

Cc: Gary Guo <gary@garyguo.net>
Cc: Timur Tabi <ttabi@nvidia.com>
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 drivers/gpu/nova-core/gsp/boot.rs | 80 ++++++++++++++++++++++++++++++-
 drivers/gpu/nova-core/regs.rs     |  1 +
 2 files changed, 80 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/nova-core/gsp/boot.rs b/drivers/gpu/nova-core/gsp/boot.rs
index 7b177756d16d..5f3207bf7797 100644
--- a/drivers/gpu/nova-core/gsp/boot.rs
+++ b/drivers/gpu/nova-core/gsp/boot.rs
@@ -15,7 +15,8 @@
     falcon::{
         gsp::Gsp,
         sec2::Sec2,
-        Falcon, //
+        Falcon,
+        FalconEngine, //
     },
     fb::FbLayout,
     firmware::{
@@ -43,6 +44,54 @@
     vbios::Vbios,
 };
 
+/// GSP lockdown pattern written by firmware to mbox0 while RISC-V branch privilege
+/// lockdown is active. The low byte varies, the upper 24 bits are fixed.
+const GSP_LOCKDOWN_PATTERN: u32 = 0xbadf4100;
+const GSP_LOCKDOWN_MASK: u32 = 0xffffff00;
+
+/// GSP falcon mailbox state, used to track lockdown release status.
+struct GspMbox {
+    mbox0: u32,
+    mbox1: u32,
+}
+
+impl GspMbox {
+    /// Read both mailboxes from the GSP falcon.
+    fn read(gsp_falcon: &Falcon<Gsp>, bar: &Bar0) -> Self {
+        Self {
+            mbox0: gsp_falcon.read_mailbox0(bar),
+            mbox1: gsp_falcon.read_mailbox1(bar),
+        }
+    }
+
+    /// Returns true if the lockdown pattern is present in mbox0.
+    fn is_locked_down(&self) -> bool {
+        self.mbox0 != 0 && (self.mbox0 & GSP_LOCKDOWN_MASK) == GSP_LOCKDOWN_PATTERN
+    }
+
+    /// Combines mailbox0 and mailbox1 into a 64-bit address.
+    fn combined_addr(&self) -> u64 {
+        (u64::from(self.mbox1) << 32) | u64::from(self.mbox0)
+    }
+
+    /// Returns true if GSP lockdown has been released.
+    ///
+    /// Checks the lockdown pattern, validates the boot params address,
+    /// and verifies the HWCFG2 lockdown bit is clear.
+    fn lockdown_released(&self, bar: &Bar0, fmc_boot_params_addr: u64) -> bool {
+        if self.is_locked_down() {
+            return false;
+        }
+
+        if self.mbox0 != 0 && self.combined_addr() != fmc_boot_params_addr {
+            return true;
+        }
+
+        let hwcfg2 = regs::NV_PFALCON_FALCON_HWCFG2::read(bar, &crate::falcon::gsp::Gsp::ID);
+        !hwcfg2.riscv_br_priv_lockdown()
+    }
+}
+
 impl super::Gsp {
     /// Helper function to load and run the FWSEC-FRTS firmware and confirm that it has properly
     /// created the WPR2 region.
@@ -139,6 +188,35 @@
         booter.run(dev, bar, sec2_falcon, wpr_meta)
     }
 
+    /// Wait for GSP lockdown to be released after FSP Chain of Trust.
+    #[expect(dead_code)]
+    fn wait_for_gsp_lockdown_release(
+        dev: &device::Device<device::Bound>,
+        bar: &Bar0,
+        gsp_falcon: &Falcon<Gsp>,
+        fmc_boot_params_addr: u64,
+    ) -> Result {
+        dev_dbg!(dev, "Waiting for GSP lockdown release\n");
+
+        let mbox = read_poll_timeout(
+            || Ok(GspMbox::read(gsp_falcon, bar)),
+            |mbox| mbox.lockdown_released(bar, fmc_boot_params_addr),
+            Delta::from_millis(10),
+            Delta::from_millis(4000),
+        )
+        .inspect_err(|_| {
+            dev_err!(dev, "GSP lockdown release timeout\n");
+        })?;
+
+        if mbox.mbox0 != 0 {
+            dev_err!(dev, "GSP-FMC boot failed (mbox: {:#x})\n", mbox.mbox0);
+            return Err(EIO);
+        }
+
+        dev_dbg!(dev, "GSP lockdown released\n");
+        Ok(())
+    }
+
     /// Attempt to boot the GSP.
     ///
     /// This is a GPU-dependent and complex procedure that involves loading firmware files from
diff --git a/drivers/gpu/nova-core/regs.rs b/drivers/gpu/nova-core/regs.rs
index 91911f9b32ca..8e4922399569 100644
--- a/drivers/gpu/nova-core/regs.rs
+++ b/drivers/gpu/nova-core/regs.rs
@@ -321,6 +321,7 @@ pub(crate) fn vga_workspace_addr(self) -> Option<u64> {
 
 register!(NV_PFALCON_FALCON_HWCFG2 @ PFalconBase[0x000000f4] {
     10:10   riscv as bool;
     12:12   mem_scrubbing as bool, "Set to 0 after memory scrubbing is completed";
+    13:13   riscv_br_priv_lockdown as bool, "RISC-V branch privilege lockdown bit";
     31:31   reset_ready as bool, "Signal indicating that reset is completed (GA102+)";
 });
-- 
2.53.0
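The GspMbox decoding above is pure bit manipulation, so it can be checked in isolation. A minimal standalone sketch of the same logic (plain Rust; the kernel's Falcon/Bar0 register access is replaced by raw mailbox values, so this is an illustration, not driver code):

```rust
/// Pattern written to mbox0 while RISC-V branch privilege lockdown is active;
/// the low byte varies, the upper 24 bits are fixed.
const GSP_LOCKDOWN_PATTERN: u32 = 0xbadf4100;
const GSP_LOCKDOWN_MASK: u32 = 0xffffff00;

/// Returns true if `mbox0` carries the lockdown pattern.
fn is_locked_down(mbox0: u32) -> bool {
    mbox0 != 0 && (mbox0 & GSP_LOCKDOWN_MASK) == GSP_LOCKDOWN_PATTERN
}

/// Combines mailbox1 (high half) and mailbox0 (low half) into a 64-bit address.
fn combined_addr(mbox0: u32, mbox1: u32) -> u64 {
    (u64::from(mbox1) << 32) | u64::from(mbox0)
}

fn main() {
    // Any low byte matches while the upper 24 bits carry the pattern.
    assert!(is_locked_down(0xbadf4100));
    assert!(is_locked_down(0xbadf41ff));
    // A cleared mailbox, or an unrelated value, is not "locked down".
    assert!(!is_locked_down(0));
    assert!(!is_locked_down(0xdeadbeef));
    // mbox1:mbox0 concatenation.
    assert_eq!(combined_addr(0x8000_0000, 0x1), 0x1_8000_0000);
}
```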
From: John Hubbard <jhubbard@nvidia.com>
Date: Fri, 20 Feb 2026 18:09:48 -0800
Add dedicated FB HALs for Hopper (GH100) and Blackwell (GB100) with
architecture-specific non-WPR heap sizes. Hopper uses 2 MiB, Blackwell
uses 2 MiB + 128 KiB. These are needed for the larger reserved memory
regions that Hopper/Blackwell GPUs require.

Also adds the non_wpr_heap_size() method to the FbHal trait, and the
total_reserved_size field to FbLayout.

Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 drivers/gpu/nova-core/fb.rs           | 16 ++++++++---
 drivers/gpu/nova-core/fb/hal.rs       | 16 ++++++++---
 drivers/gpu/nova-core/fb/hal/ga102.rs |  2 +-
 drivers/gpu/nova-core/fb/hal/gb100.rs | 38 +++++++++++++++++++++++++++
 drivers/gpu/nova-core/fb/hal/gh100.rs | 38 +++++++++++++++++++++++++++
 5 files changed, 102 insertions(+), 8 deletions(-)
 create mode 100644 drivers/gpu/nova-core/fb/hal/gb100.rs
 create mode 100644 drivers/gpu/nova-core/fb/hal/gh100.rs

diff --git a/drivers/gpu/nova-core/fb.rs b/drivers/gpu/nova-core/fb.rs
index 0e3519e5ccc0..8b3ba9c9f464 100644
--- a/drivers/gpu/nova-core/fb.rs
+++ b/drivers/gpu/nova-core/fb.rs
@@ -31,7 +31,7 @@
     regs,
 };
 
-mod hal;
+pub(crate) mod hal;
 
 /// Type holding the sysmem flush memory page, a page of memory to be written into the
 /// `NV_PFB_NISO_FLUSH_SYSMEM_ADDR*` registers and used to maintain memory coherency.
@@ -99,6 +99,15 @@ pub(crate) fn unregister(&self, bar: &Bar0) {
     }
 }
 
+/// Calculate non-WPR heap size based on chipset architecture.
+/// This matches the logic used in FSP for consistency.
+pub(crate) fn calc_non_wpr_heap_size(chipset: Chipset) -> u64 {
+    hal::fb_hal(chipset)
+        .non_wpr_heap_size()
+        .map(u64::from)
+        .unwrap_or(usize_as_u64(SZ_1M))
+}
+
 pub(crate) struct FbRange(Range<u64>);
 
 impl FbRange {
@@ -253,9 +262,8 @@ pub(crate) fn new(chipset: Chipset, bar: &Bar0, gsp_fw: &GspFirmware) -> Result<
         };
 
         let heap = {
-            const HEAP_SIZE: u64 = usize_as_u64(SZ_1M);
-
-            FbRange(wpr2.start - HEAP_SIZE..wpr2.start)
+            let heap_size = calc_non_wpr_heap_size(chipset);
+            FbRange(wpr2.start - heap_size..wpr2.start)
         };
 
         Ok(Self {
diff --git a/drivers/gpu/nova-core/fb/hal.rs b/drivers/gpu/nova-core/fb/hal.rs
index d33ca0f96417..ebd12247f771 100644
--- a/drivers/gpu/nova-core/fb/hal.rs
+++ b/drivers/gpu/nova-core/fb/hal.rs
@@ -12,6 +12,8 @@
 
 mod ga100;
 mod ga102;
+mod gb100;
+mod gh100;
 mod tu102;
 
 pub(crate) trait FbHal {
@@ -28,14 +30,22 @@ pub(crate) trait FbHal {
 
     /// Returns the VRAM size, in bytes.
     fn vidmem_size(&self, bar: &Bar0) -> u64;
+
+    /// Returns the non-WPR heap size for GPUs that need large reserved memory.
+    ///
+    /// Returns `None` for GPUs that don't need extra reserved memory.
+    fn non_wpr_heap_size(&self) -> Option<u32> {
+        None
+    }
 }
 
 /// Returns the HAL corresponding to `chipset`.
-pub(super) fn fb_hal(chipset: Chipset) -> &'static dyn FbHal {
+pub(crate) fn fb_hal(chipset: Chipset) -> &'static dyn FbHal {
     match chipset.arch() {
         Architecture::Turing => tu102::TU102_HAL,
         Architecture::Ampere if chipset == Chipset::GA100 => ga100::GA100_HAL,
-        Architecture::Ampere => ga102::GA102_HAL,
-        Architecture::Ada | Architecture::Hopper | Architecture::Blackwell => ga102::GA102_HAL,
+        Architecture::Ampere | Architecture::Ada => ga102::GA102_HAL,
+        Architecture::Hopper => gh100::GH100_HAL,
+        Architecture::Blackwell => gb100::GB100_HAL,
     }
 }
diff --git a/drivers/gpu/nova-core/fb/hal/ga102.rs b/drivers/gpu/nova-core/fb/hal/ga102.rs
index 734605905031..f8d8f01e3c5d 100644
--- a/drivers/gpu/nova-core/fb/hal/ga102.rs
+++ b/drivers/gpu/nova-core/fb/hal/ga102.rs
@@ -8,7 +8,7 @@
     regs, //
 };
 
-fn vidmem_size_ga102(bar: &Bar0) -> u64 {
+pub(super) fn vidmem_size_ga102(bar: &Bar0) -> u64 {
     regs::NV_USABLE_FB_SIZE_IN_MB::read(bar).usable_fb_size()
 }
 
diff --git a/drivers/gpu/nova-core/fb/hal/gb100.rs b/drivers/gpu/nova-core/fb/hal/gb100.rs
new file mode 100644
index 000000000000..bead99a6ca76
--- /dev/null
+++ b/drivers/gpu/nova-core/fb/hal/gb100.rs
@@ -0,0 +1,38 @@
+// SPDX-License-Identifier: GPL-2.0
+
+use kernel::prelude::*;
+
+use crate::{
+    driver::Bar0,
+    fb::hal::FbHal, //
+};
+
+struct Gb100;
+
+impl FbHal for Gb100 {
+    fn read_sysmem_flush_page(&self, bar: &Bar0) -> u64 {
+        super::ga100::read_sysmem_flush_page_ga100(bar)
+    }
+
+    fn write_sysmem_flush_page(&self, bar: &Bar0, addr: u64) -> Result {
+        super::ga100::write_sysmem_flush_page_ga100(bar, addr);
+
+        Ok(())
+    }
+
+    fn supports_display(&self, bar: &Bar0) -> bool {
+        super::ga100::display_enabled_ga100(bar)
+    }
+
+    fn vidmem_size(&self, bar: &Bar0) -> u64 {
+        super::ga102::vidmem_size_ga102(bar)
+    }
+
+    fn non_wpr_heap_size(&self) -> Option<u32> {
+        // 2 MiB + 128 KiB non-WPR heap for Blackwell (see Open RM: kgspCalculateFbLayout_GB100).
+        Some(0x220000)
+    }
+}
+
+const GB100: Gb100 = Gb100;
+pub(super) const GB100_HAL: &dyn FbHal = &GB100;
diff --git a/drivers/gpu/nova-core/fb/hal/gh100.rs b/drivers/gpu/nova-core/fb/hal/gh100.rs
new file mode 100644
index 000000000000..32d7414e6243
--- /dev/null
+++ b/drivers/gpu/nova-core/fb/hal/gh100.rs
@@ -0,0 +1,38 @@
+// SPDX-License-Identifier: GPL-2.0
+
+use kernel::prelude::*;
+
+use crate::{
+    driver::Bar0,
+    fb::hal::FbHal, //
+};
+
+struct Gh100;
+
+impl FbHal for Gh100 {
+    fn read_sysmem_flush_page(&self, bar: &Bar0) -> u64 {
+        super::ga100::read_sysmem_flush_page_ga100(bar)
+    }
+
+    fn write_sysmem_flush_page(&self, bar: &Bar0, addr: u64) -> Result {
+        super::ga100::write_sysmem_flush_page_ga100(bar, addr);
+
+        Ok(())
+    }
+
+    fn supports_display(&self, bar: &Bar0) -> bool {
+        super::ga100::display_enabled_ga100(bar)
+    }
+
+    fn vidmem_size(&self, bar: &Bar0) -> u64 {
+        super::ga102::vidmem_size_ga102(bar)
+    }
+
+    fn non_wpr_heap_size(&self) -> Option<u32> {
+        // 2 MiB non-WPR heap for Hopper (see Open RM: kgspCalculateFbLayout_GH100).
+        Some(0x200000)
+    }
+}
+
+const GH100: Gh100 = Gh100;
+pub(super) const GH100_HAL: &dyn FbHal = &GH100;
-- 
2.53.0
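The heap-size selection in the patch above reduces to a small Option-with-fallback pattern. A standalone sketch of that pattern (plain Rust; the Arch enum and SZ_1M constant here stand in for the driver's Chipset/FbHal machinery, so this is illustrative only):

```rust
/// Stand-in for kernel::sizes::SZ_1M, as a u64.
const SZ_1M: u64 = 1 << 20;

/// Stand-in for the driver's Architecture type.
enum Arch {
    Ampere,
    Hopper,
    Blackwell,
}

/// What the HAL's non_wpr_heap_size() hook returns: `None` means
/// "no extra reserved memory needed, use the default".
fn non_wpr_heap_size(arch: Arch) -> Option<u32> {
    match arch {
        Arch::Hopper => Some(0x200000),    // 2 MiB
        Arch::Blackwell => Some(0x220000), // 2 MiB + 128 KiB
        Arch::Ampere => None,
    }
}

/// The calc_non_wpr_heap_size() fallback pattern from the patch:
/// widen the per-arch u32 to u64, or fall back to 1 MiB.
fn calc_non_wpr_heap_size(arch: Arch) -> u64 {
    non_wpr_heap_size(arch).map(u64::from).unwrap_or(SZ_1M)
}

fn main() {
    assert_eq!(calc_non_wpr_heap_size(Arch::Ampere), SZ_1M);
    assert_eq!(calc_non_wpr_heap_size(Arch::Hopper), 2 * SZ_1M);
    assert_eq!(calc_non_wpr_heap_size(Arch::Blackwell), 2 * SZ_1M + 128 * 1024);
}
```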
From: John Hubbard <jhubbard@nvidia.com>
Date: Fri, 20 Feb 2026 18:09:43 -0800
On Sat, Feb 21, 2026 at 3:11 AM John Hubbard <jhubbard@nvidia.com> wrote:
>
> Link: https://lore.kernel.org/rust-for-linux/20260206171253.2704684-2-gary@kernel.org/ [1]

Ah, I thought you wanted to put this in `drivers/gpu/nova-core/num.rs`
like in the previous version. If it is here instead, then you shouldn't
need the `rust_allowed_features` change anymore, because we already
enable `inline_const` in the `kernel` crate.

Having said that, if you do end up needing it elsewhere, then please add
the other line added by Gary's patch, i.e.:

+# - Stable since Rust 1.79.0: `feature(inline_const)`.

Thanks!

Cheers,
Miguel
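For readers unfamiliar with the feature under discussion: a `const { ... }` block (the inline_const form, stable since Rust 1.79) can evaluate an assertion on a const generic at compile time, which is the usual motivation for enabling it for an align-up helper. A hypothetical sketch of that pattern — not the actual rust/kernel/ptr.rs code from the series:

```rust
/// Align `value` up to `ALIGN`, which must be a power of two.
/// The `const { ... }` block forces the power-of-two check to be
/// evaluated at compile time for each monomorphization, so a bad
/// `ALIGN` fails the build rather than panicking at runtime.
const fn const_align_up<const ALIGN: usize>(value: usize) -> usize {
    const { assert!(ALIGN.is_power_of_two()) };
    (value + ALIGN - 1) & !(ALIGN - 1)
}

fn main() {
    assert_eq!(const_align_up::<0x1000>(0x1234), 0x2000);
    assert_eq!(const_align_up::<8>(8), 8);
    assert_eq!(const_align_up::<8>(9), 16);
    // ALIGN of 1 is a valid power of two and is the identity.
    assert_eq!(const_align_up::<1>(7), 7);
}
```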
From: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Date: Sat, 21 Feb 2026 21:50:38 +0100
lkml_critique
lkml
Hi, This is based on today's linux.git. A git branch with this (plus a fix for a CLIPPY warning on a core Rust for Linux issue which I suspect others have already found and fixed) is here: https://github.com/johnhubbard/linux/tree/nova-core-blackwell-v5 This is quite a large overhaul, multiple passes to fix up a lot of issues found during review, and then I found more while doing the fixes. Patch 1 is going to be merged separately, but is included here in order to allow people to apply the series. Patch 2 is going to come from Gary Guo, not here, but is included for the same reason. The last two patches, 37 and 38, do not need to be part of this series, but are best applied *after* the series, in order to catch all the cases. There are a also a few rust/ patches that might need/want to get merged separately. It's been tested on Ampere and Blackwell, one each: NovaCore 0000:e1:00.0: GPU name: NVIDIA RTX A4000 NovaCore 0000:01:00.0: GPU name: NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition Changes in v5 (in highly condensed and summarized form): * Rebased onto linux.git master. * Split MCTP protocol into its own module and file. * Many Rust-based improvements: more use of types, especially. Also used Result and Option more. * Lots of cleanup of comments and print output and error handling. * Added const_align_up() to rust/ and used it in nova-core. This required enabling a Rust feature: inline_const, as recommended by Miguel Ojeda. * Refactoring various things, such as Gpu::new() to own Spec creation, and several more such things. * Fixed three Delta::ZERO busy-polls (patches 21, 24, 31) to use non-zero sleep intervals (after just realizing that it was a bad choice to have zero in there). * Reduced GH100/GB100 HAL duplication. Made FSP_PKEY_SIZE/FSP_SIG_SIZE consistent across patches. Replaced fragile architecture checks with chipset.arch(). Renamed LIBOS_BLACKWELL. 
* Narrowed the scope of some of the #![expect(dead_code)] cases, although that really only matters within the series, not once it is fully applied. John Hubbard (38): gpu: nova-core: fix aux device registration for multi-GPU systems gpu: nova-core: pass pdev directly to dev_* logging macros gpu: nova-core: print FB sizes, along with ranges gpu: nova-core: add FbRange.len() and use it in boot.rs gpu: nova-core: Hopper/Blackwell: basic GPU identification gpu: nova-core: factor .fwsignature* selection into a new find_gsp_sigs_section() gpu: nova-core: use GPU Architecture to simplify HAL selections gpu: nova-core: apply the one "use" item per line policy to commands.rs gpu: nova-core: move GPU init and DMA mask setup into Gpu::new() gpu: nova-core: set DMA mask width based on GPU architecture gpu: nova-core: Hopper/Blackwell: skip GFW boot waiting gpu: nova-core: move firmware image parsing code to firmware.rs gpu: nova-core: factor out an elf_str() function gpu: nova-core: don't assume 64-bit firmware images gpu: nova-core: add support for 32-bit firmware images gpu: nova-core: add auto-detection of 32-bit, 64-bit firmware images gpu: nova-core: Hopper/Blackwell: add FMC firmware image, in support of FSP gpu: nova-core: Hopper/Blackwell: add FSP falcon engine stub gpu: nova-core: Hopper/Blackwell: add FSP falcon EMEM operations gpu: nova-core: Hopper/Blackwell: add FSP message infrastructure rust: ptr: add const_align_up() and enable inline_const feature gpu: nova-core: Hopper/Blackwell: calculate reserved FB heap size gpu: nova-core: add MCTP/NVDM protocol types for firmware communication gpu: nova-core: Hopper/Blackwell: add FSP secure boot completion waiting gpu: nova-core: Hopper/Blackwell: add FSP message structures gpu: nova-core: Hopper/Blackwell: add FMC signature extraction gpu: nova-core: Hopper/Blackwell: add FSP send/receive messaging gpu: nova-core: Hopper/Blackwell: add FspCotVersion type gpu: nova-core: Hopper/Blackwell: larger non-WPR heap gpu: 
nova-core: Hopper/Blackwell: add FSP Chain of Trust boot gpu: nova-core: Blackwell: use correct sysmem flush registers gpu: nova-core: Hopper/Blackwell: larger WPR2 (GSP) heap gpu: nova-core: refactor SEC2 booter loading into BooterFirmware::run() gpu: nova-core: Hopper/Blackwell: add GSP lockdown release polling gpu: nova-core: Hopper/Blackwell: new location for PCI config mirror gpu: nova-core: Hopper/Blackwell: integrate FSP boot path into boot() rust: sizes: add u64 variants of SZ_* constants gpu: nova-core: use SZ_*_U64 constants from kernel::sizes drivers/gpu/nova-core/driver.rs | 32 +- drivers/gpu/nova-core/falcon.rs | 1 + drivers/gpu/nova-core/falcon/fsp.rs | 222 ++++++++++ drivers/gpu/nova-core/falcon/hal.rs | 20 +- drivers/gpu/nova-core/fb.rs | 123 ++++-- drivers/gpu/nova-core/fb/hal.rs | 38 +- drivers/gpu/nova-core/fb/hal/ga102.rs | 2 +- drivers/gpu/nova-core/fb/hal/gb100.rs | 75 ++++ drivers/gpu/nova-core/fb/hal/gb202.rs | 62 +++ drivers/gpu/nova-core/fb/hal/gh100.rs | 38 ++ drivers/gpu/nova-core/firmware.rs | 186 ++++++++ drivers/gpu/nova-core/firmware/booter.rs | 35 +- drivers/gpu/nova-core/firmware/fsp.rs | 46 ++ drivers/gpu/nova-core/firmware/gsp.rs | 140 ++---- drivers/gpu/nova-core/fsp.rs | 525 +++++++++++++++++++++++ drivers/gpu/nova-core/gpu.rs | 119 ++++- drivers/gpu/nova-core/gsp/boot.rs | 318 ++++++++++---- drivers/gpu/nova-core/gsp/commands.rs | 8 +- drivers/gpu/nova-core/gsp/fw.rs | 95 ++-- drivers/gpu/nova-core/gsp/fw/commands.rs | 32 +- drivers/gpu/nova-core/mctp.rs | 105 +++++ drivers/gpu/nova-core/nova_core.rs | 2 + drivers/gpu/nova-core/regs.rs | 103 ++++- rust/kernel/ptr.rs | 27 ++ rust/kernel/sizes.rs | 51 +++ scripts/Makefile.build | 2 +- 26 files changed, 2098 insertions(+), 309 deletions(-) create mode 100644 drivers/gpu/nova-core/falcon/fsp.rs create mode 100644 drivers/gpu/nova-core/fb/hal/gb100.rs create mode 100644 drivers/gpu/nova-core/fb/hal/gb202.rs create mode 100644 drivers/gpu/nova-core/fb/hal/gh100.rs create mode 100644 
drivers/gpu/nova-core/firmware/fsp.rs create mode 100644 drivers/gpu/nova-core/fsp.rs create mode 100644 drivers/gpu/nova-core/mctp.rs base-commit: a95f71ad3e2e224277508e006580c333d0a5fe36 prerequisite-patch-id: 1ec0faa352dab8fa7c0f209474b75cd21931340d -- 2.53.0
Subject: Re: [PATCH v5 00/38] gpu: nova-core: firmware: Hopper/Blackwell support
From: Gary Guo <gary@garyguo.net>
Date: Sun, 22 Feb 2026 07:46:47 +0000

On 2026-02-21 02:09, John Hubbard wrote:

This is wrong. Either this function is always used in const context, in
which case you take `ALIGN` as a normal function parameter and use
`build_assert` and `build_error`, or this function can be called from
runtime, in which case you shouldn't have a panic call here.

Best,
Gary
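For illustration, the runtime-capable option Gary describes can be sketched in plain userspace Rust: a const-callable align-up that signals failure through `Option` rather than panicking, so runtime callers can handle the error while const callers turn it into a build failure. The signature and names below are hypothetical, not taken from the patch:

```rust
// Hypothetical sketch of a fallible, const-callable align-up. Returns None
// instead of panicking, so it is safe to call from runtime code paths.
pub const fn const_align_up(value: usize, align: usize) -> Option<usize> {
    // Alignment must be a nonzero power of two.
    if !align.is_power_of_two() {
        return None;
    }
    // checked_add guards against overflow near usize::MAX.
    match value.checked_add(align - 1) {
        Some(v) => Some(v & !(align - 1)),
        None => None,
    }
}

// In const context, a failure becomes a compile-time error instead.
const ALIGNED: usize = match const_align_up(5, 4) {
    Some(v) => v,
    None => panic!("bad alignment"),
};

fn main() {
    assert_eq!(ALIGNED, 8);
    assert_eq!(const_align_up(17, 16), Some(32));
    assert_eq!(const_align_up(usize::MAX, 2), None); // overflow
    assert_eq!(const_align_up(8, 3), None); // not a power of two
}
```

In the kernel proper, the const-only variant would reach for `build_assert!`/`build_error!` as Gary suggests; the `Option` form above is just the shape of the runtime-safe alternative.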
Subject: Re: [PATCH v5 00/38] gpu: nova-core: firmware: Hopper/Blackwell support
From: John Hubbard <jhubbard@nvidia.com>
Date: Sun, 22 Feb 2026 11:03:08 -0800

On 2/21/26 12:50 PM, Miguel Ojeda wrote:

Works for me. I was anticipating that people wanted it in rust/, but I'm
perfectly happy to keep it local to nova-core.

I see.

thanks,
--
John Hubbard
Subject: Re: [PATCH v5 00/38] gpu: nova-core: firmware: Hopper/Blackwell support
From: John Hubbard <jhubbard@nvidia.com>
Date: Sun, 22 Feb 2026 11:04:53 -0800

On 2/21/26 11:46 PM, Gary Guo wrote:

...

I will have another go at this, and put it in nova-core, as per Miguel's
comment as well. Thanks for catching this, Gary!

thanks,
--
John Hubbard
Subject: Re: [PATCH v5 00/38] gpu: nova-core: firmware: Hopper/Blackwell support
From: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Date: Sun, 22 Feb 2026 20:08:43 +0100

On Sun, Feb 22, 2026 at 8:03 PM John Hubbard <jhubbard@nvidia.com> wrote:

Sorry, I didn't mean you necessarily need to move it -- I only meant to
point out that if you do, then you don't need the other changes.

Cheers,
Miguel
Subject: Re: [PATCH v5 00/38] gpu: nova-core: firmware: Hopper/Blackwell support
From: Alexandre Courbot <acourbot@nvidia.com>
Date: Mon, 23 Feb 2026 12:36:12 +0900

On Mon Feb 23, 2026 at 4:08 AM JST, Miguel Ojeda wrote:

FWIW, I think it makes more sense to keep it in `kernel` - even though
Nova is the only user for now, this is a useful addition in general.
lkml_critique
lkml
Hi, This is based on today's linux.git. A git branch with this (plus a fix for a CLIPPY warning on a core Rust for Linux issue which I suspect others have already found and fixed) is here: https://github.com/johnhubbard/linux/tree/nova-core-blackwell-v5 This is quite a large overhaul, multiple passes to fix up a lot of issues found during review, and then I found more while doing the fixes. Patch 1 is going to be merged separately, but is included here in order to allow people to apply the series. Patch 2 is going to come from Gary Guo, not here, but is included for the same reason. The last two patches, 37 and 38, do not need to be part of this series, but are best applied *after* the series, in order to catch all the cases. There are a also a few rust/ patches that might need/want to get merged separately. It's been tested on Ampere and Blackwell, one each: NovaCore 0000:e1:00.0: GPU name: NVIDIA RTX A4000 NovaCore 0000:01:00.0: GPU name: NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition Changes in v5 (in highly condensed and summarized form): * Rebased onto linux.git master. * Split MCTP protocol into its own module and file. * Many Rust-based improvements: more use of types, especially. Also used Result and Option more. * Lots of cleanup of comments and print output and error handling. * Added const_align_up() to rust/ and used it in nova-core. This required enabling a Rust feature: inline_const, as recommended by Miguel Ojeda. * Refactoring various things, such as Gpu::new() to own Spec creation, and several more such things. * Fixed three Delta::ZERO busy-polls (patches 21, 24, 31) to use non-zero sleep intervals (after just realizing that it was a bad choice to have zero in there). * Reduced GH100/GB100 HAL duplication. Made FSP_PKEY_SIZE/FSP_SIG_SIZE consistent across patches. Replaced fragile architecture checks with chipset.arch(). Renamed LIBOS_BLACKWELL. 
* Narrowed the scope of some of the #![expect(dead_code)] cases, although that really only matters within the series, not once it is fully applied. John Hubbard (38): gpu: nova-core: fix aux device registration for multi-GPU systems gpu: nova-core: pass pdev directly to dev_* logging macros gpu: nova-core: print FB sizes, along with ranges gpu: nova-core: add FbRange.len() and use it in boot.rs gpu: nova-core: Hopper/Blackwell: basic GPU identification gpu: nova-core: factor .fwsignature* selection into a new find_gsp_sigs_section() gpu: nova-core: use GPU Architecture to simplify HAL selections gpu: nova-core: apply the one "use" item per line policy to commands.rs gpu: nova-core: move GPU init and DMA mask setup into Gpu::new() gpu: nova-core: set DMA mask width based on GPU architecture gpu: nova-core: Hopper/Blackwell: skip GFW boot waiting gpu: nova-core: move firmware image parsing code to firmware.rs gpu: nova-core: factor out an elf_str() function gpu: nova-core: don't assume 64-bit firmware images gpu: nova-core: add support for 32-bit firmware images gpu: nova-core: add auto-detection of 32-bit, 64-bit firmware images gpu: nova-core: Hopper/Blackwell: add FMC firmware image, in support of FSP gpu: nova-core: Hopper/Blackwell: add FSP falcon engine stub gpu: nova-core: Hopper/Blackwell: add FSP falcon EMEM operations gpu: nova-core: Hopper/Blackwell: add FSP message infrastructure rust: ptr: add const_align_up() and enable inline_const feature gpu: nova-core: Hopper/Blackwell: calculate reserved FB heap size gpu: nova-core: add MCTP/NVDM protocol types for firmware communication gpu: nova-core: Hopper/Blackwell: add FSP secure boot completion waiting gpu: nova-core: Hopper/Blackwell: add FSP message structures gpu: nova-core: Hopper/Blackwell: add FMC signature extraction gpu: nova-core: Hopper/Blackwell: add FSP send/receive messaging gpu: nova-core: Hopper/Blackwell: add FspCotVersion type gpu: nova-core: Hopper/Blackwell: larger non-WPR heap gpu: 
nova-core: Hopper/Blackwell: add FSP Chain of Trust boot
  gpu: nova-core: Blackwell: use correct sysmem flush registers
  gpu: nova-core: Hopper/Blackwell: larger WPR2 (GSP) heap
  gpu: nova-core: refactor SEC2 booter loading into BooterFirmware::run()
  gpu: nova-core: Hopper/Blackwell: add GSP lockdown release polling
  gpu: nova-core: Hopper/Blackwell: new location for PCI config mirror
  gpu: nova-core: Hopper/Blackwell: integrate FSP boot path into boot()
  rust: sizes: add u64 variants of SZ_* constants
  gpu: nova-core: use SZ_*_U64 constants from kernel::sizes

 drivers/gpu/nova-core/driver.rs          |  32 +-
 drivers/gpu/nova-core/falcon.rs          |   1 +
 drivers/gpu/nova-core/falcon/fsp.rs      | 222 ++++++++++
 drivers/gpu/nova-core/falcon/hal.rs      |  20 +-
 drivers/gpu/nova-core/fb.rs              | 123 ++++--
 drivers/gpu/nova-core/fb/hal.rs          |  38 +-
 drivers/gpu/nova-core/fb/hal/ga102.rs    |   2 +-
 drivers/gpu/nova-core/fb/hal/gb100.rs    |  75 ++++
 drivers/gpu/nova-core/fb/hal/gb202.rs    |  62 +++
 drivers/gpu/nova-core/fb/hal/gh100.rs    |  38 ++
 drivers/gpu/nova-core/firmware.rs        | 186 ++++++++
 drivers/gpu/nova-core/firmware/booter.rs |  35 +-
 drivers/gpu/nova-core/firmware/fsp.rs    |  46 ++
 drivers/gpu/nova-core/firmware/gsp.rs    | 140 ++----
 drivers/gpu/nova-core/fsp.rs             | 525 +++++++++++++++++++++++
 drivers/gpu/nova-core/gpu.rs             | 119 ++++-
 drivers/gpu/nova-core/gsp/boot.rs        | 318 ++++++++++----
 drivers/gpu/nova-core/gsp/commands.rs    |   8 +-
 drivers/gpu/nova-core/gsp/fw.rs          |  95 ++--
 drivers/gpu/nova-core/gsp/fw/commands.rs |  32 +-
 drivers/gpu/nova-core/mctp.rs            | 105 +++++
 drivers/gpu/nova-core/nova_core.rs       |   2 +
 drivers/gpu/nova-core/regs.rs            | 103 ++++-
 rust/kernel/ptr.rs                       |  27 ++
 rust/kernel/sizes.rs                     |  51 +++
 scripts/Makefile.build                   |   2 +-
 26 files changed, 2098 insertions(+), 309 deletions(-)
 create mode 100644 drivers/gpu/nova-core/falcon/fsp.rs
 create mode 100644 drivers/gpu/nova-core/fb/hal/gb100.rs
 create mode 100644 drivers/gpu/nova-core/fb/hal/gb202.rs
 create mode 100644 drivers/gpu/nova-core/fb/hal/gh100.rs
 create mode 100644 drivers/gpu/nova-core/firmware/fsp.rs
 create mode 100644 drivers/gpu/nova-core/fsp.rs
 create mode 100644 drivers/gpu/nova-core/mctp.rs

base-commit: a95f71ad3e2e224277508e006580c333d0a5fe36
prerequisite-patch-id: 1ec0faa352dab8fa7c0f209474b75cd21931340d
-- 
2.53.0
[PATCH v5 00/38] gpu: nova-core: firmware: Hopper/Blackwell support
On Sun Feb 22, 2026 at 8:04 PM CET, John Hubbard wrote:

I think the most common case is that ALIGN is const, but value is not. What about keeping the function as is (with the panic() replaced with a Result), and also adding:

    #[inline(always)]
    pub const fn const_expect<T: Copy>(opt: Result<T>, msg: &'static str) -> T {
        match opt {
            Ok(v) => v,
            Err(_) => panic!(""),
        }
    }

for when it is entirely called from const context, e.g.:

    pub(crate) const PMU_RESERVED_SIZE: u32 =
        const_expect(const_align_up::<SZ_128K>(SZ_8M + SZ_16M + SZ_4K), "...");

I think Miguel didn't mean to say it should not be in this file. I think the current place makes sense, let's keep it there.
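[The sketch above, expanded into a self-contained, compilable form. The `Result<T, ()>` error type stands in for the kernel's `Result<T>` alias, and the body of `const_align_up` shown here is an assumed implementation for illustration, not necessarily the in-tree one:]

```rust
// Standalone sketch of the helpers discussed above. `Result<T, ()>` is a
// stand-in for the kernel's `Result<T>`.

/// Round `value` up to the next multiple of `ALIGN` (a power of two),
/// returning Err on overflow instead of panicking.
pub const fn const_align_up<const ALIGN: usize>(value: usize) -> Result<usize, ()> {
    // Rejected at const-evaluation time when called from const context.
    assert!(ALIGN.is_power_of_two());
    match value.checked_add(ALIGN - 1) {
        Some(v) => Ok(v & !(ALIGN - 1)),
        None => Err(()),
    }
}

/// Unwrap a Result in const context; when the caller is a const item,
/// the failure surfaces as a compile-time error rather than at runtime.
#[inline(always)]
pub const fn const_expect<T: Copy>(res: Result<T, ()>, msg: &'static str) -> T {
    match res {
        Ok(v) => v,
        // `panic!("{}", msg)` with a `&str` argument is the one formatted
        // panic form permitted in const contexts.
        Err(_) => panic!("{}", msg),
    }
}

const SZ_4K: usize = 4 << 10;
const SZ_128K: usize = 128 << 10;

// Mirrors the PMU_RESERVED_SIZE usage: the whole expression is evaluated
// at compile time.
pub const RESERVED: usize =
    const_expect(const_align_up::<SZ_128K>(SZ_4K + 1), "alignment overflowed");

fn main() {
    assert_eq!(RESERVED, SZ_128K);
    assert_eq!(const_align_up::<4096>(5000), Ok(8192));
    assert!(const_align_up::<4096>(usize::MAX).is_err());
}
```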
From: "Danilo Krummrich" <dakr@kernel.org>
Date: Mon, 23 Feb 2026 12:07:14 +0100
On Fri, Feb 20, 2026 at 06:09:35PM -0800, John Hubbard wrote: Note that Rust Binder's ptr_align could use this if you want another user. Alice
From: Alice Ryhl <aliceryhl@google.com>
Date: Mon, 23 Feb 2026 11:23:44 +0000
On 2026-02-23 11:07, Danilo Krummrich wrote:

We already have `Alignable::align_up` for non-const cases, so this would only be used in const context, and I don't see the need for an explicit const_expect? Best, Gary
From: Gary Guo <gary@garyguo.net>
Date: Mon, 23 Feb 2026 14:16:55 +0000
On Mon Feb 23, 2026 at 3:16 PM CET, Gary Guo wrote: Fair enough -- unfortunate we can't call this from const context.
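[The limitation in question: on stable Rust, trait methods cannot be `const fn`, so a trait-based align-up cannot initialize a `const` item, while a free `const fn` can. A minimal illustration; the trait shape below is assumed for this sketch, not the kernel's actual `Alignable`:]

```rust
// A trait-based align_up works at runtime but cannot be marked `const`
// on stable Rust, so it is unusable for initializing `const` items.
trait Alignable: Sized {
    fn align_up(self, align: Self) -> Option<Self>;
}

impl Alignable for u64 {
    fn align_up(self, align: u64) -> Option<u64> {
        debug_assert!(align.is_power_of_two());
        self.checked_add(align - 1).map(|v| v & !(align - 1))
    }
}

// A free const fn covers the const-context case the trait cannot.
const fn align_up_u64(value: u64, align: u64) -> Option<u64> {
    match value.checked_add(align - 1) {
        Some(v) => Some(v & !(align - 1)),
        None => None,
    }
}

// OK: evaluated at compile time.
const HEAP: u64 = match align_up_u64(100, 64) {
    Some(v) => v,
    None => panic!("overflow"),
};
// const BAD: u64 = 100u64.align_up(64).unwrap(); // error: not a const fn

fn main() {
    assert_eq!(HEAP, 128);
    assert_eq!(100u64.align_up(64), Some(128));
    assert_eq!(u64::MAX.align_up(64), None);
}
```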
From: "Danilo Krummrich" <dakr@kernel.org>
Date: Mon, 23 Feb 2026 15:20:40 +0100
On 2/21/26 3:09 AM, John Hubbard wrote: Applied to drm-rust-next, thanks! [ Use LKMM atomics; inline and slightly reword TODO comment. - Danilo ]
From: Danilo Krummrich <dakr@kernel.org>
Date: Tue, 24 Feb 2026 15:47:53 +0100
On Tue Feb 24, 2026 at 2:47 PM GMT, Danilo Krummrich wrote: Danilo, can you drop this patch from drm-rust-next? The patch that is supposed to be queued is https://lore.kernel.org/rust-for-linux/20260205221758.219192-1-jhubbard@nvidia.com/#t, which does correctly use LKMM atomics and add comments about possible use of XArray. In fact, I am not sure why this patch carries my R-b. Best, Gary
{ "author": "\"Gary Guo\" <gary@garyguo.net>", "date": "Fri, 27 Feb 2026 15:37:31 +0000", "is_openbsd": false, "thread_id": "DGPUWPJCFPZH.4NVAAQS1I6HR@garyguo.net.mbox.gz" }
lkml_critique
lkml
Hi, This is based on today's linux.git. A git branch with this (plus a fix for a CLIPPY warning on a core Rust for Linux issue which I suspect others have already found and fixed) is here: https://github.com/johnhubbard/linux/tree/nova-core-blackwell-v5 This is quite a large overhaul: multiple passes to fix up a lot of issues found during review, and then I found more while doing the fixes. Patch 1 is going to be merged separately, but is included here in order to allow people to apply the series. Patch 2 is going to come from Gary Guo, not here, but is included for the same reason. The last two patches, 37 and 38, do not need to be part of this series, but are best applied *after* the series, in order to catch all the cases. There are also a few rust/ patches that might need/want to get merged separately. It's been tested on Ampere and Blackwell, one each: NovaCore 0000:e1:00.0: GPU name: NVIDIA RTX A4000 NovaCore 0000:01:00.0: GPU name: NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition Changes in v5 (in highly condensed and summarized form): * Rebased onto linux.git master. * Split MCTP protocol into its own module and file. * Many Rust-based improvements: more use of types, especially. Also used Result and Option more. * Lots of cleanup of comments and print output and error handling. * Added const_align_up() to rust/ and used it in nova-core. This required enabling a Rust feature: inline_const, as recommended by Miguel Ojeda. * Refactoring various things, such as Gpu::new() to own Spec creation, and several more such things. * Fixed three Delta::ZERO busy-polls (patches 21, 24, 31) to use non-zero sleep intervals (after just realizing that it was a bad choice to have zero in there). * Reduced GH100/GB100 HAL duplication. Made FSP_PKEY_SIZE/FSP_SIG_SIZE consistent across patches. Replaced fragile architecture checks with chipset.arch(). Renamed LIBOS_BLACKWELL. 
* Narrowed the scope of some of the #![expect(dead_code)] cases, although that really only matters within the series, not once it is fully applied. John Hubbard (38): gpu: nova-core: fix aux device registration for multi-GPU systems gpu: nova-core: pass pdev directly to dev_* logging macros gpu: nova-core: print FB sizes, along with ranges gpu: nova-core: add FbRange.len() and use it in boot.rs gpu: nova-core: Hopper/Blackwell: basic GPU identification gpu: nova-core: factor .fwsignature* selection into a new find_gsp_sigs_section() gpu: nova-core: use GPU Architecture to simplify HAL selections gpu: nova-core: apply the one "use" item per line policy to commands.rs gpu: nova-core: move GPU init and DMA mask setup into Gpu::new() gpu: nova-core: set DMA mask width based on GPU architecture gpu: nova-core: Hopper/Blackwell: skip GFW boot waiting gpu: nova-core: move firmware image parsing code to firmware.rs gpu: nova-core: factor out an elf_str() function gpu: nova-core: don't assume 64-bit firmware images gpu: nova-core: add support for 32-bit firmware images gpu: nova-core: add auto-detection of 32-bit, 64-bit firmware images gpu: nova-core: Hopper/Blackwell: add FMC firmware image, in support of FSP gpu: nova-core: Hopper/Blackwell: add FSP falcon engine stub gpu: nova-core: Hopper/Blackwell: add FSP falcon EMEM operations gpu: nova-core: Hopper/Blackwell: add FSP message infrastructure rust: ptr: add const_align_up() and enable inline_const feature gpu: nova-core: Hopper/Blackwell: calculate reserved FB heap size gpu: nova-core: add MCTP/NVDM protocol types for firmware communication gpu: nova-core: Hopper/Blackwell: add FSP secure boot completion waiting gpu: nova-core: Hopper/Blackwell: add FSP message structures gpu: nova-core: Hopper/Blackwell: add FMC signature extraction gpu: nova-core: Hopper/Blackwell: add FSP send/receive messaging gpu: nova-core: Hopper/Blackwell: add FspCotVersion type gpu: nova-core: Hopper/Blackwell: larger non-WPR heap gpu: 
nova-core: Hopper/Blackwell: add FSP Chain of Trust boot gpu: nova-core: Blackwell: use correct sysmem flush registers gpu: nova-core: Hopper/Blackwell: larger WPR2 (GSP) heap gpu: nova-core: refactor SEC2 booter loading into BooterFirmware::run() gpu: nova-core: Hopper/Blackwell: add GSP lockdown release polling gpu: nova-core: Hopper/Blackwell: new location for PCI config mirror gpu: nova-core: Hopper/Blackwell: integrate FSP boot path into boot() rust: sizes: add u64 variants of SZ_* constants gpu: nova-core: use SZ_*_U64 constants from kernel::sizes drivers/gpu/nova-core/driver.rs | 32 +- drivers/gpu/nova-core/falcon.rs | 1 + drivers/gpu/nova-core/falcon/fsp.rs | 222 ++++++++++ drivers/gpu/nova-core/falcon/hal.rs | 20 +- drivers/gpu/nova-core/fb.rs | 123 ++++-- drivers/gpu/nova-core/fb/hal.rs | 38 +- drivers/gpu/nova-core/fb/hal/ga102.rs | 2 +- drivers/gpu/nova-core/fb/hal/gb100.rs | 75 ++++ drivers/gpu/nova-core/fb/hal/gb202.rs | 62 +++ drivers/gpu/nova-core/fb/hal/gh100.rs | 38 ++ drivers/gpu/nova-core/firmware.rs | 186 ++++++++ drivers/gpu/nova-core/firmware/booter.rs | 35 +- drivers/gpu/nova-core/firmware/fsp.rs | 46 ++ drivers/gpu/nova-core/firmware/gsp.rs | 140 ++---- drivers/gpu/nova-core/fsp.rs | 525 +++++++++++++++++++++++ drivers/gpu/nova-core/gpu.rs | 119 ++++- drivers/gpu/nova-core/gsp/boot.rs | 318 ++++++++++---- drivers/gpu/nova-core/gsp/commands.rs | 8 +- drivers/gpu/nova-core/gsp/fw.rs | 95 ++-- drivers/gpu/nova-core/gsp/fw/commands.rs | 32 +- drivers/gpu/nova-core/mctp.rs | 105 +++++ drivers/gpu/nova-core/nova_core.rs | 2 + drivers/gpu/nova-core/regs.rs | 103 ++++- rust/kernel/ptr.rs | 27 ++ rust/kernel/sizes.rs | 51 +++ scripts/Makefile.build | 2 +- 26 files changed, 2098 insertions(+), 309 deletions(-) create mode 100644 drivers/gpu/nova-core/falcon/fsp.rs create mode 100644 drivers/gpu/nova-core/fb/hal/gb100.rs create mode 100644 drivers/gpu/nova-core/fb/hal/gb202.rs create mode 100644 drivers/gpu/nova-core/fb/hal/gh100.rs create mode 100644 
drivers/gpu/nova-core/firmware/fsp.rs create mode 100644 drivers/gpu/nova-core/fsp.rs create mode 100644 drivers/gpu/nova-core/mctp.rs base-commit: a95f71ad3e2e224277508e006580c333d0a5fe36 prerequisite-patch-id: 1ec0faa352dab8fa7c0f209474b75cd21931340d -- 2.53.0
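The cover letter mentions adding a const_align_up() helper to rust/. A minimal sketch of what such a const-evaluable align-up helper could look like is below; this is illustrative only, and the actual signature and implementation in the kernel's rust/ tree may differ.

```rust
// Hypothetical sketch of a const-evaluable align-up helper, in the
// spirit of the const_align_up() mentioned in the cover letter.
// Not the actual kernel implementation.
pub const fn const_align_up(value: usize, align: usize) -> usize {
    // `align` is assumed to be a non-zero power of two.
    assert!(align.is_power_of_two());
    (value + align - 1) & !(align - 1)
}

fn main() {
    assert_eq!(const_align_up(0x1001, 0x1000), 0x2000);
    assert_eq!(const_align_up(0x1000, 0x1000), 0x1000);
    // Being `const fn` is the point: it can size constants at compile time,
    // e.g. a hypothetical reserved-heap size rounded up to 256 bytes.
    const HEAP: usize = const_align_up(300, 256);
    assert_eq!(HEAP, 512);
}
```

The power-of-two restriction lets the rounding be a mask instead of a division, which is the usual reason helpers like this assert on it.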
null
null
null
[PATCH v5 00/38] gpu: nova-core: firmware: Hopper/Blackwell support
On Fri Feb 27, 2026 at 3:37 PM GMT, Gary Guo wrote: Hmm, actually this patch contains the updated comment but somehow has the LKMM atomics changed back to Rust atomics. Not sure what happened. Anyhow, that patch should be picked instead. Best, Gary
{ "author": "\"Gary Guo\" <gary@garyguo.net>", "date": "Fri, 27 Feb 2026 15:41:20 +0000", "is_openbsd": false, "thread_id": "DGPUWPJCFPZH.4NVAAQS1I6HR@garyguo.net.mbox.gz" }
lkml_critique
lkml
Add reference counting using kref to the fastrpc_user structure to prevent use-after-free issues when contexts are freed from workqueue after device release. The issue occurs when fastrpc_device_release() frees the user structure while invoke contexts are still pending in the workqueue. When the workqueue later calls fastrpc_context_free(), it attempts to access buf->fl->cctx in fastrpc_buf_free(), leading to a use-after-free: pc : fastrpc_buf_free+0x38/0x80 [fastrpc] lr : fastrpc_context_free+0xa8/0x1b0 [fastrpc] ... fastrpc_context_free+0xa8/0x1b0 [fastrpc] fastrpc_context_put_wq+0x78/0xa0 [fastrpc] process_one_work+0x180/0x450 worker_thread+0x26c/0x388 Implement proper reference counting to fix this: - Initialize kref in fastrpc_device_open() - Take a reference in fastrpc_context_alloc() for each context - Release the reference in fastrpc_context_free() when context is freed - Release the initial reference in fastrpc_device_release() This ensures the user structure remains valid as long as there are contexts holding references to it, preventing the race condition. 
Fixes: 6cffd79504ce ("misc: fastrpc: Add support for dmabuf exporter") Cc: stable@kernel.org Signed-off-by: Anandu Krishnan E <anandu.e@oss.qualcomm.com> --- drivers/misc/fastrpc.c | 35 +++++++++++++++++++++++++++++++---- 1 file changed, 31 insertions(+), 4 deletions(-) diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c index 47356a5d5804..3ababcf327d7 100644 --- a/drivers/misc/fastrpc.c +++ b/drivers/misc/fastrpc.c @@ -310,6 +310,8 @@ struct fastrpc_user { spinlock_t lock; /* lock for allocations */ struct mutex mutex; + /* Reference count */ + struct kref refcount; }; /* Extract SMMU PA from consolidated IOVA */ @@ -497,15 +499,36 @@ static void fastrpc_channel_ctx_put(struct fastrpc_channel_ctx *cctx) kref_put(&cctx->refcount, fastrpc_channel_ctx_free); } +static void fastrpc_user_free(struct kref *ref) +{ + struct fastrpc_user *fl = container_of(ref, struct fastrpc_user, refcount); + + fastrpc_channel_ctx_put(fl->cctx); + mutex_destroy(&fl->mutex); + kfree(fl); +} + +static void fastrpc_user_get(struct fastrpc_user *fl) +{ + kref_get(&fl->refcount); +} + +static void fastrpc_user_put(struct fastrpc_user *fl) +{ + kref_put(&fl->refcount, fastrpc_user_free); +} + static void fastrpc_context_free(struct kref *ref) { struct fastrpc_invoke_ctx *ctx; struct fastrpc_channel_ctx *cctx; + struct fastrpc_user *fl; unsigned long flags; int i; ctx = container_of(ref, struct fastrpc_invoke_ctx, refcount); cctx = ctx->cctx; + fl = ctx->fl; for (i = 0; i < ctx->nbufs; i++) fastrpc_map_put(ctx->maps[i]); @@ -521,6 +544,8 @@ static void fastrpc_context_free(struct kref *ref) kfree(ctx->olaps); kfree(ctx); + /* Release the reference taken in fastrpc_context_alloc() */ + fastrpc_user_put(fl); fastrpc_channel_ctx_put(cctx); } @@ -628,6 +653,8 @@ static struct fastrpc_invoke_ctx *fastrpc_context_alloc( /* Released in fastrpc_context_put() */ fastrpc_channel_ctx_get(cctx); + /* Take a reference to user, released in fastrpc_context_free() */ + fastrpc_user_get(user); 
ctx->sc = sc; ctx->retval = -1; @@ -658,6 +685,7 @@ static struct fastrpc_invoke_ctx *fastrpc_context_alloc( spin_lock(&user->lock); list_del(&ctx->node); spin_unlock(&user->lock); + fastrpc_user_put(user); fastrpc_channel_ctx_put(cctx); kfree(ctx->maps); kfree(ctx->olaps); @@ -1606,11 +1634,9 @@ static int fastrpc_device_release(struct inode *inode, struct file *file) } fastrpc_session_free(cctx, fl->sctx); - fastrpc_channel_ctx_put(cctx); - - mutex_destroy(&fl->mutex); - kfree(fl); file->private_data = NULL; + /* Release the reference taken in fastrpc_device_open */ + fastrpc_user_put(fl); return 0; } @@ -1654,6 +1680,7 @@ static int fastrpc_device_open(struct inode *inode, struct file *filp) spin_lock_irqsave(&cctx->lock, flags); list_add_tail(&fl->user, &cctx->users); spin_unlock_irqrestore(&cctx->lock, flags); + kref_init(&fl->refcount); return 0; } -- 2.34.1
null
null
null
[PATCH v1] misc: fastrpc: Add reference counting for fastrpc_user structure
On Thu, Feb 26, 2026 at 08:41:21PM +0530, Anandu Krishnan E wrote: Please follow https://docs.kernel.org/process/submitting-patches.html#describe-your-changes and start your commit message by clearly establishing the problem; once that's done you can describe the technical solution. But why does it do that? The reason why we need buf->fl->cctx in this context is because we need to mask out the DMA address from the buf->dma_addr (remove the SID bits). If we instead split "dma_addr" into two separate members of struct fastrpc_buf, one dma_addr_t dma_addr and one u64 iova_with_sid (?), then it would be clear throughout the driver which address space we're talking about (is it the SID-adjusted iova space or the dma_addr_t in the particular DMA context). In addition to making this aspect of the driver easier to follow, you no longer need to call fastrpc_ipa_to_dma_addr() in fastrpc_buf_free() (or anywhere else for that matter). You can just pass buf->dma_addr directly to dma_free_coherent(). You're isolating the fact that the firmware needs to see "SID | dma_addr" to just two places in the driver. There's no reason to include a checklist of pseudo-code in the commit message; describe the why and the overall design if this isn't obvious. The life cycles at play in this driver are already very hard to reason about. Unrelated question, what does the "fl" abbreviation actually mean? I presume 'f' is for "fastrpc", but what is 'l'? Regards, Bjorn
{ "author": "Bjorn Andersson <andersson@kernel.org>", "date": "Thu, 26 Feb 2026 11:50:11 -0600", "is_openbsd": false, "thread_id": "07d585fe-dfd1-41c9-9c58-b2f9893e572e@oss.qualcomm.com.mbox.gz" }
lkml_critique
lkml
Add reference counting using kref to the fastrpc_user structure to prevent use-after-free issues when contexts are freed from workqueue after device release. The issue occurs when fastrpc_device_release() frees the user structure while invoke contexts are still pending in the workqueue. When the workqueue later calls fastrpc_context_free(), it attempts to access buf->fl->cctx in fastrpc_buf_free(), leading to a use-after-free: pc : fastrpc_buf_free+0x38/0x80 [fastrpc] lr : fastrpc_context_free+0xa8/0x1b0 [fastrpc] ... fastrpc_context_free+0xa8/0x1b0 [fastrpc] fastrpc_context_put_wq+0x78/0xa0 [fastrpc] process_one_work+0x180/0x450 worker_thread+0x26c/0x388 Implement proper reference counting to fix this: - Initialize kref in fastrpc_device_open() - Take a reference in fastrpc_context_alloc() for each context - Release the reference in fastrpc_context_free() when context is freed - Release the initial reference in fastrpc_device_release() This ensures the user structure remains valid as long as there are contexts holding references to it, preventing the race condition. 
Fixes: 6cffd79504ce ("misc: fastrpc: Add support for dmabuf exporter") Cc: stable@kernel.org Signed-off-by: Anandu Krishnan E <anandu.e@oss.qualcomm.com> --- drivers/misc/fastrpc.c | 35 +++++++++++++++++++++++++++++++---- 1 file changed, 31 insertions(+), 4 deletions(-) diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c index 47356a5d5804..3ababcf327d7 100644 --- a/drivers/misc/fastrpc.c +++ b/drivers/misc/fastrpc.c @@ -310,6 +310,8 @@ struct fastrpc_user { spinlock_t lock; /* lock for allocations */ struct mutex mutex; + /* Reference count */ + struct kref refcount; }; /* Extract SMMU PA from consolidated IOVA */ @@ -497,15 +499,36 @@ static void fastrpc_channel_ctx_put(struct fastrpc_channel_ctx *cctx) kref_put(&cctx->refcount, fastrpc_channel_ctx_free); } +static void fastrpc_user_free(struct kref *ref) +{ + struct fastrpc_user *fl = container_of(ref, struct fastrpc_user, refcount); + + fastrpc_channel_ctx_put(fl->cctx); + mutex_destroy(&fl->mutex); + kfree(fl); +} + +static void fastrpc_user_get(struct fastrpc_user *fl) +{ + kref_get(&fl->refcount); +} + +static void fastrpc_user_put(struct fastrpc_user *fl) +{ + kref_put(&fl->refcount, fastrpc_user_free); +} + static void fastrpc_context_free(struct kref *ref) { struct fastrpc_invoke_ctx *ctx; struct fastrpc_channel_ctx *cctx; + struct fastrpc_user *fl; unsigned long flags; int i; ctx = container_of(ref, struct fastrpc_invoke_ctx, refcount); cctx = ctx->cctx; + fl = ctx->fl; for (i = 0; i < ctx->nbufs; i++) fastrpc_map_put(ctx->maps[i]); @@ -521,6 +544,8 @@ static void fastrpc_context_free(struct kref *ref) kfree(ctx->olaps); kfree(ctx); + /* Release the reference taken in fastrpc_context_alloc() */ + fastrpc_user_put(fl); fastrpc_channel_ctx_put(cctx); } @@ -628,6 +653,8 @@ static struct fastrpc_invoke_ctx *fastrpc_context_alloc( /* Released in fastrpc_context_put() */ fastrpc_channel_ctx_get(cctx); + /* Take a reference to user, released in fastrpc_context_free() */ + fastrpc_user_get(user); 
ctx->sc = sc; ctx->retval = -1; @@ -658,6 +685,7 @@ static struct fastrpc_invoke_ctx *fastrpc_context_alloc( spin_lock(&user->lock); list_del(&ctx->node); spin_unlock(&user->lock); + fastrpc_user_put(user); fastrpc_channel_ctx_put(cctx); kfree(ctx->maps); kfree(ctx->olaps); @@ -1606,11 +1634,9 @@ static int fastrpc_device_release(struct inode *inode, struct file *file) } fastrpc_session_free(cctx, fl->sctx); - fastrpc_channel_ctx_put(cctx); - - mutex_destroy(&fl->mutex); - kfree(fl); file->private_data = NULL; + /* Release the reference taken in fastrpc_device_open */ + fastrpc_user_put(fl); return 0; } @@ -1654,6 +1680,7 @@ static int fastrpc_device_open(struct inode *inode, struct file *filp) spin_lock_irqsave(&cctx->lock, flags); list_add_tail(&fl->user, &cctx->users); spin_unlock_irqrestore(&cctx->lock, flags); + kref_init(&fl->refcount); return 0; } -- 2.34.1
null
null
null
[PATCH v1] misc: fastrpc: Add reference counting for fastrpc_user structure
On 2/26/2026 11:20 PM, Bjorn Andersson wrote: Sure, will update the commit message and send it as patch v2. I agree with the refactoring direction you're suggesting, and separating the address spaces does make the driver easier to reason about. That said, the UAF isn't limited to the buffer free path. fastrpc_context_free() also calls fastrpc_map_put(), which reaches fastrpc_free_map() and still dereferences fl, so addressing only the buffer side doesn't fully resolve the lifetime issue. So the reference counting is still needed. I will update the scenario in the commit message as well. If you think it makes sense, I can take the address-space refactoring as a separate follow-up patch and keep this change focused on fixing the lifetime issue. Sure, will remove. fl is short for fastrpc file. Regards, Anandu
{ "author": "Anandu Krishnan E <anandu.e@oss.qualcomm.com>", "date": "Fri, 27 Feb 2026 19:52:00 +0530", "is_openbsd": false, "thread_id": "07d585fe-dfd1-41c9-9c58-b2f9893e572e@oss.qualcomm.com.mbox.gz" }
lkml_critique
lkml
File-backed large folios were initially implemented with dependencies on Transparent Huge Pages (THP) infrastructure. As large folio adoption expanded across the kernel, CONFIG_TRANSPARENT_HUGEPAGE has become an overloaded configuration option, sometimes used as a proxy for large folio support [1][2][3]. This series is a part of the LPC talk[4], and I am sending the RFC series to start the discussion. There are multiple solutions to this problem, and this is one of them, with minimal changes. I plan on discussing possible other solutions at the talk. Based on my investigation, the only feature large folios depend on is the THP splitting infrastructure. When a large folio has to be split, either during truncation or under memory pressure, THP's splitting infrastructure is used to split it into min-order folio chunks. In this approach, we restrict the maximum order of the large folio to the minimum order to ensure we never use the splitting infrastructure when THP is disabled. I disabled THP, and ran xfstests on XFS with 16k, 32k and 64k blocksizes, and the changes seem to survive the tests without any issues. Looking forward to some productive discussion. P.S: Thanks to Zi, David and willy for all the ideas they provided to solve this problem. 
[1] https://lore.kernel.org/linux-mm/731d8b44-1a45-40bc-a274-8f39a7ae0f7f@lucifer.local/ [2] https://lore.kernel.org/all/aGfNKGBz9lhuK1AF@casper.infradead.org/ [3] https://lore.kernel.org/linux-ext4/20251110043226.GD2988753@mit.edu/ [4] https://lpc.events/event/19/contributions/2139/ Pankaj Raghav (3): filemap: set max order to be min order if THP is disabled huge_memory: skip warning if min order and folio order are same in split blkdev: remove CONFIG_TRANSPARENT_HUGEPAGES dependency for LBS devices include/linux/blkdev.h | 5 ----- include/linux/huge_mm.h | 40 ++++++++-------------------------------- include/linux/pagemap.h | 17 ++++++----------- mm/memory.c | 41 +++++++++++++++++++++++++++++++++++++++++ 4 files changed, 55 insertions(+), 48 deletions(-) base-commit: e4c4d9892021888be6d874ec1be307e80382f431 -- 2.50.1
null
null
null
[RFC v2 0/3] Decoupling large folios dependency on THP
Large folios in the page cache depend on the splitting infrastructure from THP. To remove the dependency between large folios and CONFIG_TRANSPARENT_HUGEPAGE, cap the max order to the min order if THP is disabled. This ensures the splitting code is not required when THP is disabled, thereby removing the dependency between large folios and THP. Signed-off-by: Pankaj Raghav <p.raghav@samsung.com> --- include/linux/pagemap.h | 17 ++++++----------- 1 file changed, 6 insertions(+), 11 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 09b581c1d878..1bb0d4432d4b 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -397,9 +397,7 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask) */ static inline size_t mapping_max_folio_size_supported(void) { - if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) - return 1U << (PAGE_SHIFT + MAX_PAGECACHE_ORDER); - return PAGE_SIZE; + return 1U << (PAGE_SHIFT + MAX_PAGECACHE_ORDER); } /* @@ -422,16 +420,17 @@ static inline void mapping_set_folio_order_range(struct address_space *mapping, unsigned int min, unsigned int max) { - if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) - return; - if (min > MAX_PAGECACHE_ORDER) min = MAX_PAGECACHE_ORDER; if (max > MAX_PAGECACHE_ORDER) max = MAX_PAGECACHE_ORDER; - if (max < min) + /* Large folios depend on THP infrastructure for splitting. + * If THP is disabled, we cap the max order to min order to avoid + * splitting the folios. 
+ */ + if ((max < min) || !IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) max = min; mapping->flags = (mapping->flags & ~AS_FOLIO_ORDER_MASK) | @@ -463,16 +462,12 @@ static inline void mapping_set_large_folios(struct address_space *mapping) static inline unsigned int mapping_max_folio_order(const struct address_space *mapping) { - if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) - return 0; return (mapping->flags & AS_FOLIO_ORDER_MAX_MASK) >> AS_FOLIO_ORDER_MAX; } static inline unsigned int mapping_min_folio_order(const struct address_space *mapping) { - if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) - return 0; return (mapping->flags & AS_FOLIO_ORDER_MIN_MASK) >> AS_FOLIO_ORDER_MIN; } -- 2.50.1
{ "author": "Pankaj Raghav <p.raghav@samsung.com>", "date": "Sat, 6 Dec 2025 04:08:56 +0100", "is_openbsd": false, "thread_id": "aaG2GkICML-St3B4@casper.infradead.org.mbox.gz" }
lkml_critique
lkml
File-backed large folios were initially implemented with dependencies on Transparent Huge Pages (THP) infrastructure. As large folio adoption expanded across the kernel, CONFIG_TRANSPARENT_HUGEPAGE has become an overloaded configuration option, sometimes used as a proxy for large folio support [1][2][3]. This series is a part of the LPC talk[4], and I am sending the RFC series to start the discussion. There are multiple solutions to this problem, and this is one of them, with minimal changes. I plan on discussing possible other solutions at the talk. Based on my investigation, the only feature large folios depend on is the THP splitting infrastructure. When a large folio has to be split, either during truncation or under memory pressure, THP's splitting infrastructure is used to split it into min-order folio chunks. In this approach, we restrict the maximum order of the large folio to the minimum order to ensure we never use the splitting infrastructure when THP is disabled. I disabled THP, and ran xfstests on XFS with 16k, 32k and 64k blocksizes, and the changes seem to survive the tests without any issues. Looking forward to some productive discussion. P.S: Thanks to Zi, David and willy for all the ideas they provided to solve this problem. 
[1] https://lore.kernel.org/linux-mm/731d8b44-1a45-40bc-a274-8f39a7ae0f7f@lucifer.local/ [2] https://lore.kernel.org/all/aGfNKGBz9lhuK1AF@casper.infradead.org/ [3] https://lore.kernel.org/linux-ext4/20251110043226.GD2988753@mit.edu/ [4] https://lpc.events/event/19/contributions/2139/ Pankaj Raghav (3): filemap: set max order to be min order if THP is disabled huge_memory: skip warning if min order and folio order are same in split blkdev: remove CONFIG_TRANSPARENT_HUGEPAGES dependency for LBS devices include/linux/blkdev.h | 5 ----- include/linux/huge_mm.h | 40 ++++++++-------------------------------- include/linux/pagemap.h | 17 ++++++----------- mm/memory.c | 41 +++++++++++++++++++++++++++++++++++++++++ 4 files changed, 55 insertions(+), 48 deletions(-) base-commit: e4c4d9892021888be6d874ec1be307e80382f431 -- 2.50.1
null
null
null
[RFC v2 0/3] Decoupling large folios dependency on THP
When THP is disabled, file-backed large folios' max order is capped to the min order to avoid using the splitting infrastructure. Currently, splitting calls will create a warning when called with THP disabled. But the splitting call does not have to do anything when the min order is the same as the folio order. So skip the warning in the folio split functions if the min order is the same as the folio order for file-backed folios. Due to circular dependency issues, move the definitions of the split functions for !CONFIG_TRANSPARENT_HUGEPAGE to mm/memory.c Signed-off-by: Pankaj Raghav <p.raghav@samsung.com> --- include/linux/huge_mm.h | 40 ++++++++-------------------------------- mm/memory.c | 41 +++++++++++++++++++++++++++++++++++++++++ 2 files changed, 49 insertions(+), 32 deletions(-) diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h index 21162493a0a0..71e309f2d26a 100644 --- a/include/linux/huge_mm.h +++ b/include/linux/huge_mm.h @@ -612,42 +612,18 @@ can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins) { return false; } -static inline int -split_huge_page_to_list_to_order(struct page *page, struct list_head *list, - unsigned int new_order) -{ - VM_WARN_ON_ONCE_PAGE(1, page); - return -EINVAL; -} -static inline int split_huge_page_to_order(struct page *page, unsigned int new_order) -{ - VM_WARN_ON_ONCE_PAGE(1, page); - return -EINVAL; -} +int split_huge_page_to_list_to_order(struct page *page, struct list_head *list, + unsigned int new_order); +int split_huge_page_to_order(struct page *page, unsigned int new_order); static inline int split_huge_page(struct page *page) { - VM_WARN_ON_ONCE_PAGE(1, page); - return -EINVAL; -} - -static inline unsigned int min_order_for_split(struct folio *folio) -{ - VM_WARN_ON_ONCE_FOLIO(1, folio); - return 0; -} - -static inline int split_folio_to_list(struct folio *folio, struct list_head *list) -{ - VM_WARN_ON_ONCE_FOLIO(1, folio); - return -EINVAL; + return split_huge_page_to_list_to_order(page, NULL, 0); } -static 
inline int try_folio_split_to_order(struct folio *folio, - struct page *page, unsigned int new_order) -{ - VM_WARN_ON_ONCE_FOLIO(1, folio); - return -EINVAL; -} +unsigned int min_order_for_split(struct folio *folio); +int split_folio_to_list(struct folio *folio, struct list_head *list); +int try_folio_split_to_order(struct folio *folio, + struct page *page, unsigned int new_order); static inline void deferred_split_folio(struct folio *folio, bool partially_mapped) {} static inline void reparent_deferred_split_queue(struct mem_cgroup *memcg) {} diff --git a/mm/memory.c b/mm/memory.c index 6675e87eb7dd..4eccdf72a46e 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -4020,6 +4020,47 @@ static bool __wp_can_reuse_large_anon_folio(struct folio *folio, { BUILD_BUG(); } + +int split_huge_page_to_list_to_order(struct page *page, struct list_head *list, + unsigned int new_order) +{ + struct folio *folio = page_folio(page); + unsigned int order = mapping_min_folio_order(folio->mapping); + + if (!folio_test_anon(folio) && order == folio_order(folio)) + return -EINVAL; + + VM_WARN_ON_ONCE_PAGE(1, page); + return -EINVAL; +} + +int split_huge_page_to_order(struct page *page, unsigned int new_order) +{ + return split_huge_page_to_list_to_order(page, NULL, new_order); +} + +int split_folio_to_list(struct folio *folio, struct list_head *list) +{ + unsigned int order = mapping_min_folio_order(folio->mapping); + + if (!folio_test_anon(folio) && order == folio_order(folio)) + return -EINVAL; + + VM_WARN_ON_ONCE_FOLIO(1, folio); + return -EINVAL; +} + +unsigned int min_order_for_split(struct folio *folio) +{ + return split_folio_to_list(folio, NULL); +} + + +int try_folio_split_to_order(struct folio *folio, struct page *page, + unsigned int new_order) +{ + return split_folio_to_list(folio, NULL); +} #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ static bool wp_can_reuse_anon_folio(struct folio *folio, -- 2.50.1
{ "author": "Pankaj Raghav <p.raghav@samsung.com>", "date": "Sat, 6 Dec 2025 04:08:57 +0100", "is_openbsd": false, "thread_id": "aaG2GkICML-St3B4@casper.infradead.org.mbox.gz" }
lkml_critique
lkml
File-backed large folios were initially implemented with dependencies on Transparent Huge Pages (THP) infrastructure. As large folio adoption expanded across the kernel, CONFIG_TRANSPARENT_HUGEPAGE has become an overloaded configuration option, sometimes used as a proxy for large folio support [1][2][3]. This series is a part of the LPC talk[4], and I am sending the RFC series to start the discussion. There are multiple solutions to this problem, and this is one of them, with minimal changes. I plan on discussing possible other solutions at the talk. Based on my investigation, the only feature large folios depend on is the THP splitting infrastructure. When a large folio has to be split, either during truncation or under memory pressure, THP's splitting infrastructure is used to split it into min-order folio chunks. In this approach, we restrict the maximum order of the large folio to the minimum order to ensure we never use the splitting infrastructure when THP is disabled. I disabled THP, and ran xfstests on XFS with 16k, 32k and 64k blocksizes, and the changes seem to survive the tests without any issues. Looking forward to some productive discussion. P.S: Thanks to Zi, David and willy for all the ideas they provided to solve this problem. 
[1] https://lore.kernel.org/linux-mm/731d8b44-1a45-40bc-a274-8f39a7ae0f7f@lucifer.local/ [2] https://lore.kernel.org/all/aGfNKGBz9lhuK1AF@casper.infradead.org/ [3] https://lore.kernel.org/linux-ext4/20251110043226.GD2988753@mit.edu/ [4] https://lpc.events/event/19/contributions/2139/ Pankaj Raghav (3): filemap: set max order to be min order if THP is disabled huge_memory: skip warning if min order and folio order are same in split blkdev: remove CONFIG_TRANSPARENT_HUGEPAGES dependency for LBS devices include/linux/blkdev.h | 5 ----- include/linux/huge_mm.h | 40 ++++++++-------------------------------- include/linux/pagemap.h | 17 ++++++----------- mm/memory.c | 41 +++++++++++++++++++++++++++++++++++++++++ 4 files changed, 55 insertions(+), 48 deletions(-) base-commit: e4c4d9892021888be6d874ec1be307e80382f431 -- 2.50.1
null
null
null
[RFC v2 0/3] Decoupling large folios dependency on THP
Now that the dependency between CONFIG_TRANSPARENT_HUGEPAGE and large folios is removed, enable LBS devices even when the THP config is disabled. Signed-off-by: Pankaj Raghav <p.raghav@samsung.com> --- include/linux/blkdev.h | 5 ----- 1 file changed, 5 deletions(-) diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index 70b671a9a7f7..b6379d73f546 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -270,16 +270,11 @@ static inline dev_t disk_devt(struct gendisk *disk) return MKDEV(disk->major, disk->first_minor); } -#ifdef CONFIG_TRANSPARENT_HUGEPAGE /* * We should strive for 1 << (PAGE_SHIFT + MAX_PAGECACHE_ORDER) * however we constrain this to what we can validate and test. */ #define BLK_MAX_BLOCK_SIZE SZ_64K -#else -#define BLK_MAX_BLOCK_SIZE PAGE_SIZE -#endif - /* blk_validate_limits() validates bsize, so drivers don't usually need to */ static inline int blk_validate_block_size(unsigned long bsize) -- 2.50.1
{ "author": "Pankaj Raghav <p.raghav@samsung.com>", "date": "Sat, 6 Dec 2025 04:08:58 +0100", "is_openbsd": false, "thread_id": "aaG2GkICML-St3B4@casper.infradead.org.mbox.gz" }
On 12/6/25 04:08, Pankaj Raghav wrote:

The description is actually misleading. It's not that you remove the dependency on THP for large folios _in general_ (CONFIG_THP is retained in this patch). Rather, you remove the dependency for large folios _for the block layer_. That should be made explicit in the description; otherwise the description and the patch don't match.

Cheers,

Hannes
--
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
{ "author": "Hannes Reinecke <hare@suse.de>", "date": "Tue, 9 Dec 2025 08:45:46 +0100", "is_openbsd": false, "thread_id": "aaG2GkICML-St3B4@casper.infradead.org.mbox.gz" }
On 5 Dec 2025, at 22:08, Pankaj Raghav wrote:

But are large folios really created? IIUC, in do_sync_mmap_readahead(), when THP is disabled, force_thp_readahead is never set to true and later ra->order is set to 0. Oh, page_cache_ra_order() later bumps new_order to mapping_min_folio_order(). So large folios are created there.

I wonder if core-mm should move mTHP code out of CONFIG_TRANSPARENT_HUGEPAGE, and mTHP might just work. Hmm, folio split might need to be moved out of mm/huge_memory.c in that case. khugepaged should work for mTHP without CONFIG_TRANSPARENT_HUGEPAGE as well. OK, for anon folios, the changes might be more involved.

Best Regards,
Yan, Zi
{ "author": "Zi Yan <ziy@nvidia.com>", "date": "Tue, 09 Dec 2025 11:03:23 -0500", "is_openbsd": false, "thread_id": "aaG2GkICML-St3B4@casper.infradead.org.mbox.gz" }
On 12/9/25 13:15, Hannes Reinecke wrote:

Hmm, that is not what I am doing. This has nothing to do with the block layer directly. I mentioned this in the cover letter but I can reiterate it again.

Large folios depended on the THP infrastructure when they were introduced. When we added LBS support to the block layer, we introduced an indirect dependency on CONFIG_THP. When we disabled CONFIG_THP and had a block device with a logical block size > page size, we ran into a panic. That was fixed here [1].

If this patch is upstreamed, then we can disable THP but still have an LBS drive attached without any issues. Baolin added another CONFIG_THP block in ext4 [2]. With this support, we don't need to sprinkle THP checks wherever file-backed large folios are used.

Happy to discuss this at LPC (if you are attending)!

[1] https://lore.kernel.org/all/20250704092134.289491-1-p.raghav@samsung.com/
[2] https://lwn.net/ml/all/20251121090654.631996-25-libaokun@huaweicloud.com/

--
Pankaj
{ "author": "Pankaj Raghav <kernel@pankajraghav.com>", "date": "Tue, 9 Dec 2025 22:03:40 +0530", "is_openbsd": false, "thread_id": "aaG2GkICML-St3B4@casper.infradead.org.mbox.gz" }
On 12/9/25 17:33, Pankaj Raghav wrote:

Yes, and no. That patch limited the maximum blocksize without THP to 4k, effectively disabling LBS. But this is what I meant: we do _not_ remove the dependency on THP for LBS, we just remove checks for the config option itself in the block layer code. The actual dependency on THP will remain, as LBS will only be supported if THP is enabled.

The very first presentation on the first day in the CXL track.

Yes :-) Let's discuss there; I would love to figure out whether we can remove the actual dependency on THP, too.

Cheers,

Hannes
--
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
{ "author": "Hannes Reinecke <hare@suse.de>", "date": "Wed, 10 Dec 2025 01:38:09 +0100", "is_openbsd": false, "thread_id": "aaG2GkICML-St3B4@casper.infradead.org.mbox.gz" }
On Tue, Dec 09, 2025 at 11:03:23AM -0500, Zi Yan wrote: I think this is the key question to be discussed at LPC. How much of the current THP code should we say "OK, this is large folio support and everybody needs it" and how much is "This is PMD (or mTHP) support; this architecture doesn't have it, we don't need to compile it in".
{ "author": "Matthew Wilcox <willy@infradead.org>", "date": "Wed, 10 Dec 2025 04:27:17 +0000", "is_openbsd": false, "thread_id": "aaG2GkICML-St3B4@casper.infradead.org.mbox.gz" }
On 9 Dec 2025, at 23:27, Matthew Wilcox wrote:

I am not going, so I would like to get a summary afterwards. :)

I agree with most of it, except the mTHP part. mTHP should be part of large folio support, since I see mTHP as the anon equivalent of a file-backed large folio. Both are a >0 order folio mapped by PTEs (ignoring to-be-implemented multi-PMD mapped large folios for now).

Best Regards,
Yan, Zi
{ "author": "Zi Yan <ziy@nvidia.com>", "date": "Wed, 10 Dec 2025 11:37:51 -0500", "is_openbsd": false, "thread_id": "aaG2GkICML-St3B4@casper.infradead.org.mbox.gz" }
On Wed, Dec 10, 2025 at 11:37:51AM -0500, Zi Yan wrote: You can join the fun at meet.lpc.events, or there's apparently a youtube stream. Maybe we disagree about what words mean ;-) When I said "mTHP" what I meant was "support for TLB entries which cover more than one page". I have no objection to supporting large folio allocation for anon memory because I think that's beneficial even if there's no hardware support for TLB entries that cover intermediate sizes between PMD and PTE.
{ "author": "Matthew Wilcox <willy@infradead.org>", "date": "Thu, 11 Dec 2025 07:37:57 +0000", "is_openbsd": false, "thread_id": "aaG2GkICML-St3B4@casper.infradead.org.mbox.gz" }
On Sat, Dec 06, 2025 at 04:08:55AM +0100, Pankaj Raghav wrote:

Here's an argument. The one remaining caller of add_to_page_cache_lru() is ramfs_nommu_expand_for_mapping(). Attached is a patch which eliminates it ... but it doesn't compile because folio_split() is undefined on nommu. So either we need to reimplement all the good stuff that folio_split() does for us, or we need to make folio_split() available on nommu.

 fs/ramfs/file-nommu.c | 53 ++++++++++++++++----------------------------------
 1 file changed, 17 insertions(+), 36 deletions(-)

diff --git a/fs/ramfs/file-nommu.c b/fs/ramfs/file-nommu.c
index 0f8e838ece07..dd789e161720 100644
--- a/fs/ramfs/file-nommu.c
+++ b/fs/ramfs/file-nommu.c
@@ -61,8 +61,8 @@ const struct inode_operations ramfs_file_inode_operations = {
  */
 int ramfs_nommu_expand_for_mapping(struct inode *inode, size_t newsize)
 {
-	unsigned long npages, xpages, loop;
-	struct page *pages;
+	unsigned long npages;
+	struct folio *folio;
 	unsigned order;
 	void *data;
 	int ret;
@@ -79,49 +79,30 @@ int ramfs_nommu_expand_for_mapping(struct inode *inode, size_t newsize)
 
 	i_size_write(inode, newsize);
 
-	/* allocate enough contiguous pages to be able to satisfy the
-	 * request */
-	pages = alloc_pages(gfp, order);
-	if (!pages)
+	folio = folio_alloc(gfp, order);
+	if (!folio)
 		return -ENOMEM;
 
-	/* split the high-order page into an array of single pages */
-	xpages = 1UL << order;
-	npages = (newsize + PAGE_SIZE - 1) >> PAGE_SHIFT;
-
-	split_page(pages, order);
-
-	/* trim off any pages we don't actually require */
-	for (loop = npages; loop < xpages; loop++)
-		__free_page(pages + loop);
+	ret = filemap_add_folio(inode->i_mapping, folio, 0, gfp);
+	if (ret < 0)
+		goto out;
 
 	/* clear the memory we allocated */
+	npages = (newsize + PAGE_SIZE - 1) >> PAGE_SHIFT;
 	newsize = PAGE_SIZE * npages;
-	data = page_address(pages);
+	data = folio_address(folio);
 	memset(data, 0, newsize);
 
-	/* attach all the pages to the inode's address space */
-	for (loop = 0; loop < npages; loop++) {
-		struct page *page = pages + loop;
-
-		ret = add_to_page_cache_lru(page, inode->i_mapping, loop,
-					    gfp);
-		if (ret < 0)
-			goto add_error;
-
-		/* prevent the page from being discarded on memory pressure */
-		SetPageDirty(page);
-		SetPageUptodate(page);
-
-		unlock_page(page);
-		put_page(page);
-	}
+	folio_mark_dirty(folio);
+	folio_mark_uptodate(folio);
 
-	return 0;
+	/* trim off any pages we don't actually require */
+	if (!is_power_of_2(npages))
+		folio_split(folio, 0, folio_page(folio, npages), NULL);
+	folio_unlock(folio);
 
-add_error:
-	while (loop < npages)
-		__free_page(pages + loop++);
+out:
+	folio_put(folio);
 
 	return ret;
 }
{ "author": "Matthew Wilcox <willy@infradead.org>", "date": "Fri, 27 Feb 2026 05:31:38 +0000", "is_openbsd": false, "thread_id": "aaG2GkICML-St3B4@casper.infradead.org.mbox.gz" }
On 2/27/26 06:31, Matthew Wilcox wrote:

I guess it would be rather trivial to just replace add_to_page_cache_lru() by filemap_add_folio() in the code below. In the current code base that should work just great, unless I am missing something important.

folio splitting usually involves unmapping pages, which is rather cumbersome on nommu ;) So we'd have to think about that and the implications. Could someone stumble over the large folio after it is already added to the pagecache, but before splitting it? I guess we'd need to hold the folio lock.

ramfs_nommu_expand_for_mapping() is all about allocating memory, not splitting something that might already be in use somewhere. So folio_split() on nommu is a bit weird in that context.

When it comes to allocating memory, I would assume that it would be better (and faster!) to:

a) allocate a frozen high-order page
b) create the (large) folios directly on chunks of the frozen page, and add them through filemap_add_folio(). We'd have a function that consumes a suitable page range and turns it into a folio (later allocates a memdesc).
c) return all unused frozen bits to the page allocator

--
Cheers,

David
{ "author": "\"David Hildenbrand (Arm)\" <david@kernel.org>", "date": "Fri, 27 Feb 2026 09:45:07 +0100", "is_openbsd": false, "thread_id": "aaG2GkICML-St3B4@casper.infradead.org.mbox.gz" }
On Fri, Feb 27, 2026 at 09:45:07AM +0100, David Hildenbrand (Arm) wrote:

In the Ottawa interpretation, that's true, but I'd prefer not to revisit this code when transitioning to the New York interpretation. This is the NOMMU code after all, and the less time we spend on it, the better.

Depending on your point of view, either everything is mapped on nommu, or nothing is mapped ;-) In any case, the folio is freshly allocated and locked, so there's no chance anybody has mapped it yet.

Well, it is, but it's also exactly what we need to do -- it frees the folios which are now entirely beyond i_size. And it's code that's also used on MMU systems, and the more code that's shared, the better.

Right, we could do that. But that's more code, and special code, in the nommu codebase.
{ "author": "Matthew Wilcox <willy@infradead.org>", "date": "Fri, 27 Feb 2026 15:19:54 +0000", "is_openbsd": false, "thread_id": "aaG2GkICML-St3B4@casper.infradead.org.mbox.gz" }
lkml_critique
lkml
The DMA controller can be used to transfer data to and from the SPI controller without involving the CPU for each word of a SPI transfer. Add support for DMA mode, and do some other cleanups while touching the same code. The dts changes in this series depend on the DMA series [1]. [1]: https://lore.kernel.org/lkml/20260105114445.878262-1-cosmin-gabriel.tanislav.xa@renesas.com/#t V3: * impose proper maxItems for each device * impose maxItems for dmas property V2: * drop patches picked up by Mark * add new dt-bindings patch to allow multiple DMAs * wire up all DMA controllers for every SPI controller Cosmin Tanislav (3): dt-bindings: spi: renesas,rzv2h-rspi: allow multiple DMAs arm64: dts: renesas: r9a09g077: wire up DMA support for SPI arm64: dts: renesas: r9a09g087: wire up DMA support for SPI .../bindings/spi/renesas,rzv2h-rspi.yaml | 16 +++++++++++++--- arch/arm64/boot/dts/renesas/r9a09g077.dtsi | 16 ++++++++++++++++ arch/arm64/boot/dts/renesas/r9a09g087.dtsi | 16 ++++++++++++++++ 3 files changed, 45 insertions(+), 3 deletions(-) -- 2.52.0
null
null
null
[PATCH v3 0/3] Add DMA support for RZ/T2H RSPI
All supported SoCs have multiple DMA controllers that can be used with the RSPI peripheral. The current bindings only allow a single pair of RX and TX DMAs. The DMA core allows specifying multiple DMAs with the same name, and it will pick the first available one. There is an exception in the base dt-schema rules specifically for allowing this behavior (dtschema/schemas/dma/dma.yaml). dma-names: anyOf: - uniqueItems: true - items: # Hack around Renesas bindings which repeat entries to support # multiple possible DMA providers enum: [rx, tx] Allow multiple DMAs to have the same name and only restrict the possible names of the DMA channels, not their count. For RZ/T2H and RZ/N2H SoCs, limit the number of DMA channels to 6, as they have 3 DMA controllers. For RZ/V2H and RZ/V2N SoCs, limit the number of DMA channels to 10, as they have 5 DMA controllers. Signed-off-by: Cosmin Tanislav <cosmin-gabriel.tanislav.xa@renesas.com> --- V3: * impose proper maxItems for each device * impose maxItems for dmas property V2: * new patch .../bindings/spi/renesas,rzv2h-rspi.yaml | 16 +++++++++++++--- 1 file changed, 13 insertions(+), 3 deletions(-) diff --git a/Documentation/devicetree/bindings/spi/renesas,rzv2h-rspi.yaml b/Documentation/devicetree/bindings/spi/renesas,rzv2h-rspi.yaml index a588b112e11e..cf8b733b766d 100644 --- a/Documentation/devicetree/bindings/spi/renesas,rzv2h-rspi.yaml +++ b/Documentation/devicetree/bindings/spi/renesas,rzv2h-rspi.yaml @@ -58,12 +58,16 @@ properties: - const: tresetn dmas: - maxItems: 2 + minItems: 2 + maxItems: 10 dma-names: + minItems: 2 + maxItems: 10 items: - - const: rx - - const: tx + enum: + - rx + - tx power-domains: maxItems: 1 @@ -121,6 +125,12 @@ allOf: resets: false reset-names: false + dmas: + maxItems: 6 + + dma-names: + maxItems: 6 + unevaluatedProperties: false examples: -- 2.52.0
{ "author": "Cosmin Tanislav <cosmin-gabriel.tanislav.xa@renesas.com>", "date": "Wed, 28 Jan 2026 23:51:30 +0200", "is_openbsd": false, "thread_id": "20260128215132.1353381-1-cosmin-gabriel.tanislav.xa@renesas.com.mbox.gz" }
lkml_critique
lkml
The DMA controller can be used to transfer data to and from the SPI controller without involving the CPU for each word of a SPI transfer. Add support for DMA mode, and do some other cleanups while touching the same code. The dts changes in this series depend on the DMA series [1]. [1]: https://lore.kernel.org/lkml/20260105114445.878262-1-cosmin-gabriel.tanislav.xa@renesas.com/#t V3: * impose proper maxItems for each device * impose maxItems for dmas property V2: * drop patches picked up by Mark * add new dt-bindings patch to allow multiple DMAs * wire up all DMA controllers for every SPI controller Cosmin Tanislav (3): dt-bindings: spi: renesas,rzv2h-rspi: allow multiple DMAs arm64: dts: renesas: r9a09g077: wire up DMA support for SPI arm64: dts: renesas: r9a09g087: wire up DMA support for SPI .../bindings/spi/renesas,rzv2h-rspi.yaml | 16 +++++++++++++--- arch/arm64/boot/dts/renesas/r9a09g077.dtsi | 16 ++++++++++++++++ arch/arm64/boot/dts/renesas/r9a09g087.dtsi | 16 ++++++++++++++++ 3 files changed, 45 insertions(+), 3 deletions(-) -- 2.52.0
null
null
null
[PATCH v3 0/3] Add DMA support for RZ/T2H RSPI
RZ/N2H (R9A09G087) has three DMA controllers that can be used by peripherals like SPI to offload data transfers from the CPU. Wire up the DMA channels for the SPI peripherals. Signed-off-by: Cosmin Tanislav <cosmin-gabriel.tanislav.xa@renesas.com> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> --- V3: * no changes V2: * wire up all DMA controllers arch/arm64/boot/dts/renesas/r9a09g087.dtsi | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) diff --git a/arch/arm64/boot/dts/renesas/r9a09g087.dtsi b/arch/arm64/boot/dts/renesas/r9a09g087.dtsi index 4a1339561332..7d1c669ad262 100644 --- a/arch/arm64/boot/dts/renesas/r9a09g087.dtsi +++ b/arch/arm64/boot/dts/renesas/r9a09g087.dtsi @@ -200,6 +200,10 @@ rspi0: spi@80007000 { clocks = <&cpg CPG_CORE R9A09G087_CLK_PCLKM>, <&cpg CPG_MOD 104>; clock-names = "pclk", "pclkspi"; + dmas = <&dmac0 0x267a>, <&dmac0 0x267b>, + <&dmac1 0x267a>, <&dmac1 0x267b>, + <&dmac2 0x267a>, <&dmac2 0x267b>; + dma-names = "rx", "tx", "rx", "tx", "rx", "tx"; power-domains = <&cpg>; #address-cells = <1>; #size-cells = <0>; @@ -218,6 +222,10 @@ rspi1: spi@80007400 { clocks = <&cpg CPG_CORE R9A09G087_CLK_PCLKM>, <&cpg CPG_MOD 105>; clock-names = "pclk", "pclkspi"; + dmas = <&dmac0 0x267f>, <&dmac0 0x2680>, + <&dmac1 0x267f>, <&dmac1 0x2680>, + <&dmac2 0x267f>, <&dmac2 0x2680>; + dma-names = "rx", "tx", "rx", "tx", "rx", "tx"; power-domains = <&cpg>; #address-cells = <1>; #size-cells = <0>; @@ -236,6 +244,10 @@ rspi2: spi@80007800 { clocks = <&cpg CPG_CORE R9A09G087_CLK_PCLKM>, <&cpg CPG_MOD 106>; clock-names = "pclk", "pclkspi"; + dmas = <&dmac0 0x2684>, <&dmac0 0x2685>, + <&dmac1 0x2684>, <&dmac1 0x2685>, + <&dmac2 0x2684>, <&dmac2 0x2685>; + dma-names = "rx", "tx", "rx", "tx", "rx", "tx"; power-domains = <&cpg>; #address-cells = <1>; #size-cells = <0>; @@ -254,6 +266,10 @@ rspi3: spi@81007000 { clocks = <&cpg CPG_CORE R9A09G087_CLK_PCLKM>, <&cpg CPG_MOD 602>; clock-names = "pclk", "pclkspi"; + dmas = <&dmac0 0x2689>, <&dmac0 0x268a>, 
+ <&dmac1 0x2689>, <&dmac1 0x268a>, + <&dmac2 0x2689>, <&dmac2 0x268a>; + dma-names = "rx", "tx", "rx", "tx", "rx", "tx"; power-domains = <&cpg>; #address-cells = <1>; #size-cells = <0>; -- 2.52.0
{ "author": "Cosmin Tanislav <cosmin-gabriel.tanislav.xa@renesas.com>", "date": "Wed, 28 Jan 2026 23:51:32 +0200", "is_openbsd": false, "thread_id": "20260128215132.1353381-1-cosmin-gabriel.tanislav.xa@renesas.com.mbox.gz" }
lkml_critique
lkml
The DMA controller can be used to transfer data to and from the SPI controller without involving the CPU for each word of a SPI transfer. Add support for DMA mode, and do some other cleanups while touching the same code. The dts changes in this series depend on the DMA series [1]. [1]: https://lore.kernel.org/lkml/20260105114445.878262-1-cosmin-gabriel.tanislav.xa@renesas.com/#t V3: * impose proper maxItems for each device * impose maxItems for dmas property V2: * drop patches picked up by Mark * add new dt-bindings patch to allow multiple DMAs * wire up all DMA controllers for every SPI controller Cosmin Tanislav (3): dt-bindings: spi: renesas,rzv2h-rspi: allow multiple DMAs arm64: dts: renesas: r9a09g077: wire up DMA support for SPI arm64: dts: renesas: r9a09g087: wire up DMA support for SPI .../bindings/spi/renesas,rzv2h-rspi.yaml | 16 +++++++++++++--- arch/arm64/boot/dts/renesas/r9a09g077.dtsi | 16 ++++++++++++++++ arch/arm64/boot/dts/renesas/r9a09g087.dtsi | 16 ++++++++++++++++ 3 files changed, 45 insertions(+), 3 deletions(-) -- 2.52.0
null
null
null
[PATCH v3 0/3] Add DMA support for RZ/T2H RSPI
RZ/T2H (R9A09G077) has three DMA controllers that can be used by peripherals like SPI to offload data transfers from the CPU. Wire up the DMA channels for the SPI peripherals. Signed-off-by: Cosmin Tanislav <cosmin-gabriel.tanislav.xa@renesas.com> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> --- V3: * no changes V2: * wire up all DMA controllers arch/arm64/boot/dts/renesas/r9a09g077.dtsi | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) diff --git a/arch/arm64/boot/dts/renesas/r9a09g077.dtsi b/arch/arm64/boot/dts/renesas/r9a09g077.dtsi index 14d7fb6f8952..0e44b01a56c7 100644 --- a/arch/arm64/boot/dts/renesas/r9a09g077.dtsi +++ b/arch/arm64/boot/dts/renesas/r9a09g077.dtsi @@ -200,6 +200,10 @@ rspi0: spi@80007000 { clocks = <&cpg CPG_CORE R9A09G077_CLK_PCLKM>, <&cpg CPG_MOD 104>; clock-names = "pclk", "pclkspi"; + dmas = <&dmac0 0x267a>, <&dmac0 0x267b>, + <&dmac1 0x267a>, <&dmac1 0x267b>, + <&dmac2 0x267a>, <&dmac2 0x267b>; + dma-names = "rx", "tx", "rx", "tx", "rx", "tx"; power-domains = <&cpg>; #address-cells = <1>; #size-cells = <0>; @@ -218,6 +222,10 @@ rspi1: spi@80007400 { clocks = <&cpg CPG_CORE R9A09G077_CLK_PCLKM>, <&cpg CPG_MOD 105>; clock-names = "pclk", "pclkspi"; + dmas = <&dmac0 0x267f>, <&dmac0 0x2680>, + <&dmac1 0x267f>, <&dmac1 0x2680>, + <&dmac2 0x267f>, <&dmac2 0x2680>; + dma-names = "rx", "tx", "rx", "tx", "rx", "tx"; power-domains = <&cpg>; #address-cells = <1>; #size-cells = <0>; @@ -236,6 +244,10 @@ rspi2: spi@80007800 { clocks = <&cpg CPG_CORE R9A09G077_CLK_PCLKM>, <&cpg CPG_MOD 106>; clock-names = "pclk", "pclkspi"; + dmas = <&dmac0 0x2684>, <&dmac0 0x2685>, + <&dmac1 0x2684>, <&dmac1 0x2685>, + <&dmac2 0x2684>, <&dmac2 0x2685>; + dma-names = "rx", "tx", "rx", "tx", "rx", "tx"; power-domains = <&cpg>; #address-cells = <1>; #size-cells = <0>; @@ -254,6 +266,10 @@ rspi3: spi@81007000 { clocks = <&cpg CPG_CORE R9A09G077_CLK_PCLKM>, <&cpg CPG_MOD 602>; clock-names = "pclk", "pclkspi"; + dmas = <&dmac0 0x2689>, <&dmac0 0x268a>, 
+ <&dmac1 0x2689>, <&dmac1 0x268a>, + <&dmac2 0x2689>, <&dmac2 0x268a>; + dma-names = "rx", "tx", "rx", "tx", "rx", "tx"; power-domains = <&cpg>; #address-cells = <1>; #size-cells = <0>; -- 2.52.0
{ "author": "Cosmin Tanislav <cosmin-gabriel.tanislav.xa@renesas.com>", "date": "Wed, 28 Jan 2026 23:51:31 +0200", "is_openbsd": false, "thread_id": "20260128215132.1353381-1-cosmin-gabriel.tanislav.xa@renesas.com.mbox.gz" }
lkml_critique
lkml
The DMA controller can be used to transfer data to and from the SPI controller without involving the CPU for each word of a SPI transfer. Add support for DMA mode, and do some other cleanups while touching the same code. The dts changes in this series depend on the DMA series [1]. [1]: https://lore.kernel.org/lkml/20260105114445.878262-1-cosmin-gabriel.tanislav.xa@renesas.com/#t V3: * impose proper maxItems for each device * impose maxItems for dmas property V2: * drop patches picked up by Mark * add new dt-bindings patch to allow multiple DMAs * wire up all DMA controllers for every SPI controller Cosmin Tanislav (3): dt-bindings: spi: renesas,rzv2h-rspi: allow multiple DMAs arm64: dts: renesas: r9a09g077: wire up DMA support for SPI arm64: dts: renesas: r9a09g087: wire up DMA support for SPI .../bindings/spi/renesas,rzv2h-rspi.yaml | 16 +++++++++++++--- arch/arm64/boot/dts/renesas/r9a09g077.dtsi | 16 ++++++++++++++++ arch/arm64/boot/dts/renesas/r9a09g087.dtsi | 16 ++++++++++++++++ 3 files changed, 45 insertions(+), 3 deletions(-) -- 2.52.0
null
null
null
[PATCH v3 0/3] Add DMA support for RZ/T2H RSPI
On Wed, Jan 28, 2026 at 11:51:30PM +0200, Cosmin Tanislav wrote: What's the rationale behind not setting minItems to 6 and to 10, respectively? Do any of the SPI controllers on these SoCs not have the ability to use all of the available DMA controllers?
{ "author": "Conor Dooley <conor@kernel.org>", "date": "Thu, 29 Jan 2026 17:44:59 +0000", "is_openbsd": false, "thread_id": "20260128215132.1353381-1-cosmin-gabriel.tanislav.xa@renesas.com.mbox.gz" }
lkml_critique
lkml
The DMA controller can be used to transfer data to and from the SPI controller without involving the CPU for each word of a SPI transfer. Add support for DMA mode, and do some other cleanups while touching the same code. The dts changes in this series depend on the DMA series [1]. [1]: https://lore.kernel.org/lkml/20260105114445.878262-1-cosmin-gabriel.tanislav.xa@renesas.com/#t V3: * impose proper maxItems for each device * impose maxItems for dmas property V2: * drop patches picked up by Mark * add new dt-bindings patch to allow multiple DMAs * wire up all DMA controllers for every SPI controller Cosmin Tanislav (3): dt-bindings: spi: renesas,rzv2h-rspi: allow multiple DMAs arm64: dts: renesas: r9a09g077: wire up DMA support for SPI arm64: dts: renesas: r9a09g087: wire up DMA support for SPI .../bindings/spi/renesas,rzv2h-rspi.yaml | 16 +++++++++++++--- arch/arm64/boot/dts/renesas/r9a09g077.dtsi | 16 ++++++++++++++++ arch/arm64/boot/dts/renesas/r9a09g087.dtsi | 16 ++++++++++++++++ 3 files changed, 45 insertions(+), 3 deletions(-) -- 2.52.0
null
null
null
[PATCH v3 0/3] Add DMA support for RZ/T2H RSPI
I left minItems to 2 in case it is necessary to wire up SPI to only a subset of the DMA controllers, maybe for performance reasons in a board-specific dts? I know that dts is only supposed to describe the hardware itself, but for now this would be the only way to pre-select which DMA controller is used for a specific IP. Let me know your thoughts.
{ "author": "Cosmin-Gabriel Tanislav <cosmin-gabriel.tanislav.xa@renesas.com>", "date": "Thu, 29 Jan 2026 17:55:21 +0000", "is_openbsd": false, "thread_id": "20260128215132.1353381-1-cosmin-gabriel.tanislav.xa@renesas.com.mbox.gz" }
lkml_critique
lkml
The DMA controller can be used to transfer data to and from the SPI controller without involving the CPU for each word of a SPI transfer. Add support for DMA mode, and do some other cleanups while touching the same code. The dts changes in this series depend on the DMA series [1]. [1]: https://lore.kernel.org/lkml/20260105114445.878262-1-cosmin-gabriel.tanislav.xa@renesas.com/#t V3: * impose proper maxItems for each device * impose maxItems for dmas property V2: * drop patches picked up by Mark * add new dt-bindings patch to allow multiple DMAs * wire up all DMA controllers for every SPI controller Cosmin Tanislav (3): dt-bindings: spi: renesas,rzv2h-rspi: allow multiple DMAs arm64: dts: renesas: r9a09g077: wire up DMA support for SPI arm64: dts: renesas: r9a09g087: wire up DMA support for SPI .../bindings/spi/renesas,rzv2h-rspi.yaml | 16 +++++++++++++--- arch/arm64/boot/dts/renesas/r9a09g077.dtsi | 16 ++++++++++++++++ arch/arm64/boot/dts/renesas/r9a09g087.dtsi | 16 ++++++++++++++++ 3 files changed, 45 insertions(+), 3 deletions(-) -- 2.52.0
null
null
null
[PATCH v3 0/3] Add DMA support for RZ/T2H RSPI
On Thu, Jan 29, 2026 at 05:55:21PM +0000, Cosmin-Gabriel Tanislav wrote: Yeah, I can buy that argument. Acked-by: Conor Dooley <conor.dooley@microchip.com> pw-bot: not-applicable
{ "author": "Conor Dooley <conor@kernel.org>", "date": "Thu, 29 Jan 2026 18:03:37 +0000", "is_openbsd": false, "thread_id": "20260128215132.1353381-1-cosmin-gabriel.tanislav.xa@renesas.com.mbox.gz" }
lkml_critique
lkml
The DMA controller can be used to transfer data to and from the SPI controller without involving the CPU for each word of a SPI transfer. Add support for DMA mode, and do some other cleanups while touching the same code. The dts changes in this series depend on the DMA series [1]. [1]: https://lore.kernel.org/lkml/20260105114445.878262-1-cosmin-gabriel.tanislav.xa@renesas.com/#t V3: * impose proper maxItems for each device * impose maxItems for dmas property V2: * drop patches picked up by Mark * add new dt-bindings patch to allow multiple DMAs * wire up all DMA controllers for every SPI controller Cosmin Tanislav (3): dt-bindings: spi: renesas,rzv2h-rspi: allow multiple DMAs arm64: dts: renesas: r9a09g077: wire up DMA support for SPI arm64: dts: renesas: r9a09g087: wire up DMA support for SPI .../bindings/spi/renesas,rzv2h-rspi.yaml | 16 +++++++++++++--- arch/arm64/boot/dts/renesas/r9a09g077.dtsi | 16 ++++++++++++++++ arch/arm64/boot/dts/renesas/r9a09g087.dtsi | 16 ++++++++++++++++ 3 files changed, 45 insertions(+), 3 deletions(-) -- 2.52.0
null
null
null
[PATCH v3 0/3] Add DMA support for RZ/T2H RSPI
On 28/01/2026 22:51, Cosmin Tanislav wrote: As pointed out by Renesas, this is not correct or finished. I don't understand why Renesas people don't review THEIR own code instead, but send a patch correcting other un-merged patch. Really, start working on each other submissions. NAK Best regards, Krzysztof
{ "author": "Krzysztof Kozlowski <krzk@kernel.org>", "date": "Wed, 18 Feb 2026 08:49:48 +0100", "is_openbsd": false, "thread_id": "20260128215132.1353381-1-cosmin-gabriel.tanislav.xa@renesas.com.mbox.gz" }
lkml_critique
lkml
The DMA controller can be used to transfer data to and from the SPI controller without involving the CPU for each word of a SPI transfer. Add support for DMA mode, and do some other cleanups while touching the same code. The dts changes in this series depend on the DMA series [1]. [1]: https://lore.kernel.org/lkml/20260105114445.878262-1-cosmin-gabriel.tanislav.xa@renesas.com/#t V3: * impose proper maxItems for each device * impose maxItems for dmas property V2: * drop patches picked up by Mark * add new dt-bindings patch to allow multiple DMAs * wire up all DMA controllers for every SPI controller Cosmin Tanislav (3): dt-bindings: spi: renesas,rzv2h-rspi: allow multiple DMAs arm64: dts: renesas: r9a09g077: wire up DMA support for SPI arm64: dts: renesas: r9a09g087: wire up DMA support for SPI .../bindings/spi/renesas,rzv2h-rspi.yaml | 16 +++++++++++++--- arch/arm64/boot/dts/renesas/r9a09g077.dtsi | 16 ++++++++++++++++ arch/arm64/boot/dts/renesas/r9a09g087.dtsi | 16 ++++++++++++++++ 3 files changed, 45 insertions(+), 3 deletions(-) -- 2.52.0
null
null
null
[PATCH v3 0/3] Add DMA support for RZ/T2H RSPI
On Wed, 28 Jan 2026 23:51:29 +0200, Cosmin Tanislav wrote: Applied to https://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi.git for-next Thanks! [1/3] dt-bindings: spi: renesas,rzv2h-rspi: allow multiple DMAs commit: 4d28f38f64ef69ab27839069ef3346c3c878d137 All being well this means that it will be integrated into the linux-next tree (usually sometime in the next 24 hours) and sent to Linus during the next merge window (or sooner if it is a bug fix), however if problems are discovered then the patch may be dropped or reverted. You may get further e-mails resulting from automated or manual testing and review of the tree, please engage with people reporting problems and send followup patches addressing any issues that are reported if needed. If any updates are required or you are submitting further changes they should be sent as incremental updates against current git, existing patches will not be replaced. Please add any relevant lists and maintainers to the CCs when replying to this mail. Thanks, Mark
{ "author": "Mark Brown <broonie@kernel.org>", "date": "Wed, 25 Feb 2026 19:07:41 +0000", "is_openbsd": false, "thread_id": "20260128215132.1353381-1-cosmin-gabriel.tanislav.xa@renesas.com.mbox.gz" }
lkml_critique
lkml
The DMA controller can be used to transfer data to and from the SPI controller without involving the CPU for each word of a SPI transfer. Add support for DMA mode, and do some other cleanups while touching the same code. The dts changes in this series depend on the DMA series [1]. [1]: https://lore.kernel.org/lkml/20260105114445.878262-1-cosmin-gabriel.tanislav.xa@renesas.com/#t V3: * impose proper maxItems for each device * impose maxItems for dmas property V2: * drop patches picked up by Mark * add new dt-bindings patch to allow multiple DMAs * wire up all DMA controllers for every SPI controller Cosmin Tanislav (3): dt-bindings: spi: renesas,rzv2h-rspi: allow multiple DMAs arm64: dts: renesas: r9a09g077: wire up DMA support for SPI arm64: dts: renesas: r9a09g087: wire up DMA support for SPI .../bindings/spi/renesas,rzv2h-rspi.yaml | 16 +++++++++++++--- arch/arm64/boot/dts/renesas/r9a09g077.dtsi | 16 ++++++++++++++++ arch/arm64/boot/dts/renesas/r9a09g087.dtsi | 16 ++++++++++++++++ 3 files changed, 45 insertions(+), 3 deletions(-) -- 2.52.0
null
null
null
[PATCH v3 0/3] Add DMA support for RZ/T2H RSPI
On Wed, 28 Jan 2026 at 22:52, Cosmin Tanislav <cosmin-gabriel.tanislav.xa@renesas.com> wrote: Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> i.e. will queue in renesas-devel for v7.1. Gr{oetje,eeting}s, Geert -- Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org In personal conversations with technical people, I call myself a hacker. But when I'm talking to journalists I just say "programmer" or something like that. -- Linus Torvalds
{ "author": "Geert Uytterhoeven <geert@linux-m68k.org>", "date": "Fri, 27 Feb 2026 15:54:34 +0100", "is_openbsd": false, "thread_id": "20260128215132.1353381-1-cosmin-gabriel.tanislav.xa@renesas.com.mbox.gz" }
lkml_critique
lkml
The DMA controller can be used to transfer data to and from the SPI controller without involving the CPU for each word of a SPI transfer. Add support for DMA mode, and do some other cleanups while touching the same code. The dts changes in this series depend on the DMA series [1]. [1]: https://lore.kernel.org/lkml/20260105114445.878262-1-cosmin-gabriel.tanislav.xa@renesas.com/#t V3: * impose proper maxItems for each device * impose maxItems for dmas property V2: * drop patches picked up by Mark * add new dt-bindings patch to allow multiple DMAs * wire up all DMA controllers for every SPI controller Cosmin Tanislav (3): dt-bindings: spi: renesas,rzv2h-rspi: allow multiple DMAs arm64: dts: renesas: r9a09g077: wire up DMA support for SPI arm64: dts: renesas: r9a09g087: wire up DMA support for SPI .../bindings/spi/renesas,rzv2h-rspi.yaml | 16 +++++++++++++--- arch/arm64/boot/dts/renesas/r9a09g077.dtsi | 16 ++++++++++++++++ arch/arm64/boot/dts/renesas/r9a09g087.dtsi | 16 ++++++++++++++++ 3 files changed, 45 insertions(+), 3 deletions(-) -- 2.52.0
null
null
null
[PATCH v3 0/3] Add DMA support for RZ/T2H RSPI
On Wed, 28 Jan 2026 at 22:52, Cosmin Tanislav <cosmin-gabriel.tanislav.xa@renesas.com> wrote: Thanks, will queue in renesas-devel for v7.1. Gr{oetje,eeting}s, Geert -- Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org In personal conversations with technical people, I call myself a hacker. But when I'm talking to journalists I just say "programmer" or something like that. -- Linus Torvalds
{ "author": "Geert Uytterhoeven <geert@linux-m68k.org>", "date": "Fri, 27 Feb 2026 15:55:39 +0100", "is_openbsd": false, "thread_id": "20260128215132.1353381-1-cosmin-gabriel.tanislav.xa@renesas.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotion (pghot) with the following goals: - Unify hot page detection from multiple sources like hint faults, page table scans and hardware hints (AMD IBS). - Decouple detection from migration. - Centralize promotion logic via a per-lower-tier-node kmigrated kernel thread. - Move promotion rate-limiting and related logic used by numa_balancing=2 (current NUMA balancing-based promotion) from the scheduler to pghot for broader reuse. Currently, multiple kernel subsystems detect page accesses independently. This patchset consolidates accesses from these mechanisms by providing: - A common API for reporting page accesses. - Shared infrastructure for tracking hotness at PFN granularity. - Per-lower-tier-node kernel threads for promoting pages. Here is a brief summary of how this subsystem works: - Tracks frequency and last access time. - In precision mode, the accessing NUMA node ID (NID) is also tracked for each recorded access. - These hotness parameters are maintained in a per-PFN hotness record within the existing mem_section data structure. - In default mode, one byte (u8) is used for the hotness record. 5 bits are used to store time, and a bucketing scheme is used to represent a total access time of up to 4s with HZ=1000. The default toptier NID (0) is used as the target for promotion, which can be changed via a debugfs tunable. - In precision mode, 4 bytes (u32) are used for each hotness record. 14 bits are used to store time, which can represent around 16s with HZ=1000. - Classifies pages as hot based on configurable thresholds. - Pages classified as hot are marked as ready for migration using the ready bit. Both modes use the MSB of the hotness record as the ready bit.
- Per-lower-tier-node kmigrated threads periodically scan the PFNs of lower-tier nodes, checking for the migration-ready bit to perform batched migrations. The interval between successive scans and the batching value are configurable via debugfs tunables. Memory overhead --------------- Default mode: 1 byte per lower-tier PFN. For 1TB of lower-tier memory this amounts to 256MB of overhead (assuming 4K pages). Precision mode: 4 bytes per lower-tier PFN. For 1TB of lower-tier memory this amounts to 1GB of overhead. Bit layout of hotness record ---------------------------- Default mode - Bits 0-1: Frequency (2 bits, 4 access samples) - Bits 2-6: Bucketed time (5 bits, up to 4s with HZ=1000) - Bit 7: Migration ready bit Precision mode - Bits 0-9: Target NID (10 bits) - Bits 10-12: Frequency (3 bits, 8 access samples) - Bits 13-26: Time (14 bits, up to 16s with HZ=1000) - Bits 27-30: Reserved - Bit 31: Migration ready bit Integrated sources ------------------ 1. IBS - Instruction Based Sampling, a hardware-based sampling mechanism present on AMD CPUs. 2. klruscand - PTE-A bit scanning built on MGLRU's walk helpers. 3. NUMA Balancing (Tiering mode) 4. folio_mark_accessed() - Page cache access tracking (unmapped page cache pages) Changes in v5 ============= - Significant reduction in memory overhead for storing per-PFN hotness data - Two modes of operation (default and precision mode). The code which is specific to each implementation is moved into its own file. - Many bug fixes, code cleanups and code reorganization. Results ======= TODO: Will post benchmark numbers as a reply to this patchset soon.
This v5 patchset applies on top of upstream commit 4941a17751c9 and can be fetched from: https://github.com/AMDESE/linux-mm/tree/bharata/pghot-rfcv5 v4: https://lore.kernel.org/linux-mm/20251206101423.5004-1-bharata@amd.com/ v3: https://lore.kernel.org/linux-mm/20251110052343.208768-1-bharata@amd.com/ v2: https://lore.kernel.org/linux-mm/20250910144653.212066-1-bharata@amd.com/ v1: https://lore.kernel.org/linux-mm/20250814134826.154003-1-bharata@amd.com/ v0: https://lore.kernel.org/linux-mm/20250306054532.221138-1-bharata@amd.com/ Bharata B Rao (7): mm: migrate: Allow misplaced migration without VMA mm: Hot page tracking and promotion mm: pghot: Precision mode for pghot mm: sched: move NUMA balancing tiering promotion to pghot x86: ibs: In-kernel IBS driver for memory access profiling x86: ibs: Enable IBS profiling for memory accesses mm: pghot: Add folio_mark_accessed() as hotness source Gregory Price (1): migrate: Add migrate_misplaced_folios_batch() Kinsey Ho (2): mm: mglru: generalize page table walk mm: klruscand: use mglru scanning for page promotion Documentation/admin-guide/mm/pghot.txt | 89 +++++ arch/x86/events/amd/ibs.c | 10 + arch/x86/include/asm/entry-common.h | 3 + arch/x86/include/asm/hardirq.h | 2 + arch/x86/include/asm/msr-index.h | 16 + arch/x86/mm/Makefile | 1 + arch/x86/mm/ibs.c | 349 +++++++++++++++++ include/linux/migrate.h | 6 + include/linux/mmzone.h | 26 ++ include/linux/pghot.h | 142 +++++++ include/linux/vm_event_item.h | 26 ++ kernel/sched/debug.c | 1 - kernel/sched/fair.c | 152 +------- mm/Kconfig | 46 +++ mm/Makefile | 7 + mm/huge_memory.c | 26 +- mm/internal.h | 4 + mm/klruscand.c | 110 ++++++ mm/memory.c | 31 +- mm/migrate.c | 41 +- mm/mm_init.c | 10 + mm/pghot-default.c | 73 ++++ mm/pghot-precise.c | 70 ++++ mm/pghot-tunables.c | 196 ++++++++++ mm/pghot.c | 505 +++++++++++++++++++++++++ mm/swap.c | 8 + mm/vmscan.c | 181 ++++++--- mm/vmstat.c | 26 ++ 28 files changed, 1917 insertions(+), 240 deletions(-) create mode 100644 
Documentation/admin-guide/mm/pghot.txt create mode 100644 arch/x86/mm/ibs.c create mode 100644 include/linux/pghot.h create mode 100644 mm/klruscand.c create mode 100644 mm/pghot-default.c create mode 100644 mm/pghot-precise.c create mode 100644 mm/pghot-tunables.c create mode 100644 mm/pghot.c -- 2.34.1
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
We want isolation of misplaced folios to work in contexts where VMA isn't available, typically when performing migrations from a kernel thread context. In order to prepare for that, allow migrate_misplaced_folio_prepare() to be called with a NULL VMA. When migrate_misplaced_folio_prepare() is called with non-NULL VMA, it will check if the folio is mapped shared and that requires holding PTL lock. This path isn't taken when the function is invoked with NULL VMA (migration outside of process context). Therefore, when VMA == NULL, migrate_misplaced_folio_prepare() does not require the caller to hold the PTL. Signed-off-by: Bharata B Rao <bharata@amd.com> --- mm/migrate.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/mm/migrate.c b/mm/migrate.c index 5169f9717f60..70f8f3ad4fd8 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -2652,7 +2652,8 @@ static struct folio *alloc_misplaced_dst_folio(struct folio *src, /* * Prepare for calling migrate_misplaced_folio() by isolating the folio if - * permitted. Must be called with the PTL still held. + * permitted. Must be called with the PTL still held if called with a non-NULL + * vma. */ int migrate_misplaced_folio_prepare(struct folio *folio, struct vm_area_struct *vma, int node) @@ -2669,7 +2670,7 @@ int migrate_misplaced_folio_prepare(struct folio *folio, * See folio_maybe_mapped_shared() on possible imprecision * when we cannot easily detect if a folio is shared. */ - if ((vma->vm_flags & VM_EXEC) && folio_maybe_mapped_shared(folio)) + if (vma && (vma->vm_flags & VM_EXEC) && folio_maybe_mapped_shared(folio)) return -EACCES; /* -- 2.34.1
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Thu, 29 Jan 2026 20:10:34 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotion (pghot) with the following goals: - Unify hot page detection from multiple sources like hint faults, page table scans and hardware hints (AMD IBS). - Decouple detection from migration. - Centralize promotion logic via a per-lower-tier-node kmigrated kernel thread. - Move promotion rate-limiting and related logic used by numa_balancing=2 (current NUMA balancing-based promotion) from the scheduler to pghot for broader reuse. Currently, multiple kernel subsystems detect page accesses independently. This patchset consolidates accesses from these mechanisms by providing: - A common API for reporting page accesses. - Shared infrastructure for tracking hotness at PFN granularity. - Per-lower-tier-node kernel threads for promoting pages. Here is a brief summary of how this subsystem works: - Tracks frequency and last access time. - In precision mode, the accessing NUMA node ID (NID) is also tracked for each recorded access. - These hotness parameters are maintained in a per-PFN hotness record within the existing mem_section data structure. - In default mode, one byte (u8) is used for the hotness record. 5 bits are used to store time, and a bucketing scheme is used to represent a total access time of up to 4s with HZ=1000. The default toptier NID (0) is used as the target for promotion, which can be changed via a debugfs tunable. - In precision mode, 4 bytes (u32) are used for each hotness record. 14 bits are used to store time, which can represent around 16s with HZ=1000. - Classifies pages as hot based on configurable thresholds. - Pages classified as hot are marked as ready for migration using the ready bit. Both modes use the MSB of the hotness record as the ready bit.
- Per-lower-tier-node kmigrated threads periodically scan the PFNs of
  lower-tier nodes, checking for the migration-ready bit to perform
  batched migrations. The interval between successive scans and the batch
  size are configurable via debugfs tunables.

Memory overhead
---------------
Default mode: 1 byte per lower-tier PFN. For 1TB of lower-tier memory this
amounts to 256MB of overhead (assuming 4K pages).
Precision mode: 4 bytes per lower-tier PFN. For 1TB of lower-tier memory
this amounts to 1GB of overhead.

Bit layout of hotness record
----------------------------
Default mode
- Bits 0-1: Frequency (2 bits, 4 access samples)
- Bits 2-6: Bucketed time (5 bits, up to 4s with HZ=1000)
- Bit 7: Migration ready bit

Precision mode
- Bits 0-9: Target NID (10 bits)
- Bits 10-12: Frequency (3 bits, 8 access samples)
- Bits 13-26: Time (14 bits, up to 16s with HZ=1000)
- Bits 27-30: Reserved
- Bit 31: Migration ready bit

Integrated sources
------------------
1. IBS - Instruction Based Sampling, a hardware-based sampling mechanism
   present on AMD CPUs.
2. klruscand - PTE A-bit scanning built on MGLRU's walk helpers.
3. NUMA Balancing (tiering mode)
4. folio_mark_accessed() - Page cache access tracking (unmapped page
   cache pages)

Changes in v5
=============
- Significant reduction in memory overhead for storing per-PFN hotness
  data.
- Two modes of operation (default and precision). The code specific to
  each implementation has been moved to its own file.
- Many bug fixes, code cleanups and code reorganization.

Results
=======
TODO: Will post benchmark numbers as a reply to this patchset soon.
This v5 patchset applies on top of upstream commit 4941a17751c9 and can be
fetched from:

https://github.com/AMDESE/linux-mm/tree/bharata/pghot-rfcv5

v4: https://lore.kernel.org/linux-mm/20251206101423.5004-1-bharata@amd.com/
v3: https://lore.kernel.org/linux-mm/20251110052343.208768-1-bharata@amd.com/
v2: https://lore.kernel.org/linux-mm/20250910144653.212066-1-bharata@amd.com/
v1: https://lore.kernel.org/linux-mm/20250814134826.154003-1-bharata@amd.com/
v0: https://lore.kernel.org/linux-mm/20250306054532.221138-1-bharata@amd.com/

Bharata B Rao (7):
  mm: migrate: Allow misplaced migration without VMA
  mm: Hot page tracking and promotion
  mm: pghot: Precision mode for pghot
  mm: sched: move NUMA balancing tiering promotion to pghot
  x86: ibs: In-kernel IBS driver for memory access profiling
  x86: ibs: Enable IBS profiling for memory accesses
  mm: pghot: Add folio_mark_accessed() as hotness source

Gregory Price (1):
  migrate: Add migrate_misplaced_folios_batch()

Kinsey Ho (2):
  mm: mglru: generalize page table walk
  mm: klruscand: use mglru scanning for page promotion

 Documentation/admin-guide/mm/pghot.txt |  89 +++++
 arch/x86/events/amd/ibs.c              |  10 +
 arch/x86/include/asm/entry-common.h    |   3 +
 arch/x86/include/asm/hardirq.h         |   2 +
 arch/x86/include/asm/msr-index.h       |  16 +
 arch/x86/mm/Makefile                   |   1 +
 arch/x86/mm/ibs.c                      | 349 +++++++++++++++++
 include/linux/migrate.h                |   6 +
 include/linux/mmzone.h                 |  26 ++
 include/linux/pghot.h                  | 142 +++++++
 include/linux/vm_event_item.h          |  26 ++
 kernel/sched/debug.c                   |   1 -
 kernel/sched/fair.c                    | 152 +-------
 mm/Kconfig                             |  46 +++
 mm/Makefile                            |   7 +
 mm/huge_memory.c                       |  26 +-
 mm/internal.h                          |   4 +
 mm/klruscand.c                         | 110 ++++++
 mm/memory.c                            |  31 +-
 mm/migrate.c                           |  41 +-
 mm/mm_init.c                           |  10 +
 mm/pghot-default.c                     |  73 ++++
 mm/pghot-precise.c                     |  70 ++++
 mm/pghot-tunables.c                    | 196 ++++++++++
 mm/pghot.c                             | 505 +++++++++++++++++++++++++
 mm/swap.c                              |   8 +
 mm/vmscan.c                            | 181 ++++++---
 mm/vmstat.c                            |  26 ++
 28 files changed, 1917 insertions(+), 240 deletions(-)
 create mode 100644 Documentation/admin-guide/mm/pghot.txt
 create mode 100644 arch/x86/mm/ibs.c
 create mode 100644 include/linux/pghot.h
 create mode 100644 mm/klruscand.c
 create mode 100644 mm/pghot-default.c
 create mode 100644 mm/pghot-precise.c
 create mode 100644 mm/pghot-tunables.c
 create mode 100644 mm/pghot.c

-- 
2.34.1
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
From: Gregory Price <gourry@gourry.net> Tiered memory systems often require migrating multiple folios at once. Currently, migrate_misplaced_folio() handles only one folio per call, which is inefficient for batch operations. This patch introduces migrate_misplaced_folios_batch(), a batch variant that leverages migrate_pages() internally for improved performance. The caller must isolate folios beforehand using migrate_misplaced_folio_prepare(). On return, the folio list will be empty regardless of success or failure. This function will be used by pghot kmigrated thread. Signed-off-by: Gregory Price <gourry@gourry.net> [Rewrote commit description] Signed-off-by: Bharata B Rao <bharata@amd.com> --- include/linux/migrate.h | 6 ++++++ mm/migrate.c | 36 ++++++++++++++++++++++++++++++++++++ 2 files changed, 42 insertions(+) diff --git a/include/linux/migrate.h b/include/linux/migrate.h index 26ca00c325d9..f28326b88592 100644 --- a/include/linux/migrate.h +++ b/include/linux/migrate.h @@ -103,6 +103,7 @@ static inline int set_movable_ops(const struct movable_operations *ops, enum pag int migrate_misplaced_folio_prepare(struct folio *folio, struct vm_area_struct *vma, int node); int migrate_misplaced_folio(struct folio *folio, int node); +int migrate_misplaced_folios_batch(struct list_head *folio_list, int node); #else static inline int migrate_misplaced_folio_prepare(struct folio *folio, struct vm_area_struct *vma, int node) @@ -113,6 +114,11 @@ static inline int migrate_misplaced_folio(struct folio *folio, int node) { return -EAGAIN; /* can't migrate now */ } +static inline int migrate_misplaced_folios_batch(struct list_head *folio_list, + int node) +{ + return -EAGAIN; /* can't migrate now */ +} #endif /* CONFIG_NUMA_BALANCING */ #ifdef CONFIG_MIGRATION diff --git a/mm/migrate.c b/mm/migrate.c index 70f8f3ad4fd8..4a3a9a4ff435 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -2747,5 +2747,41 @@ int migrate_misplaced_folio(struct folio *folio, int node) 
BUG_ON(!list_empty(&migratepages)); return nr_remaining ? -EAGAIN : 0; } + +/** + * migrate_misplaced_folios_batch() - Batch variant of migrate_misplaced_folio. + * Attempts to migrate a folio list to the specified destination. + * @folio_list: Isolated list of folios to be batch-migrated. + * @node: The NUMA node ID to where the folios should be migrated. + * + * Caller is expected to have isolated the folios by calling + * migrate_misplaced_folio_prepare(), which will result in an + * elevated reference count on the folio. + * + * This function will un-isolate the folios, drop the elevated reference + * and remove them from the list before returning. + * + * Return: 0 on success and -EAGAIN on failure or partial migration. + * On return, @folio_list will be empty regardless of success/failure. + */ +int migrate_misplaced_folios_batch(struct list_head *folio_list, int node) +{ + pg_data_t *pgdat = NODE_DATA(node); + unsigned int nr_succeeded = 0; + int nr_remaining; + + nr_remaining = migrate_pages(folio_list, alloc_misplaced_dst_folio, + NULL, node, MIGRATE_ASYNC, + MR_NUMA_MISPLACED, &nr_succeeded); + if (nr_remaining) + putback_movable_pages(folio_list); + + if (nr_succeeded) { + count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded); + mod_node_page_state(pgdat, PGPROMOTE_SUCCESS, nr_succeeded); + } + WARN_ON(!list_empty(folio_list)); + return nr_remaining ? -EAGAIN : 0; +} #endif /* CONFIG_NUMA_BALANCING */ #endif /* CONFIG_NUMA */ -- 2.34.1
From: Bharata B Rao <bharata@amd.com>
Date: Thu, 29 Jan 2026 20:10:35 +0530
This introduces a subsystem for collecting memory access information from
different sources. It maintains hotness information based on the access
history and time of access. Additionally, it provides per-lower-tier-node
kernel threads (named kmigrated) that periodically promote the pages that
are eligible for promotion.

Subsystems that generate hot-page access information can report it using
this API:

int pghot_record_access(unsigned long pfn, int nid, int src, unsigned long time)

@pfn: The PFN of the memory accessed
@nid: The accessing NUMA node ID
@src: The temperature source (subsystem) that generated the access info
@time: The access time in jiffies

Some temperature sources may not provide the NID from which the page was
accessed. This is true for sources that scan page tables for the PTE
Accessed bit. For such sources, a configurable/default toptier node is
used as the promotion target.

The hotness information is stored for every page of lower-tier memory in
a u8 variable (1 byte) that is part of the mem_section data structure.

kmigrated is a per-lower-tier-node kernel thread that migrates the folios
marked for migration in batches. Each kmigrated thread walks the PFN range
spanning its node and checks for potential migration candidates.

Tunables for enabling individual hotness sources and for setting
target_nid and the frequency threshold are provided in debugfs.
Signed-off-by: Bharata B Rao <bharata@amd.com> --- Documentation/admin-guide/mm/pghot.txt | 84 ++++++ include/linux/mmzone.h | 21 ++ include/linux/pghot.h | 94 +++++++ include/linux/vm_event_item.h | 6 + mm/Kconfig | 14 + mm/Makefile | 1 + mm/mm_init.c | 10 + mm/pghot-default.c | 73 +++++ mm/pghot-tunables.c | 189 +++++++++++++ mm/pghot.c | 370 +++++++++++++++++++++++++ mm/vmstat.c | 6 + 11 files changed, 868 insertions(+) create mode 100644 Documentation/admin-guide/mm/pghot.txt create mode 100644 include/linux/pghot.h create mode 100644 mm/pghot-default.c create mode 100644 mm/pghot-tunables.c create mode 100644 mm/pghot.c diff --git a/Documentation/admin-guide/mm/pghot.txt b/Documentation/admin-guide/mm/pghot.txt new file mode 100644 index 000000000000..01291b72e7ab --- /dev/null +++ b/Documentation/admin-guide/mm/pghot.txt @@ -0,0 +1,84 @@ +.. SPDX-License-Identifier: GPL-2.0 + +================================= +PGHOT: Hot Page Tracking Tunables +================================= + +Overview +======== +The PGHOT subsystem tracks frequently accessed pages in lower-tier memory and +promotes them to faster tiers. It uses per-PFN hotness metadata and asynchronous +migration via per-node kernel threads (kmigrated). + +This document describes tunables available via **debugfs** and **sysctl** for +PGHOT. + +Debugfs Interface +================= +Path: /sys/kernel/debug/pghot/ + +1. **enabled_sources** + - Bitmask to enable/disable hotness sources. + - Bits: + - 0: Hardware hints (value 0x1) + - 1: Page table scan (value 0x2) + - 2: Hint faults (value 0x4) + - Default: 0 (disabled) + - Example: + # echo 0x7 > /sys/kernel/debug/pghot/enabled_sources + Enables all sources. + +2. **target_nid** + - Toptier NUMA node ID to which hot pages should be promoted when source + does not provide nid. Used when hotness source can't provide accessing + NID or when the tracking mode is default. + - Default: 0 + - Example: + # echo 1 > /sys/kernel/debug/pghot/target_nid + +3. 
**freq_threshold** + - Minimum access frequency before a page is marked ready for promotion. + - Range: 1 to 3 + - Default: 2 + - Example: + # echo 3 > /sys/kernel/debug/pghot/freq_threshold + +4. **kmigrated_sleep_ms** + - Sleep interval (ms) for kmigrated thread between scans. + - Default: 100 + +5. **kmigrated_batch_nr** + - Maximum number of folios migrated in one batch. + - Default: 512 + +Sysctl Interface +================ +1. pghot_promote_freq_window_ms + +Path: /proc/sys/vm/pghot_promote_freq_window_ms + +- Controls the time window (in ms) for counting access frequency. A page is + considered hot only when **freq_threshold** number of accesses occur within + this time period. +- Default: 4000 (4 seconds) +- Example: + # sysctl vm.pghot_promote_freq_window_ms=3000 + +Vmstat Counters +=============== +The following vmstat counters provide statistics about the pghot subsystem. + +Path: /proc/vmstat + +1. **pghot_recorded_accesses** + - Number of total hot page accesses recorded by pghot. + +2. **pghot_recorded_hwhints** + - Number of recorded accesses reported by hwhints source. + +3. **pghot_recorded_pgtscans** + - Number of recorded accesses reported by PTE A-bit based source. + +4. **pghot_recorded_hintfaults** + - Number of recorded accesses reported by NUMA Balancing based + hotness source.
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index 75ef7c9f9307..22e08befb096 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -1064,6 +1064,7 @@ enum pgdat_flags { * many pages under writeback */ PGDAT_RECLAIM_LOCKED, /* prevents concurrent reclaim */ + PGDAT_KMIGRATED_ACTIVATE, /* activates kmigrated */ }; enum zone_flags { @@ -1518,6 +1519,10 @@ typedef struct pglist_data { #ifdef CONFIG_MEMORY_FAILURE struct memory_failure_stats mf_stats; #endif +#ifdef CONFIG_PGHOT + struct task_struct *kmigrated; + wait_queue_head_t kmigrated_wait; +#endif } pg_data_t; #define node_present_pages(nid) (NODE_DATA(nid)->node_present_pages) @@ -1916,12 +1921,28 @@ struct mem_section { unsigned long section_mem_map; struct mem_section_usage *usage; +#ifdef CONFIG_PGHOT + /* + * Per-PFN hotness data for this section. + * Array of phi_t (u8 in default mode). + * LSB is used as PGHOT_SECTION_HOT_BIT flag. + */ + void *hot_map; +#endif #ifdef CONFIG_PAGE_EXTENSION /* * If SPARSEMEM, pgdat doesn't have page_ext pointer. We use * section. (see page_ext.h about this.) */ struct page_ext *page_ext; +#endif + /* + * Padding to maintain consistent mem_section size when exactly + * one of PGHOT or PAGE_EXTENSION is enabled. This ensures + * optimal alignment regardless of configuration. 
+ */ +#if (defined(CONFIG_PGHOT) && !defined(CONFIG_PAGE_EXTENSION)) || \ + (!defined(CONFIG_PGHOT) && defined(CONFIG_PAGE_EXTENSION)) unsigned long pad; #endif /* diff --git a/include/linux/pghot.h b/include/linux/pghot.h new file mode 100644 index 000000000000..88e57aab697b --- /dev/null +++ b/include/linux/pghot.h @@ -0,0 +1,94 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _LINUX_PGHOT_H +#define _LINUX_PGHOT_H + +/* Page hotness temperature sources */ +enum pghot_src { + PGHOT_HW_HINTS, + PGHOT_PGTABLE_SCAN, + PGHOT_HINT_FAULT, +}; + +#ifdef CONFIG_PGHOT +#include <linux/static_key.h> + +extern unsigned int pghot_target_nid; +extern unsigned int pghot_src_enabled; +extern unsigned int pghot_freq_threshold; +extern unsigned int kmigrated_sleep_ms; +extern unsigned int kmigrated_batch_nr; +extern unsigned int sysctl_pghot_freq_window; + +void pghot_debug_init(void); + +DECLARE_STATIC_KEY_FALSE(pghot_src_hwhints); +DECLARE_STATIC_KEY_FALSE(pghot_src_pgtscans); +DECLARE_STATIC_KEY_FALSE(pghot_src_hintfaults); + +/* + * Bit positions to enable individual sources in pghot/enabled_sources + * of debugfs. + */ +enum pghot_src_enabled { + PGHOT_HWHINTS_BIT = 0, + PGHOT_PGTSCAN_BIT, + PGHOT_HINTFAULT_BIT, + PGHOT_MAX_BIT +}; + +#define PGHOT_HWHINTS_ENABLED BIT(PGHOT_HWHINTS_BIT) +#define PGHOT_PGTSCAN_ENABLED BIT(PGHOT_PGTSCAN_BIT) +#define PGHOT_HINTFAULT_ENABLED BIT(PGHOT_HINTFAULT_BIT) +#define PGHOT_SRC_ENABLED_MASK GENMASK(PGHOT_MAX_BIT - 1, 0) + +#define PGHOT_DEFAULT_FREQ_THRESHOLD 2 + +#define KMIGRATED_DEFAULT_SLEEP_MS 100 +#define KMIGRATED_DEFAULT_BATCH_NR 512 + +#define PGHOT_DEFAULT_NODE 0 + +#define PGHOT_DEFAULT_FREQ_WINDOW (4 * MSEC_PER_SEC) + +/* + * Bits 0-6 are used to store frequency and time. + * Bit 7 is used to indicate the page is ready for migration.
+ */ +#define PGHOT_MIGRATE_READY 7 + +#define PGHOT_FREQ_WIDTH 2 +/* Bucketed time is stored in 5 bits which can represent up to 4s with HZ=1000 */ +#define PGHOT_TIME_BUCKETS_WIDTH 7 +#define PGHOT_TIME_WIDTH 5 +#define PGHOT_NID_WIDTH 10 + +#define PGHOT_FREQ_SHIFT 0 +#define PGHOT_TIME_SHIFT (PGHOT_FREQ_SHIFT + PGHOT_FREQ_WIDTH) + +#define PGHOT_FREQ_MASK GENMASK(PGHOT_FREQ_WIDTH - 1, 0) +#define PGHOT_TIME_MASK GENMASK(PGHOT_TIME_WIDTH - 1, 0) +#define PGHOT_TIME_BUCKETS_MASK (PGHOT_TIME_MASK << PGHOT_TIME_BUCKETS_WIDTH) + +#define PGHOT_NID_MAX ((1 << PGHOT_NID_WIDTH) - 1) +#define PGHOT_FREQ_MAX ((1 << PGHOT_FREQ_WIDTH) - 1) +#define PGHOT_TIME_MAX ((1 << PGHOT_TIME_WIDTH) - 1) + +typedef u8 phi_t; + +#define PGHOT_RECORD_SIZE sizeof(phi_t) + +#define PGHOT_SECTION_HOT_BIT 0 +#define PGHOT_SECTION_HOT_MASK BIT(PGHOT_SECTION_HOT_BIT) + +unsigned long pghot_access_latency(unsigned long old_time, unsigned long time); +bool pghot_update_record(phi_t *phi, int nid, unsigned long now); +int pghot_get_record(phi_t *phi, int *nid, int *freq, unsigned long *time); + +int pghot_record_access(unsigned long pfn, int nid, int src, unsigned long now); +#else +static inline int pghot_record_access(unsigned long pfn, int nid, int src, unsigned long now) +{ + return 0; +} +#endif /* CONFIG_PGHOT */ +#endif /* _LINUX_PGHOT_H */ diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h index 92f80b4d69a6..5b8fd93b55fd 100644 --- a/include/linux/vm_event_item.h +++ b/include/linux/vm_event_item.h @@ -188,6 +188,12 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT, KSTACK_REST, #endif #endif /* CONFIG_DEBUG_STACK_USAGE */ +#ifdef CONFIG_PGHOT + PGHOT_RECORDED_ACCESSES, + PGHOT_RECORD_HWHINTS, + PGHOT_RECORD_PGTSCANS, + PGHOT_RECORD_HINTFAULTS, +#endif /* CONFIG_PGHOT */ NR_VM_EVENT_ITEMS }; diff --git a/mm/Kconfig b/mm/Kconfig index bd0ea5454af8..f4f0147faac5 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -1464,6 +1464,20 @@ config PT_RECLAIM config 
FIND_NORMAL_PAGE def_bool n +config PGHOT + bool "Hot page tracking and promotion" + def_bool n + depends on NUMA && MIGRATION && SPARSEMEM && MMU + help + A sub-system to track page accesses in lower tier memory and + maintain hot page information. Promotes hot pages from lower + tiers to top tier by using the memory access information provided + by various sources. Asynchronous promotion is done by per-node + kernel threads. + + This adds 1 byte of metadata overhead per page in lower-tier + memory nodes. + source "mm/damon/Kconfig" endmenu diff --git a/mm/Makefile b/mm/Makefile index 2d0570a16e5b..655a27f3a215 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -147,3 +147,4 @@ obj-$(CONFIG_SHRINKER_DEBUG) += shrinker_debug.o obj-$(CONFIG_EXECMEM) += execmem.o obj-$(CONFIG_TMPFS_QUOTA) += shmem_quota.o obj-$(CONFIG_PT_RECLAIM) += pt_reclaim.o +obj-$(CONFIG_PGHOT) += pghot.o pghot-tunables.o pghot-default.o diff --git a/mm/mm_init.c b/mm/mm_init.c index fc2a6f1e518f..64109feaa1c3 100644 --- a/mm/mm_init.c +++ b/mm/mm_init.c @@ -1401,6 +1401,15 @@ static void pgdat_init_kcompactd(struct pglist_data *pgdat) static void pgdat_init_kcompactd(struct pglist_data *pgdat) {} #endif +#ifdef CONFIG_PGHOT +static void pgdat_init_kmigrated(struct pglist_data *pgdat) +{ + init_waitqueue_head(&pgdat->kmigrated_wait); +} +#else +static inline void pgdat_init_kmigrated(struct pglist_data *pgdat) {} +#endif + static void __meminit pgdat_init_internals(struct pglist_data *pgdat) { int i; @@ -1410,6 +1419,7 @@ static void __meminit pgdat_init_internals(struct pglist_data *pgdat) pgdat_init_split_queue(pgdat); pgdat_init_kcompactd(pgdat); + pgdat_init_kmigrated(pgdat); init_waitqueue_head(&pgdat->kswapd_wait); init_waitqueue_head(&pgdat->pfmemalloc_wait); diff --git a/mm/pghot-default.c b/mm/pghot-default.c new file mode 100644 index 000000000000..e0a3b2ed2592 --- /dev/null +++ b/mm/pghot-default.c @@ -0,0 +1,73 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * pghot: Default mode + * + * 1 
byte hotness record per PFN. + * Bucketed time and frequency tracked as part of the record. + * Promotion to @pghot_target_nid by default. + */ + +#include <linux/pghot.h> +#include <linux/jiffies.h> + +/* + * @time is regular time, @old_time is bucketed time. + */ +unsigned long pghot_access_latency(unsigned long old_time, unsigned long time) +{ + time &= PGHOT_TIME_BUCKETS_MASK; + old_time <<= PGHOT_TIME_BUCKETS_WIDTH; + + return jiffies_to_msecs((time - old_time) & PGHOT_TIME_BUCKETS_MASK); +} + +bool pghot_update_record(phi_t *phi, int nid, unsigned long now) +{ + phi_t freq, old_freq, hotness, old_hotness, old_time; + phi_t time = now >> PGHOT_TIME_BUCKETS_WIDTH; + + old_hotness = READ_ONCE(*phi); + do { + bool new_window = false; + + hotness = old_hotness; + old_freq = (hotness >> PGHOT_FREQ_SHIFT) & PGHOT_FREQ_MASK; + old_time = (hotness >> PGHOT_TIME_SHIFT) & PGHOT_TIME_MASK; + + if (pghot_access_latency(old_time, now) > sysctl_pghot_freq_window) + new_window = true; + + if (new_window) + freq = 1; + else if (old_freq < PGHOT_FREQ_MAX) + freq = old_freq + 1; + else + freq = old_freq; + + hotness &= ~(PGHOT_FREQ_MASK << PGHOT_FREQ_SHIFT); + hotness &= ~(PGHOT_TIME_MASK << PGHOT_TIME_SHIFT); + + hotness |= (freq & PGHOT_FREQ_MASK) << PGHOT_FREQ_SHIFT; + hotness |= (time & PGHOT_TIME_MASK) << PGHOT_TIME_SHIFT; + + if (freq >= pghot_freq_threshold) + hotness |= BIT(PGHOT_MIGRATE_READY); + } while (unlikely(!try_cmpxchg(phi, &old_hotness, hotness))); + return !!(hotness & BIT(PGHOT_MIGRATE_READY)); +} + +int pghot_get_record(phi_t *phi, int *nid, int *freq, unsigned long *time) +{ + phi_t old_hotness, hotness = 0; + + old_hotness = READ_ONCE(*phi); + do { + if (!(old_hotness & BIT(PGHOT_MIGRATE_READY))) + return -EINVAL; + } while (unlikely(!try_cmpxchg(phi, &old_hotness, hotness))); + + *nid = pghot_target_nid; + *freq = (old_hotness >> PGHOT_FREQ_SHIFT) & PGHOT_FREQ_MASK; + *time = (old_hotness >> PGHOT_TIME_SHIFT) & PGHOT_TIME_MASK; + return 0; +} diff --git 
a/mm/pghot-tunables.c b/mm/pghot-tunables.c new file mode 100644 index 000000000000..79afbcb1e4f0 --- /dev/null +++ b/mm/pghot-tunables.c @@ -0,0 +1,189 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * pghot tunables in debugfs + */ +#include <linux/pghot.h> +#include <linux/memory-tiers.h> +#include <linux/debugfs.h> + +static struct dentry *debugfs_pghot; +static DEFINE_MUTEX(pghot_tunables_lock); + +static ssize_t pghot_freq_th_write(struct file *filp, const char __user *ubuf, + size_t cnt, loff_t *ppos) +{ + char buf[16]; + unsigned int freq; + + if (cnt > 15) + cnt = 15; + + if (copy_from_user(&buf, ubuf, cnt)) + return -EFAULT; + buf[cnt] = '\0'; + + if (kstrtouint(buf, 10, &freq)) + return -EINVAL; + + if (!freq || freq > PGHOT_FREQ_MAX) + return -EINVAL; + + mutex_lock(&pghot_tunables_lock); + pghot_freq_threshold = freq; + mutex_unlock(&pghot_tunables_lock); + + *ppos += cnt; + return cnt; +} + +static int pghot_freq_th_show(struct seq_file *m, void *v) +{ + seq_printf(m, "%d\n", pghot_freq_threshold); + return 0; +} + +static int pghot_freq_th_open(struct inode *inode, struct file *filp) +{ + return single_open(filp, pghot_freq_th_show, NULL); +} + +static const struct file_operations pghot_freq_th_fops = { + .open = pghot_freq_th_open, + .write = pghot_freq_th_write, + .read = seq_read, + .llseek = seq_lseek, + .release = seq_release, +}; + +static ssize_t pghot_target_nid_write(struct file *filp, const char __user *ubuf, + size_t cnt, loff_t *ppos) +{ + char buf[16]; + unsigned int nid; + + if (cnt > 15) + cnt = 15; + + if (copy_from_user(&buf, ubuf, cnt)) + return -EFAULT; + buf[cnt] = '\0'; + + if (kstrtouint(buf, 10, &nid)) + return -EINVAL; + + if (nid > PGHOT_NID_MAX || !node_online(nid) || !node_is_toptier(nid)) + return -EINVAL; + mutex_lock(&pghot_tunables_lock); + pghot_target_nid = nid; + mutex_unlock(&pghot_tunables_lock); + + *ppos += cnt; + return cnt; +} + +static int pghot_target_nid_show(struct seq_file *m, void *v) +{ + seq_printf(m, 
"%d\n", pghot_target_nid); + return 0; +} + +static int pghot_target_nid_open(struct inode *inode, struct file *filp) +{ + return single_open(filp, pghot_target_nid_show, NULL); +} + +static const struct file_operations pghot_target_nid_fops = { + .open = pghot_target_nid_open, + .write = pghot_target_nid_write, + .read = seq_read, + .llseek = seq_lseek, + .release = seq_release, +}; + +static void pghot_src_enabled_update(unsigned int enabled) +{ + unsigned int changed = pghot_src_enabled ^ enabled; + + if (changed & PGHOT_HWHINTS_ENABLED) { + if (enabled & PGHOT_HWHINTS_ENABLED) + static_branch_enable(&pghot_src_hwhints); + else + static_branch_disable(&pghot_src_hwhints); + } + + if (changed & PGHOT_PGTSCAN_ENABLED) { + if (enabled & PGHOT_PGTSCAN_ENABLED) + static_branch_enable(&pghot_src_pgtscans); + else + static_branch_disable(&pghot_src_pgtscans); + } + + if (changed & PGHOT_HINTFAULT_ENABLED) { + if (enabled & PGHOT_HINTFAULT_ENABLED) + static_branch_enable(&pghot_src_hintfaults); + else + static_branch_disable(&pghot_src_hintfaults); + } +} + +static ssize_t pghot_src_enabled_write(struct file *filp, const char __user *ubuf, + size_t cnt, loff_t *ppos) +{ + char buf[16]; + unsigned int enabled; + + if (cnt > 15) + cnt = 15; + + if (copy_from_user(&buf, ubuf, cnt)) + return -EFAULT; + buf[cnt] = '\0'; + + if (kstrtouint(buf, 0, &enabled)) + return -EINVAL; + + if (enabled & ~PGHOT_SRC_ENABLED_MASK) + return -EINVAL; + + mutex_lock(&pghot_tunables_lock); + pghot_src_enabled_update(enabled); + pghot_src_enabled = enabled; + mutex_unlock(&pghot_tunables_lock); + + *ppos += cnt; + return cnt; +} + +static int pghot_src_enabled_show(struct seq_file *m, void *v) +{ + seq_printf(m, "%d\n", pghot_src_enabled); + return 0; +} + +static int pghot_src_enabled_open(struct inode *inode, struct file *filp) +{ + return single_open(filp, pghot_src_enabled_show, NULL); +} + +static const struct file_operations pghot_src_enabled_fops = { + .open = pghot_src_enabled_open, + 
.write = pghot_src_enabled_write, + .read = seq_read, + .llseek = seq_lseek, + .release = seq_release, +}; + +void pghot_debug_init(void) +{ + debugfs_pghot = debugfs_create_dir("pghot", NULL); + debugfs_create_file("enabled_sources", 0644, debugfs_pghot, NULL, + &pghot_src_enabled_fops); + debugfs_create_file("target_nid", 0644, debugfs_pghot, NULL, + &pghot_target_nid_fops); + debugfs_create_file("freq_threshold", 0644, debugfs_pghot, NULL, + &pghot_freq_th_fops); + debugfs_create_u32("kmigrated_sleep_ms", 0644, debugfs_pghot, + &kmigrated_sleep_ms); + debugfs_create_u32("kmigrated_batch_nr", 0644, debugfs_pghot, + &kmigrated_batch_nr); +} diff --git a/mm/pghot.c b/mm/pghot.c new file mode 100644 index 000000000000..95b5012d5b99 --- /dev/null +++ b/mm/pghot.c @@ -0,0 +1,370 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Maintains information about hot pages from slower tier nodes and + * promotes them. + * + * Per-PFN hotness information is stored for lower tier nodes in + * mem_section. + * + * In the default mode, a single byte (u8) is used to store + * the frequency of access and last access time. Promotions are done + * to a default toptier NID. + * + * A kernel thread named kmigrated is provided to migrate or promote + * the hot pages. kmigrated runs for each lower tier node. It iterates + * over the node's PFNs and migrates pages marked for migration into + * their targeted nodes. 
+ */ +#include <linux/mm.h> +#include <linux/migrate.h> +#include <linux/memory-tiers.h> +#include <linux/pghot.h> + +unsigned int pghot_target_nid = PGHOT_DEFAULT_NODE; +unsigned int pghot_src_enabled; +unsigned int pghot_freq_threshold = PGHOT_DEFAULT_FREQ_THRESHOLD; +unsigned int kmigrated_sleep_ms = KMIGRATED_DEFAULT_SLEEP_MS; +unsigned int kmigrated_batch_nr = KMIGRATED_DEFAULT_BATCH_NR; + +unsigned int sysctl_pghot_freq_window = PGHOT_DEFAULT_FREQ_WINDOW; + +DEFINE_STATIC_KEY_FALSE(pghot_src_hwhints); +DEFINE_STATIC_KEY_FALSE(pghot_src_pgtscans); +DEFINE_STATIC_KEY_FALSE(pghot_src_hintfaults); + +#ifdef CONFIG_SYSCTL +static const struct ctl_table pghot_sysctls[] = { + { + .procname = "pghot_promote_freq_window_ms", + .data = &sysctl_pghot_freq_window, + .maxlen = sizeof(unsigned int), + .mode = 0644, + .proc_handler = proc_dointvec_minmax, + .extra1 = SYSCTL_ZERO, + }, +}; +#endif + +static bool kmigrated_started __ro_after_init; + +/** + * pghot_record_access() - Record page accesses from lower tier memory + * for the purpose of tracking page hotness and subsequent promotion. + * + * @pfn: PFN of the page + * @nid: Unused + * @src: The identifier of the sub-system that reports the access + * @now: Access time in jiffies + * + * Updates the frequency and time of access and marks the page as + * ready for migration if the frequency crosses a threshold. The pages + * marked for migration are migrated by kmigrated kernel thread. + * + * Return: 0 on success and -EINVAL on failure to record the access. 
+ */ +int pghot_record_access(unsigned long pfn, int nid, int src, unsigned long now) +{ + struct mem_section *ms; + struct folio *folio; + phi_t *phi, *hot_map; + struct page *page; + + if (!kmigrated_started) + return -EINVAL; + + if (nid >= PGHOT_NID_MAX) + return -EINVAL; + + switch (src) { + case PGHOT_HW_HINTS: + if (!static_branch_likely(&pghot_src_hwhints)) + return -EINVAL; + count_vm_event(PGHOT_RECORD_HWHINTS); + break; + case PGHOT_PGTABLE_SCAN: + if (!static_branch_likely(&pghot_src_pgtscans)) + return -EINVAL; + count_vm_event(PGHOT_RECORD_PGTSCANS); + break; + case PGHOT_HINT_FAULT: + if (!static_branch_likely(&pghot_src_hintfaults)) + return -EINVAL; + count_vm_event(PGHOT_RECORD_HINTFAULTS); + break; + default: + return -EINVAL; + } + + /* + * Record only accesses from lower tiers. + */ + if (node_is_toptier(pfn_to_nid(pfn))) + return 0; + + /* + * Reject the non-migratable pages right away. + */ + page = pfn_to_online_page(pfn); + if (!page || is_zone_device_page(page)) + return 0; + + folio = page_folio(page); + if (!folio_test_lru(folio)) + return 0; + + /* Get the hotness slot corresponding to the 1st PFN of the folio */ + pfn = folio_pfn(folio); + ms = __pfn_to_section(pfn); + if (!ms || !ms->hot_map) + return -EINVAL; + + hot_map = (phi_t *)(((unsigned long)(ms->hot_map)) & ~PGHOT_SECTION_HOT_MASK); + phi = &hot_map[pfn % PAGES_PER_SECTION]; + + count_vm_event(PGHOT_RECORDED_ACCESSES); + + /* + * Update the hotness parameters. 
+ */ + if (pghot_update_record(phi, nid, now)) { + set_bit(PGHOT_SECTION_HOT_BIT, (unsigned long *)&ms->hot_map); + set_bit(PGDAT_KMIGRATED_ACTIVATE, &page_pgdat(page)->flags); + } + return 0; +} + +static int pghot_get_hotness(unsigned long pfn, int *nid, int *freq, + unsigned long *time) +{ + phi_t *phi, *hot_map; + struct mem_section *ms; + + ms = __pfn_to_section(pfn); + if (!ms || !ms->hot_map) + return -EINVAL; + + hot_map = (phi_t *)(((unsigned long)(ms->hot_map)) & ~PGHOT_SECTION_HOT_MASK); + phi = &hot_map[pfn % PAGES_PER_SECTION]; + + return pghot_get_record(phi, nid, freq, time); +} + +/* + * Walks the PFNs of the zone, isolates and migrates them in batches. + */ +static void kmigrated_walk_zone(unsigned long start_pfn, unsigned long end_pfn, + int src_nid) +{ + int cur_nid = NUMA_NO_NODE; + LIST_HEAD(migrate_list); + int batch_count = 0; + struct folio *folio; + struct page *page; + unsigned long pfn; + + pfn = start_pfn; + do { + int nid = NUMA_NO_NODE, nr = 1; + int freq = 0; + unsigned long time = 0; + + if (!pfn_valid(pfn)) + goto out_next; + + page = pfn_to_online_page(pfn); + if (!page) + goto out_next; + + folio = page_folio(page); + nr = folio_nr_pages(folio); + if (folio_nid(folio) != src_nid) + goto out_next; + + if (!folio_test_lru(folio)) + goto out_next; + + if (pghot_get_hotness(pfn, &nid, &freq, &time)) + goto out_next; + + if (nid == NUMA_NO_NODE) + nid = pghot_target_nid; + + if (folio_nid(folio) == nid) + goto out_next; + + if (migrate_misplaced_folio_prepare(folio, NULL, nid)) + goto out_next; + + if (cur_nid == NUMA_NO_NODE) + cur_nid = nid; + + /* If NID changed, flush the previous batch first */ + if (cur_nid != nid) { + if (!list_empty(&migrate_list)) + migrate_misplaced_folios_batch(&migrate_list, cur_nid); + cur_nid = nid; + batch_count = 0; + cond_resched(); + } + + list_add(&folio->lru, &migrate_list); + + if (++batch_count > kmigrated_batch_nr) { + migrate_misplaced_folios_batch(&migrate_list, cur_nid); + batch_count = 0; + 
cond_resched(); + } +out_next: + pfn += nr; + } while (pfn < end_pfn); + if (!list_empty(&migrate_list)) + migrate_misplaced_folios_batch(&migrate_list, cur_nid); +} + +static void kmigrated_do_work(pg_data_t *pgdat) +{ + unsigned long section_nr, s_begin, start_pfn; + struct mem_section *ms; + int nid; + + clear_bit(PGDAT_KMIGRATED_ACTIVATE, &pgdat->flags); + /* s_begin = first_present_section_nr(); */ + s_begin = next_present_section_nr(-1); + for_each_present_section_nr(s_begin, section_nr) { + start_pfn = section_nr_to_pfn(section_nr); + ms = __nr_to_section(section_nr); + + if (!pfn_valid(start_pfn)) + continue; + + nid = pfn_to_nid(start_pfn); + if (node_is_toptier(nid) || nid != pgdat->node_id) + continue; + + if (!test_and_clear_bit(PGHOT_SECTION_HOT_BIT, (unsigned long *)&ms->hot_map)) + continue; + + kmigrated_walk_zone(start_pfn, start_pfn + PAGES_PER_SECTION, + pgdat->node_id); + } +} + +static inline bool kmigrated_work_requested(pg_data_t *pgdat) +{ + return test_bit(PGDAT_KMIGRATED_ACTIVATE, &pgdat->flags); +} + +/* + * Per-node kthread that iterates over its PFNs and migrates the + * pages that have been marked for migration. 
+ */ +static int kmigrated(void *p) +{ + long timeout = msecs_to_jiffies(kmigrated_sleep_ms); + pg_data_t *pgdat = p; + + while (!kthread_should_stop()) { + if (wait_event_timeout(pgdat->kmigrated_wait, kmigrated_work_requested(pgdat), + timeout)) + kmigrated_do_work(pgdat); + } + return 0; +} + +static int kmigrated_run(int nid) +{ + pg_data_t *pgdat = NODE_DATA(nid); + int ret; + + if (node_is_toptier(nid)) + return 0; + + if (!pgdat->kmigrated) { + pgdat->kmigrated = kthread_create_on_node(kmigrated, pgdat, nid, + "kmigrated%d", nid); + if (IS_ERR(pgdat->kmigrated)) { + ret = PTR_ERR(pgdat->kmigrated); + pgdat->kmigrated = NULL; + pr_err("Failed to start kmigrated%d, ret %d\n", nid, ret); + return ret; + } + pr_info("pghot: Started kmigrated thread for node %d\n", nid); + } + wake_up_process(pgdat->kmigrated); + return 0; +} + +static void pghot_free_hot_map(void) +{ + unsigned long section_nr, s_begin; + struct mem_section *ms; + + /* s_begin = first_present_section_nr(); */ + s_begin = next_present_section_nr(-1); + for_each_present_section_nr(s_begin, section_nr) { + ms = __nr_to_section(section_nr); + kfree(ms->hot_map); + } +} + +static int pghot_alloc_hot_map(void) +{ + unsigned long section_nr, s_begin, start_pfn; + struct mem_section *ms; + int nid; + + /* s_begin = first_present_section_nr(); */ + s_begin = next_present_section_nr(-1); + for_each_present_section_nr(s_begin, section_nr) { + ms = __nr_to_section(section_nr); + start_pfn = section_nr_to_pfn(section_nr); + nid = pfn_to_nid(start_pfn); + + if (node_is_toptier(nid) || !pfn_valid(start_pfn)) + continue; + + ms->hot_map = kcalloc_node(PAGES_PER_SECTION, PGHOT_RECORD_SIZE, GFP_KERNEL, + nid); + if (!ms->hot_map) + goto out_free_hot_map; + } + return 0; + +out_free_hot_map: + pghot_free_hot_map(); + return -ENOMEM; +} + +static int __init pghot_init(void) +{ + pg_data_t *pgdat; + int nid, ret; + + ret = pghot_alloc_hot_map(); + if (ret) + return ret; + + for_each_node_state(nid, N_MEMORY) { + ret 
= kmigrated_run(nid); + if (ret) + goto out_stop_kthread; + } + register_sysctl_init("vm", pghot_sysctls); + pghot_debug_init(); + + kmigrated_started = true; + return 0; + +out_stop_kthread: + for_each_node_state(nid, N_MEMORY) { + pgdat = NODE_DATA(nid); + if (pgdat->kmigrated) { + kthread_stop(pgdat->kmigrated); + pgdat->kmigrated = NULL; + } + } + pghot_free_hot_map(); + return ret; +} + +late_initcall_sync(pghot_init) diff --git a/mm/vmstat.c b/mm/vmstat.c index 65de88cdf40e..f6f91b9dd887 100644 --- a/mm/vmstat.c +++ b/mm/vmstat.c @@ -1501,6 +1501,12 @@ const char * const vmstat_text[] = { [I(KSTACK_REST)] = "kstack_rest", #endif #endif +#ifdef CONFIG_PGHOT + [I(PGHOT_RECORDED_ACCESSES)] = "pghot_recorded_accesses", + [I(PGHOT_RECORD_HWHINTS)] = "pghot_recorded_hwhints", + [I(PGHOT_RECORD_PGTSCANS)] = "pghot_recorded_pgtscans", + [I(PGHOT_RECORD_HINTFAULTS)] = "pghot_recorded_hintfaults", +#endif /* CONFIG_PGHOT */ #undef I #endif /* CONFIG_VM_EVENT_COUNTERS */ }; -- 2.34.1
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Thu, 29 Jan 2026 20:10:36 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotion (pghot) with the following goals: - Unify hot page detection from multiple sources like hint faults, page table scans and hardware hints (AMD IBS). - Decouple detection from migration. - Centralize promotion logic via a per-lower-tier-node kmigrated kernel thread. - Move promotion rate-limiting and related logic used by numa_balancing=2 (the current NUMA balancing based promotion) from the scheduler to pghot for broader reuse. Currently, multiple kernel subsystems detect page accesses independently. This patchset consolidates accesses from these mechanisms by providing: - A common API for reporting page accesses. - Shared infrastructure for tracking hotness at PFN granularity. - Per-lower-tier-node kernel threads for promoting pages. Here is a brief summary of how this subsystem works: - Tracks frequency and last access time. - Additionally, the accessing NUMA node ID (NID) for each recorded access is tracked in precision mode. - These hotness parameters are maintained in a per-PFN hotness record within the existing mem_section data structure. - In default mode, one byte (u8) is used for the hotness record. 5 bits store the time, and a bucketing scheme represents a total access time of up to 4s with HZ=1000. The default toptier NID (0) is used as the target for promotion, which can be changed via a debugfs tunable. - In precision mode, 4 bytes (u32) are used for each hotness record. 14 bits store the time, which can represent around 16s with HZ=1000. - Classifies pages as hot based on configurable thresholds. - Pages classified as hot are marked as ready for migration using the ready bit. Both modes use the MSB of the hotness record as the ready bit. 
- Per-lower-tier-node kmigrated threads periodically scan the PFNs of lower-tier nodes, checking for the migration-ready bit to perform batched migrations. The interval between successive scans and the batch size are configurable via debugfs tunables. Memory overhead --------------- Default mode: 1 byte per lower-tier PFN. For 1TB of lower-tier memory this amounts to 256MB overhead (assuming 4K pages). Precision mode: 4 bytes per lower-tier PFN. For 1TB of lower-tier memory this amounts to 1GB overhead. Bit layout of hotness record ---------------------------- Default mode - Bits 0-1: Frequency (2 bits, 4 access samples) - Bits 2-6: Bucketed time (5 bits, up to 4s with HZ=1000) - Bit 7: Migration ready bit Precision mode - Bits 0-9: Target NID (10 bits) - Bits 10-12: Frequency (3 bits, 8 access samples) - Bits 13-26: Time (14 bits, up to 16s with HZ=1000) - Bits 27-30: Reserved - Bit 31: Migration ready bit Integrated sources ------------------ 1. IBS - Instruction Based Sampling, a hardware-based sampling mechanism present on AMD CPUs. 2. klruscand - PTE-A bit scanning built on MGLRU's walk helpers. 3. NUMA Balancing (Tiering mode) 4. folio_mark_accessed() - Page cache access tracking (unmapped page cache pages) Changes in v5 ============= - Significant reduction in memory overhead for storing per-PFN hotness data. - Two modes of operation (default and precision). Code specific to each mode is moved to its own file. - Many bug fixes, code cleanups and code reorganization. Results ======= TODO: Will post benchmark numbers as a reply to this patchset soon. 
This v5 patchset applies on top of upstream commit 4941a17751c9 and can be fetched from: https://github.com/AMDESE/linux-mm/tree/bharata/pghot-rfcv5 v4: https://lore.kernel.org/linux-mm/20251206101423.5004-1-bharata@amd.com/ v3: https://lore.kernel.org/linux-mm/20251110052343.208768-1-bharata@amd.com/ v2: https://lore.kernel.org/linux-mm/20250910144653.212066-1-bharata@amd.com/ v1: https://lore.kernel.org/linux-mm/20250814134826.154003-1-bharata@amd.com/ v0: https://lore.kernel.org/linux-mm/20250306054532.221138-1-bharata@amd.com/ Bharata B Rao (7): mm: migrate: Allow misplaced migration without VMA mm: Hot page tracking and promotion mm: pghot: Precision mode for pghot mm: sched: move NUMA balancing tiering promotion to pghot x86: ibs: In-kernel IBS driver for memory access profiling x86: ibs: Enable IBS profiling for memory accesses mm: pghot: Add folio_mark_accessed() as hotness source Gregory Price (1): migrate: Add migrate_misplaced_folios_batch() Kinsey Ho (2): mm: mglru: generalize page table walk mm: klruscand: use mglru scanning for page promotion Documentation/admin-guide/mm/pghot.txt | 89 +++++ arch/x86/events/amd/ibs.c | 10 + arch/x86/include/asm/entry-common.h | 3 + arch/x86/include/asm/hardirq.h | 2 + arch/x86/include/asm/msr-index.h | 16 + arch/x86/mm/Makefile | 1 + arch/x86/mm/ibs.c | 349 +++++++++++++++++ include/linux/migrate.h | 6 + include/linux/mmzone.h | 26 ++ include/linux/pghot.h | 142 +++++++ include/linux/vm_event_item.h | 26 ++ kernel/sched/debug.c | 1 - kernel/sched/fair.c | 152 +------- mm/Kconfig | 46 +++ mm/Makefile | 7 + mm/huge_memory.c | 26 +- mm/internal.h | 4 + mm/klruscand.c | 110 ++++++ mm/memory.c | 31 +- mm/migrate.c | 41 +- mm/mm_init.c | 10 + mm/pghot-default.c | 73 ++++ mm/pghot-precise.c | 70 ++++ mm/pghot-tunables.c | 196 ++++++++++ mm/pghot.c | 505 +++++++++++++++++++++++++ mm/swap.c | 8 + mm/vmscan.c | 181 ++++++--- mm/vmstat.c | 26 ++ 28 files changed, 1917 insertions(+), 240 deletions(-) create mode 100644 
Documentation/admin-guide/mm/pghot.txt create mode 100644 arch/x86/mm/ibs.c create mode 100644 include/linux/pghot.h create mode 100644 mm/klruscand.c create mode 100644 mm/pghot-default.c create mode 100644 mm/pghot-precise.c create mode 100644 mm/pghot-tunables.c create mode 100644 mm/pghot.c -- 2.34.1
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
By default, one byte per PFN stores the hotness information. Only a few bits are available for the access time, leading to coarse-grained time tracking, and there aren't enough bits to track the toptier NID explicitly, so the default target_nid is used for promotion. Precision mode relaxes these limits by storing the hotness information in 4 bytes per PFN, enabling finer-grained access time tracking and explicit toptier NID tracking. This is typically useful when the toptier consists of more than one node. Signed-off-by: Bharata B Rao <bharata@amd.com> --- Documentation/admin-guide/mm/pghot.txt | 4 +- include/linux/mmzone.h | 2 +- include/linux/pghot.h | 31 ++++++++++++ mm/Kconfig | 11 ++++ mm/Makefile | 7 ++- mm/pghot-precise.c | 70 ++++++++++++++++++++++++++ mm/pghot.c | 13 +++-- 7 files changed, 130 insertions(+), 8 deletions(-) create mode 100644 mm/pghot-precise.c diff --git a/Documentation/admin-guide/mm/pghot.txt b/Documentation/admin-guide/mm/pghot.txt index 01291b72e7ab..b329e692ef89 100644 --- a/Documentation/admin-guide/mm/pghot.txt +++ b/Documentation/admin-guide/mm/pghot.txt @@ -38,7 +38,7 @@ Path: /sys/kernel/debug/pghot/ 3. **freq_threshold** - Minimum access frequency before a page is marked ready for promotion. - - Range: 1 to 3 + - Range: 1 to 3 in default mode, 1 to 7 in precision mode. - Default: 2 - Example: # echo 3 > /sys/kernel/debug/pghot/freq_threshold @@ -60,7 +60,7 @@ Path: /proc/sys/vm/pghot_promote_freq_window_ms - Controls the time window (in ms) for counting access frequency. A page is considered hot only when **freq_threshold** number of accesses occur with this time period. -- Default: 4000 (4 seconds) +- Default: 4000 (4 seconds) in default mode and 5000 (5s) in precision mode. 
- Example: # sysctl vm.pghot_promote_freq_window_ms=3000 diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index 22e08befb096..49c374064fc2 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -1924,7 +1924,7 @@ struct mem_section { #ifdef CONFIG_PGHOT /* * Per-PFN hotness data for this section. - * Array of phi_t (u8 in default mode). + * Array of phi_t (u8 in default mode, u32 in precision mode). * LSB is used as PGHOT_SECTION_HOT_BIT flag. */ void *hot_map; diff --git a/include/linux/pghot.h b/include/linux/pghot.h index 88e57aab697b..d3d59b0c0cf6 100644 --- a/include/linux/pghot.h +++ b/include/linux/pghot.h @@ -48,6 +48,36 @@ enum pghot_src_enabled { #define PGHOT_DEFAULT_NODE 0 +#if defined(CONFIG_PGHOT_PRECISE) +#define PGHOT_DEFAULT_FREQ_WINDOW (5 * MSEC_PER_SEC) + +/* + * Bits 0-26 are used to store nid, frequency and time. + * Bits 27-30 are unused now. + * Bit 31 is used to indicate the page is ready for migration. + */ +#define PGHOT_MIGRATE_READY 31 + +#define PGHOT_NID_WIDTH 10 +#define PGHOT_FREQ_WIDTH 3 +/* time is stored in 14 bits which can represent up to 16s with HZ=1000 */ +#define PGHOT_TIME_WIDTH 14 + +#define PGHOT_NID_SHIFT 0 +#define PGHOT_FREQ_SHIFT (PGHOT_NID_SHIFT + PGHOT_NID_WIDTH) +#define PGHOT_TIME_SHIFT (PGHOT_FREQ_SHIFT + PGHOT_FREQ_WIDTH) + +#define PGHOT_NID_MASK GENMASK(PGHOT_NID_WIDTH - 1, 0) +#define PGHOT_FREQ_MASK GENMASK(PGHOT_FREQ_WIDTH - 1, 0) +#define PGHOT_TIME_MASK GENMASK(PGHOT_TIME_WIDTH - 1, 0) + +#define PGHOT_NID_MAX ((1 << PGHOT_NID_WIDTH) - 1) +#define PGHOT_FREQ_MAX ((1 << PGHOT_FREQ_WIDTH) - 1) +#define PGHOT_TIME_MAX ((1 << PGHOT_TIME_WIDTH) - 1) + +typedef u32 phi_t; + +#else /* !CONFIG_PGHOT_PRECISE */ #define PGHOT_DEFAULT_FREQ_WINDOW (4 * MSEC_PER_SEC) /* @@ -74,6 +104,7 @@ enum pghot_src_enabled { #define PGHOT_TIME_MAX ((1 << PGHOT_TIME_WIDTH) - 1) typedef u8 phi_t; +#endif /* CONFIG_PGHOT_PRECISE */ #define PGHOT_RECORD_SIZE sizeof(phi_t) diff --git a/mm/Kconfig b/mm/Kconfig 
index f4f0147faac5..fde5aee3e16f 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -1478,6 +1478,17 @@ config PGHOT This adds 1 byte of metadata overhead per page in lower-tier memory nodes. +config PGHOT_PRECISE + bool "Hot page tracking precision mode" + def_bool n + depends on PGHOT + help + Enables precision mode for tracking hot pages with pghot sub-system. + Adds fine-grained access time tracking and explicit toptier target + NID tracking. Precise hot page tracking comes at the cost of using + 4 bytes per page against the default one byte per page. Preferable + to enable this on systems with multiple nodes in toptier. + source "mm/damon/Kconfig" endmenu diff --git a/mm/Makefile b/mm/Makefile index 655a27f3a215..89f999647752 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -147,4 +147,9 @@ obj-$(CONFIG_SHRINKER_DEBUG) += shrinker_debug.o obj-$(CONFIG_EXECMEM) += execmem.o obj-$(CONFIG_TMPFS_QUOTA) += shmem_quota.o obj-$(CONFIG_PT_RECLAIM) += pt_reclaim.o -obj-$(CONFIG_PGHOT) += pghot.o pghot-tunables.o pghot-default.o +obj-$(CONFIG_PGHOT) += pghot.o pghot-tunables.o +ifdef CONFIG_PGHOT_PRECISE +obj-$(CONFIG_PGHOT) += pghot-precise.o +else +obj-$(CONFIG_PGHOT) += pghot-default.o +endif diff --git a/mm/pghot-precise.c b/mm/pghot-precise.c new file mode 100644 index 000000000000..d8d4f15b3f9f --- /dev/null +++ b/mm/pghot-precise.c @@ -0,0 +1,70 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * pghot: Precision mode + * + * 4 byte hotness record per PFN (u32) + * NID, time and frequency tracked as part of the record. 
+ */ + +#include <linux/pghot.h> +#include <linux/jiffies.h> + +unsigned long pghot_access_latency(unsigned long old_time, unsigned long time) +{ + return jiffies_to_msecs((time - old_time) & PGHOT_TIME_MASK); +} + +bool pghot_update_record(phi_t *phi, int nid, unsigned long now) +{ + phi_t freq, old_freq, hotness, old_hotness, old_time, old_nid; + phi_t time = now & PGHOT_TIME_MASK; + + old_hotness = READ_ONCE(*phi); + do { + bool new_window = false; + + hotness = old_hotness; + old_nid = (hotness >> PGHOT_NID_SHIFT) & PGHOT_NID_MASK; + old_freq = (hotness >> PGHOT_FREQ_SHIFT) & PGHOT_FREQ_MASK; + old_time = (hotness >> PGHOT_TIME_SHIFT) & PGHOT_TIME_MASK; + + if (pghot_access_latency(old_time, time) > sysctl_pghot_freq_window) + new_window = true; + + if (new_window) + freq = 1; + else if (old_freq < PGHOT_FREQ_MAX) + freq = old_freq + 1; + else + freq = old_freq; + nid = (nid == NUMA_NO_NODE) ? pghot_target_nid : nid; + + hotness &= ~(PGHOT_NID_MASK << PGHOT_NID_SHIFT); + hotness &= ~(PGHOT_FREQ_MASK << PGHOT_FREQ_SHIFT); + hotness &= ~(PGHOT_TIME_MASK << PGHOT_TIME_SHIFT); + + hotness |= (nid & PGHOT_NID_MASK) << PGHOT_NID_SHIFT; + hotness |= (freq & PGHOT_FREQ_MASK) << PGHOT_FREQ_SHIFT; + hotness |= (time & PGHOT_TIME_MASK) << PGHOT_TIME_SHIFT; + + if (freq >= pghot_freq_threshold) + hotness |= BIT(PGHOT_MIGRATE_READY); + } while (unlikely(!try_cmpxchg(phi, &old_hotness, hotness))); + return !!(hotness & BIT(PGHOT_MIGRATE_READY)); +} + +int pghot_get_record(phi_t *phi, int *nid, int *freq, unsigned long *time) +{ + phi_t old_hotness, hotness = 0; + + old_hotness = READ_ONCE(*phi); + do { + if (!(old_hotness & BIT(PGHOT_MIGRATE_READY))) + return -EINVAL; + } while (unlikely(!try_cmpxchg(phi, &old_hotness, hotness))); + + *nid = (old_hotness >> PGHOT_NID_SHIFT) & PGHOT_NID_MASK; + *freq = (old_hotness >> PGHOT_FREQ_SHIFT) & PGHOT_FREQ_MASK; + *time = (old_hotness >> PGHOT_TIME_SHIFT) & PGHOT_TIME_MASK; + return 0; +} diff --git a/mm/pghot.c b/mm/pghot.c index 
95b5012d5b99..bf1d9029cbaa 100644 --- a/mm/pghot.c +++ b/mm/pghot.c @@ -10,6 +10,9 @@ * the frequency of access and last access time. Promotions are done * to a default toptier NID. * + * In the precision mode, 4 bytes are used to store the frequency + * of access, last access time and the accessing NID. + * * A kernel thread named kmigrated is provided to migrate or promote * the hot pages. kmigrated runs for each lower tier node. It iterates * over the node's PFNs and migrates pages marked for migration into @@ -52,13 +55,15 @@ static bool kmigrated_started __ro_after_init; * for the purpose of tracking page hotness and subsequent promotion. * * @pfn: PFN of the page - * @nid: Unused + * @nid: Target NID to where the page needs to be migrated in precision + * mode but unused in default mode * @src: The identifier of the sub-system that reports the access * @now: Access time in jiffies * - * Updates the frequency and time of access and marks the page as - * ready for migration if the frequency crosses a threshold. The pages - * marked for migration are migrated by kmigrated kernel thread. + * Updates the NID (in precision mode only), frequency and time of access + * and marks the page as ready for migration if the frequency crosses a + * threshold. The pages marked for migration are migrated by kmigrated + * kernel thread. * * Return: 0 on success and -EINVAL on failure to record the access. */ -- 2.34.1
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Thu, 29 Jan 2026 20:10:37 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
Currently, hot page promotion (the NUMA_BALANCING_MEMORY_TIERING mode of NUMA Balancing) does hot page detection (via hint faults), hot page classification and eventual promotion all by itself, and sits within the scheduler. Now that pghot, the new hot page tracking and promotion mechanism, is available, NUMA Balancing can limit itself to detecting hot pages (via hint faults) and off-load the rest of the functionality to the common hot page tracking system. The pghot_record_access(PGHOT_HINT_FAULT) API is used to feed the hot page info to pghot. In addition, the migration rate limiting and dynamic threshold logic are moved to kmigrated so that they can be used for hot pages reported by other sources too. Signed-off-by: Bharata B Rao <bharata@amd.com> --- kernel/sched/debug.c | 1 - kernel/sched/fair.c | 152 ++----------------------------------------- mm/huge_memory.c | 26 ++------ mm/memory.c | 31 ++------- mm/pghot.c | 124 +++++++++++++++++++++++++++++++++++ 5 files changed, 141 insertions(+), 193 deletions(-) diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c index 41caa22e0680..02931902a9c6 100644 --- a/kernel/sched/debug.c +++ b/kernel/sched/debug.c @@ -520,7 +520,6 @@ static __init int sched_init_debug(void) debugfs_create_u32("scan_period_min_ms", 0644, numa, &sysctl_numa_balancing_scan_period_min); debugfs_create_u32("scan_period_max_ms", 0644, numa, &sysctl_numa_balancing_scan_period_max); debugfs_create_u32("scan_size_mb", 0644, numa, &sysctl_numa_balancing_scan_size); - debugfs_create_u32("hot_threshold_ms", 0644, numa, &sysctl_numa_balancing_hot_threshold); #endif /* CONFIG_NUMA_BALANCING */ debugfs_create_file("debug", 0444, debugfs_sched, NULL, &sched_debug_fops); diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index da46c3164537..4e70f58fbbfa 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -125,11 +125,6 @@ int __weak arch_asym_cpu_priority(int cpu) static unsigned int sysctl_sched_cfs_bandwidth_slice = 5000UL; #endif -#ifdef 
CONFIG_NUMA_BALANCING -/* Restrict the NUMA promotion throughput (MB/s) for each target node. */ -static unsigned int sysctl_numa_balancing_promote_rate_limit = 65536; -#endif - #ifdef CONFIG_SYSCTL static const struct ctl_table sched_fair_sysctls[] = { #ifdef CONFIG_CFS_BANDWIDTH @@ -142,16 +137,6 @@ static const struct ctl_table sched_fair_sysctls[] = { .extra1 = SYSCTL_ONE, }, #endif -#ifdef CONFIG_NUMA_BALANCING - { - .procname = "numa_balancing_promote_rate_limit_MBps", - .data = &sysctl_numa_balancing_promote_rate_limit, - .maxlen = sizeof(unsigned int), - .mode = 0644, - .proc_handler = proc_dointvec_minmax, - .extra1 = SYSCTL_ZERO, - }, -#endif /* CONFIG_NUMA_BALANCING */ }; static int __init sched_fair_sysctl_init(void) @@ -1427,9 +1412,6 @@ unsigned int sysctl_numa_balancing_scan_size = 256; /* Scan @scan_size MB every @scan_period after an initial @scan_delay in ms */ unsigned int sysctl_numa_balancing_scan_delay = 1000; -/* The page with hint page fault latency < threshold in ms is considered hot */ -unsigned int sysctl_numa_balancing_hot_threshold = MSEC_PER_SEC; - struct numa_group { refcount_t refcount; @@ -1784,108 +1766,6 @@ static inline bool cpupid_valid(int cpupid) return cpupid_to_cpu(cpupid) < nr_cpu_ids; } -/* - * For memory tiering mode, if there are enough free pages (more than - * enough watermark defined here) in fast memory node, to take full - * advantage of fast memory capacity, all recently accessed slow - * memory pages will be migrated to fast memory node without - * considering hot threshold. 
- */ -static bool pgdat_free_space_enough(struct pglist_data *pgdat) -{ - int z; - unsigned long enough_wmark; - - enough_wmark = max(1UL * 1024 * 1024 * 1024 >> PAGE_SHIFT, - pgdat->node_present_pages >> 4); - for (z = pgdat->nr_zones - 1; z >= 0; z--) { - struct zone *zone = pgdat->node_zones + z; - - if (!populated_zone(zone)) - continue; - - if (zone_watermark_ok(zone, 0, - promo_wmark_pages(zone) + enough_wmark, - ZONE_MOVABLE, 0)) - return true; - } - return false; -} - -/* - * For memory tiering mode, when page tables are scanned, the scan - * time will be recorded in struct page in addition to make page - * PROT_NONE for slow memory page. So when the page is accessed, in - * hint page fault handler, the hint page fault latency is calculated - * via, - * - * hint page fault latency = hint page fault time - scan time - * - * The smaller the hint page fault latency, the higher the possibility - * for the page to be hot. - */ -static int numa_hint_fault_latency(struct folio *folio) -{ - int last_time, time; - - time = jiffies_to_msecs(jiffies); - last_time = folio_xchg_access_time(folio, time); - - return (time - last_time) & PAGE_ACCESS_TIME_MASK; -} - -/* - * For memory tiering mode, too high promotion/demotion throughput may - * hurt application latency. So we provide a mechanism to rate limit - * the number of pages that are tried to be promoted. 
- */ -static bool numa_promotion_rate_limit(struct pglist_data *pgdat, - unsigned long rate_limit, int nr) -{ - unsigned long nr_cand; - unsigned int now, start; - - now = jiffies_to_msecs(jiffies); - mod_node_page_state(pgdat, PGPROMOTE_CANDIDATE, nr); - nr_cand = node_page_state(pgdat, PGPROMOTE_CANDIDATE); - start = pgdat->nbp_rl_start; - if (now - start > MSEC_PER_SEC && - cmpxchg(&pgdat->nbp_rl_start, start, now) == start) - pgdat->nbp_rl_nr_cand = nr_cand; - if (nr_cand - pgdat->nbp_rl_nr_cand >= rate_limit) - return true; - return false; -} - -#define NUMA_MIGRATION_ADJUST_STEPS 16 - -static void numa_promotion_adjust_threshold(struct pglist_data *pgdat, - unsigned long rate_limit, - unsigned int ref_th) -{ - unsigned int now, start, th_period, unit_th, th; - unsigned long nr_cand, ref_cand, diff_cand; - - now = jiffies_to_msecs(jiffies); - th_period = sysctl_numa_balancing_scan_period_max; - start = pgdat->nbp_th_start; - if (now - start > th_period && - cmpxchg(&pgdat->nbp_th_start, start, now) == start) { - ref_cand = rate_limit * - sysctl_numa_balancing_scan_period_max / MSEC_PER_SEC; - nr_cand = node_page_state(pgdat, PGPROMOTE_CANDIDATE); - diff_cand = nr_cand - pgdat->nbp_th_nr_cand; - unit_th = ref_th * 2 / NUMA_MIGRATION_ADJUST_STEPS; - th = pgdat->nbp_threshold ? : ref_th; - if (diff_cand > ref_cand * 11 / 10) - th = max(th - unit_th, unit_th); - else if (diff_cand < ref_cand * 9 / 10) - th = min(th + unit_th, ref_th * 2); - pgdat->nbp_th_nr_cand = nr_cand; - pgdat->nbp_threshold = th; - } -} - bool should_numa_migrate_memory(struct task_struct *p, struct folio *folio, int src_nid, int dst_cpu) { @@ -1901,33 +1781,11 @@ bool should_numa_migrate_memory(struct task_struct *p, struct folio *folio, /* * The pages in slow memory node should be migrated according - * to hot/cold instead of private/shared. 
- */ - if (folio_use_access_time(folio)) { - struct pglist_data *pgdat; - unsigned long rate_limit; - unsigned int latency, th, def_th; - long nr = folio_nr_pages(folio); - - pgdat = NODE_DATA(dst_nid); - if (pgdat_free_space_enough(pgdat)) { - /* workload changed, reset hot threshold */ - pgdat->nbp_threshold = 0; - mod_node_page_state(pgdat, PGPROMOTE_CANDIDATE_NRL, nr); - return true; - } - - def_th = sysctl_numa_balancing_hot_threshold; - rate_limit = MB_TO_PAGES(sysctl_numa_balancing_promote_rate_limit); - numa_promotion_adjust_threshold(pgdat, rate_limit, def_th); - - th = pgdat->nbp_threshold ? : def_th; - latency = numa_hint_fault_latency(folio); - if (latency >= th) - return false; - - return !numa_promotion_rate_limit(pgdat, rate_limit, nr); - } + * to hot/cold instead of private/shared. Also the migration + * of such pages are handled by kmigrated. + */ + if (folio_use_access_time(folio)) + return true; this_cpupid = cpu_pid_to_cpupid(dst_cpu, current->pid); last_cpupid = folio_xchg_last_cpupid(folio, this_cpupid); diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 40cf59301c21..f52587e70b3c 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -40,6 +40,7 @@ #include <linux/pgalloc.h> #include <linux/pgalloc_tag.h> #include <linux/pagewalk.h> +#include <linux/pghot.h> #include <asm/tlb.h> #include "internal.h" @@ -2217,29 +2218,12 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf) target_nid = numa_migrate_check(folio, vmf, haddr, &flags, writable, &last_cpupid); + nid = target_nid; if (target_nid == NUMA_NO_NODE) goto out_map; - if (migrate_misplaced_folio_prepare(folio, vma, target_nid)) { - flags |= TNF_MIGRATE_FAIL; - goto out_map; - } - /* The folio is isolated and isolation code holds a folio reference. 
*/ - spin_unlock(vmf->ptl); - writable = false; - if (!migrate_misplaced_folio(folio, target_nid)) { - flags |= TNF_MIGRATED; - nid = target_nid; - task_numa_fault(last_cpupid, nid, HPAGE_PMD_NR, flags); - return 0; - } + writable = false; - flags |= TNF_MIGRATE_FAIL; - vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd); - if (unlikely(!pmd_same(pmdp_get(vmf->pmd), vmf->orig_pmd))) { - spin_unlock(vmf->ptl); - return 0; - } out_map: /* Restore the PMD */ pmd = pmd_modify(pmdp_get(vmf->pmd), vma->vm_page_prot); @@ -2250,8 +2234,10 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf) update_mmu_cache_pmd(vma, vmf->address, vmf->pmd); spin_unlock(vmf->ptl); - if (nid != NUMA_NO_NODE) + if (nid != NUMA_NO_NODE) { + pghot_record_access(folio_pfn(folio), nid, PGHOT_HINT_FAULT, jiffies); task_numa_fault(last_cpupid, nid, HPAGE_PMD_NR, flags); + } return 0; } diff --git a/mm/memory.c b/mm/memory.c index 2a55edc48a65..98a9a3b675a0 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -75,6 +75,7 @@ #include <linux/perf_event.h> #include <linux/ptrace.h> #include <linux/vmalloc.h> +#include <linux/pghot.h> #include <linux/sched/sysctl.h> #include <linux/pgalloc.h> #include <linux/uaccess.h> @@ -6046,34 +6047,12 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf) target_nid = numa_migrate_check(folio, vmf, vmf->address, &flags, writable, &last_cpupid); + nid = target_nid; if (target_nid == NUMA_NO_NODE) goto out_map; - if (migrate_misplaced_folio_prepare(folio, vma, target_nid)) { - flags |= TNF_MIGRATE_FAIL; - goto out_map; - } - /* The folio is isolated and isolation code holds a folio reference. 
*/ - pte_unmap_unlock(vmf->pte, vmf->ptl); + writable = false; ignore_writable = true; - - /* Migrate to the requested node */ - if (!migrate_misplaced_folio(folio, target_nid)) { - nid = target_nid; - flags |= TNF_MIGRATED; - task_numa_fault(last_cpupid, nid, nr_pages, flags); - return 0; - } - - flags |= TNF_MIGRATE_FAIL; - vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, - vmf->address, &vmf->ptl); - if (unlikely(!vmf->pte)) - return 0; - if (unlikely(!pte_same(ptep_get(vmf->pte), vmf->orig_pte))) { - pte_unmap_unlock(vmf->pte, vmf->ptl); - return 0; - } out_map: /* * Make it present again, depending on how arch implements @@ -6087,8 +6066,10 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf) writable); pte_unmap_unlock(vmf->pte, vmf->ptl); - if (nid != NUMA_NO_NODE) + if (nid != NUMA_NO_NODE) { + pghot_record_access(folio_pfn(folio), nid, PGHOT_HINT_FAULT, jiffies); task_numa_fault(last_cpupid, nid, nr_pages, flags); + } return 0; } diff --git a/mm/pghot.c b/mm/pghot.c index bf1d9029cbaa..6fc76c1eaff8 100644 --- a/mm/pghot.c +++ b/mm/pghot.c @@ -17,6 +17,9 @@ * the hot pages. kmigrated runs for each lower tier node. It iterates * over the node's PFNs and migrates pages marked for migration into * their targeted nodes. + * + * Migration rate-limiting and dynamic threshold logic implementations + * were moved from NUMA Balancing mode 2. */ #include <linux/mm.h> #include <linux/migrate.h> @@ -31,6 +34,12 @@ unsigned int kmigrated_batch_nr = KMIGRATED_DEFAULT_BATCH_NR; unsigned int sysctl_pghot_freq_window = PGHOT_DEFAULT_FREQ_WINDOW; +/* Restrict the NUMA promotion throughput (MB/s) for each target node. 
*/ +static unsigned int sysctl_pghot_promote_rate_limit = 65536; + +#define KMIGRATED_MIGRATION_ADJUST_STEPS 16 +#define KMIGRATED_PROMOTION_THRESHOLD_WINDOW 60000 + DEFINE_STATIC_KEY_FALSE(pghot_src_hwhints); DEFINE_STATIC_KEY_FALSE(pghot_src_pgtscans); DEFINE_STATIC_KEY_FALSE(pghot_src_hintfaults); @@ -45,6 +54,14 @@ static const struct ctl_table pghot_sysctls[] = { .proc_handler = proc_dointvec_minmax, .extra1 = SYSCTL_ZERO, }, + { + .procname = "pghot_promote_rate_limit_MBps", + .data = &sysctl_pghot_promote_rate_limit, + .maxlen = sizeof(unsigned int), + .mode = 0644, + .proc_handler = proc_dointvec_minmax, + .extra1 = SYSCTL_ZERO, + }, }; #endif @@ -138,6 +155,110 @@ int pghot_record_access(unsigned long pfn, int nid, int src, unsigned long now) return 0; } +/* + * For memory tiering mode, if there are enough free pages (more than + * enough watermark defined here) in fast memory node, to take full + * advantage of fast memory capacity, all recently accessed slow + * memory pages will be migrated to fast memory node without + * considering hot threshold. + */ +static bool pgdat_free_space_enough(struct pglist_data *pgdat) +{ + int z; + unsigned long enough_wmark; + + enough_wmark = max(1UL * 1024 * 1024 * 1024 >> PAGE_SHIFT, + pgdat->node_present_pages >> 4); + for (z = pgdat->nr_zones - 1; z >= 0; z--) { + struct zone *zone = pgdat->node_zones + z; + + if (!populated_zone(zone)) + continue; + + if (zone_watermark_ok(zone, 0, + promo_wmark_pages(zone) + enough_wmark, + ZONE_MOVABLE, 0)) + return true; + } + return false; +} + +/* + * For memory tiering mode, too high promotion/demotion throughput may + * hurt application latency. So we provide a mechanism to rate limit + * the number of pages that are tried to be promoted. 
+ */ +static bool kmigrated_promotion_rate_limit(struct pglist_data *pgdat, unsigned long rate_limit, + int nr, unsigned long now_ms) +{ + unsigned long nr_cand; + unsigned int start; + + mod_node_page_state(pgdat, PGPROMOTE_CANDIDATE, nr); + nr_cand = node_page_state(pgdat, PGPROMOTE_CANDIDATE); + start = pgdat->nbp_rl_start; + if (now_ms - start > MSEC_PER_SEC && + cmpxchg(&pgdat->nbp_rl_start, start, now_ms) == start) + pgdat->nbp_rl_nr_cand = nr_cand; + if (nr_cand - pgdat->nbp_rl_nr_cand >= rate_limit) + return true; + return false; +} + +static void kmigrated_promotion_adjust_threshold(struct pglist_data *pgdat, + unsigned long rate_limit, unsigned int ref_th, + unsigned long now_ms) +{ + unsigned int start, th_period, unit_th, th; + unsigned long nr_cand, ref_cand, diff_cand; + + th_period = KMIGRATED_PROMOTION_THRESHOLD_WINDOW; + start = pgdat->nbp_th_start; + if (now_ms - start > th_period && + cmpxchg(&pgdat->nbp_th_start, start, now_ms) == start) { + ref_cand = rate_limit * + KMIGRATED_PROMOTION_THRESHOLD_WINDOW / MSEC_PER_SEC; + nr_cand = node_page_state(pgdat, PGPROMOTE_CANDIDATE); + diff_cand = nr_cand - pgdat->nbp_th_nr_cand; + unit_th = ref_th * 2 / KMIGRATED_MIGRATION_ADJUST_STEPS; + th = pgdat->nbp_threshold ? 
: ref_th; + if (diff_cand > ref_cand * 11 / 10) + th = max(th - unit_th, unit_th); + else if (diff_cand < ref_cand * 9 / 10) + th = min(th + unit_th, ref_th * 2); + pgdat->nbp_th_nr_cand = nr_cand; + pgdat->nbp_threshold = th; + } +} + +static bool kmigrated_should_migrate_memory(unsigned long nr_pages, int nid, + unsigned long time) +{ + struct pglist_data *pgdat; + unsigned long rate_limit; + unsigned int th, def_th; + unsigned long now_ms = jiffies_to_msecs(jiffies); /* Based on full-width jiffies */ + unsigned long now = jiffies; + + pgdat = NODE_DATA(nid); + if (pgdat_free_space_enough(pgdat)) { + /* workload changed, reset hot threshold */ + pgdat->nbp_threshold = 0; + mod_node_page_state(pgdat, PGPROMOTE_CANDIDATE_NRL, nr_pages); + return true; + } + + def_th = sysctl_pghot_freq_window; + rate_limit = MB_TO_PAGES(sysctl_pghot_promote_rate_limit); + kmigrated_promotion_adjust_threshold(pgdat, rate_limit, def_th, now_ms); + + th = pgdat->nbp_threshold ? : def_th; + if (pghot_access_latency(time, now) >= th) + return false; + + return !kmigrated_promotion_rate_limit(pgdat, rate_limit, nr_pages, now_ms); +} + static int pghot_get_hotness(unsigned long pfn, int *nid, int *freq, unsigned long *time) { @@ -197,6 +318,9 @@ static void kmigrated_walk_zone(unsigned long start_pfn, unsigned long end_pfn, if (folio_nid(folio) == nid) goto out_next; + if (!kmigrated_should_migrate_memory(nr, nid, time)) + goto out_next; + if (migrate_misplaced_folio_prepare(folio, NULL, nid)) goto out_next; -- 2.34.1
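The dynamic-threshold logic this patch moves into kmigrated_promotion_adjust_threshold() can be exercised in isolation. Below is a minimal userspace C sketch of one adaptation step (the function name and MAX/MIN helpers are illustrative, not the kernel's): per threshold window, if the promotion candidates seen exceed ~110% of what the rate limit allows, the hot threshold is tightened by one step; below ~90%, it is relaxed, clamped between one step and twice the default.

```c
#include <assert.h>

#define ADJUST_STEPS 16
#define MAX(a, b) ((a) > (b) ? (a) : (b))
#define MIN(a, b) ((a) < (b) ? (a) : (b))

/*
 * One adaptation step, mirroring the moved kernel logic:
 *   diff_cand - promotion candidates seen in the last window
 *   ref_cand  - candidates the rate limit would allow in that window
 *   ref_th    - default threshold (ms); th - current threshold (0 = unset)
 */
static unsigned int adjust_threshold(unsigned int th, unsigned int ref_th,
				     unsigned long diff_cand,
				     unsigned long ref_cand)
{
	unsigned int unit_th = ref_th * 2 / ADJUST_STEPS;

	if (!th)			/* first window: start at the default */
		th = ref_th;
	if (diff_cand > ref_cand * 11 / 10)
		th = MAX(th - unit_th, unit_th);	/* too many: tighten */
	else if (diff_cand < ref_cand * 9 / 10)
		th = MIN(th + unit_th, ref_th * 2);	/* too few: relax */
	return th;
}
```

Note the only behavioral change from the fair.c version is the window length: KMIGRATED_PROMOTION_THRESHOLD_WINDOW (60s) replaces sysctl_numa_balancing_scan_period_max.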
From: Bharata B Rao <bharata@amd.com>
Date: Thu, 29 Jan 2026 20:10:38 +0530
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotion (pghot) with the following goals: - Unify hot page detection from multiple sources like hint faults, page table scans and hardware hints (AMD IBS). - Decouple detection from migration. - Centralize promotion logic via a per-lower-tier-node kmigrated kernel thread. - Move promotion rate-limiting and related logic used by numa_balancing=2 (current NUMA-balancing-based promotion) from the scheduler to pghot for broader reuse. Currently, multiple kernel subsystems detect page accesses independently. This patchset consolidates accesses from these mechanisms by providing: - A common API for reporting page accesses. - Shared infrastructure for tracking hotness at PFN granularity. - Per-lower-tier-node kernel threads for promoting pages. Here is a brief summary of how this subsystem works: - Tracks frequency and last access time. - Additionally, the accessing NUMA node ID (NID) for each recorded access is tracked in precision mode. - These hotness parameters are maintained in a per-PFN hotness record within the existing mem_section data structure. - In default mode, one byte (u8) is used for the hotness record. 5 bits are used to store time, and a bucketing scheme is used to represent a total access time of up to 4s with HZ=1000. The default toptier NID (0) is used as the promotion target, which can be changed via a debugfs tunable. - In precision mode, 4 bytes (u32) are used for each hotness record. 14 bits are used to store time, which can represent around 16s with HZ=1000. - Classifies pages as hot based on configurable thresholds. - Pages classified as hot are marked as ready for migration using the ready bit. Both modes use the MSB of the hotness record as the ready bit.
- Per-lower-tier-node kmigrated threads periodically scan the PFNs of lower-tier nodes, checking for the migration-ready bit to perform batched migrations. The interval between successive scans and the migration batch size are configurable via debugfs tunables. Memory overhead --------------- Default mode: 1 byte per lower-tier PFN. For 1TB of lower-tier memory this amounts to 256MB of overhead (assuming 4K pages). Precision mode: 4 bytes per lower-tier PFN. For 1TB of lower-tier memory this amounts to 1GB of overhead. Bit layout of hotness record ---------------------------- Default mode - Bits 0-1: Frequency (2 bits, 4 access samples) - Bits 2-6: Bucketed time (5 bits, up to 4s with HZ=1000) - Bit 7: Migration ready bit Precision mode - Bits 0-9: Target NID (10 bits) - Bits 10-12: Frequency (3 bits, 8 access samples) - Bits 13-26: Time (14 bits, up to 16s with HZ=1000) - Bits 27-30: Reserved - Bit 31: Migration ready bit Integrated sources ------------------ 1. IBS - Instruction Based Sampling, a hardware-based sampling mechanism present on AMD CPUs. 2. klruscand - PTE A-bit scanning built on MGLRU's walk helpers. 3. NUMA Balancing (Tiering mode) 4. folio_mark_accessed() - Page cache access tracking (unmapped page cache pages) Changes in v5 ============= - Significant reduction in memory overhead for storing per-PFN hotness data. - Two modes of operation (default and precision mode). The code specific to each implementation has been moved to its own file. - Many bug fixes, code cleanups and code reorganization. Results ======= TODO: Will post benchmark numbers as a reply to this patchset soon.
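To make the default-mode layout above concrete, here is a hypothetical userspace C sketch of packing and unpacking the 1-byte hotness record (helper and macro names are invented for illustration; the patchset's actual accessors live in mm/pghot-default.c and may differ):

```c
#include <assert.h>
#include <stdint.h>

#define FREQ_MASK   0x03u	/* bits 0-1: access frequency   */
#define TIME_SHIFT  2
#define TIME_MASK   0x1Fu	/* bits 2-6: bucketed time      */
#define READY_BIT   0x80u	/* bit 7: migration-ready       */

/* Pack frequency, bucketed time and the ready bit into one byte. */
static uint8_t pack(unsigned freq, unsigned bucket, int ready)
{
	return (uint8_t)((freq & FREQ_MASK) |
			 ((bucket & TIME_MASK) << TIME_SHIFT) |
			 (ready ? READY_BIT : 0));
}

static unsigned unpack_freq(uint8_t rec)   { return rec & FREQ_MASK; }
static unsigned unpack_bucket(uint8_t rec) { return (rec >> TIME_SHIFT) & TIME_MASK; }
static int      unpack_ready(uint8_t rec)  { return !!(rec & READY_BIT); }
```

The precision-mode u32 record follows the same idea with wider fields (10-bit NID, 3-bit frequency, 14-bit time, MSB as the ready bit).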
This v5 patchset applies on top of upstream commit 4941a17751c9 and can be fetched from: https://github.com/AMDESE/linux-mm/tree/bharata/pghot-rfcv5 v4: https://lore.kernel.org/linux-mm/20251206101423.5004-1-bharata@amd.com/ v3: https://lore.kernel.org/linux-mm/20251110052343.208768-1-bharata@amd.com/ v2: https://lore.kernel.org/linux-mm/20250910144653.212066-1-bharata@amd.com/ v1: https://lore.kernel.org/linux-mm/20250814134826.154003-1-bharata@amd.com/ v0: https://lore.kernel.org/linux-mm/20250306054532.221138-1-bharata@amd.com/ Bharata B Rao (7): mm: migrate: Allow misplaced migration without VMA mm: Hot page tracking and promotion mm: pghot: Precision mode for pghot mm: sched: move NUMA balancing tiering promotion to pghot x86: ibs: In-kernel IBS driver for memory access profiling x86: ibs: Enable IBS profiling for memory accesses mm: pghot: Add folio_mark_accessed() as hotness source Gregory Price (1): migrate: Add migrate_misplaced_folios_batch() Kinsey Ho (2): mm: mglru: generalize page table walk mm: klruscand: use mglru scanning for page promotion Documentation/admin-guide/mm/pghot.txt | 89 +++++ arch/x86/events/amd/ibs.c | 10 + arch/x86/include/asm/entry-common.h | 3 + arch/x86/include/asm/hardirq.h | 2 + arch/x86/include/asm/msr-index.h | 16 + arch/x86/mm/Makefile | 1 + arch/x86/mm/ibs.c | 349 +++++++++++++++++ include/linux/migrate.h | 6 + include/linux/mmzone.h | 26 ++ include/linux/pghot.h | 142 +++++++ include/linux/vm_event_item.h | 26 ++ kernel/sched/debug.c | 1 - kernel/sched/fair.c | 152 +------- mm/Kconfig | 46 +++ mm/Makefile | 7 + mm/huge_memory.c | 26 +- mm/internal.h | 4 + mm/klruscand.c | 110 ++++++ mm/memory.c | 31 +- mm/migrate.c | 41 +- mm/mm_init.c | 10 + mm/pghot-default.c | 73 ++++ mm/pghot-precise.c | 70 ++++ mm/pghot-tunables.c | 196 ++++++++++ mm/pghot.c | 505 +++++++++++++++++++++++++ mm/swap.c | 8 + mm/vmscan.c | 181 ++++++--- mm/vmstat.c | 26 ++ 28 files changed, 1917 insertions(+), 240 deletions(-) create mode 100644 
Documentation/admin-guide/mm/pghot.txt create mode 100644 arch/x86/mm/ibs.c create mode 100644 include/linux/pghot.h create mode 100644 mm/klruscand.c create mode 100644 mm/pghot-default.c create mode 100644 mm/pghot-precise.c create mode 100644 mm/pghot-tunables.c create mode 100644 mm/pghot.c -- 2.34.1
[RFC PATCH v5] x86: ibs: In-kernel IBS driver for memory access profiling
Use the IBS (Instruction Based Sampling) feature present in AMD processors for memory access tracking. The access information obtained from IBS via NMI is fed to the pghot sub-system for further action. In addition to much other information related to the memory access, IBS provides the physical (and virtual) address of the access and indicates whether the access came from a slower tier. Only memory accesses originating from slower tiers are further acted upon by this driver. The samples are initially accumulated in percpu buffers, which are flushed to the pghot hot page tracking mechanism using irq_work. TODO: Many counters are added to vmstat just as a debugging aid for now. About IBS --------- IBS can be programmed to provide data about instruction execution periodically. This is done by programming a desired sample count (number of ops) in a control register. When the programmed number of ops are dispatched, a micro-op gets tagged, various information about the tagged micro-op's execution is populated in IBS execution MSRs and an interrupt is raised. While IBS provides a lot of data for each sample, for the purpose of memory access profiling, we are interested in the linear and physical addresses of the memory access that reached DRAM. Recent AMD processors provide further filtering where it is possible to limit the sampling to those ops that had an L3 miss, which greatly reduces the number of non-useful samples. While IBS provides the capability to sample both instruction fetch and execution, only IBS execution sampling is used here to collect data about memory accesses that occur during instruction execution.
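The percpu sample buffer described above is a single-producer ring that keeps one slot free to distinguish full from empty, so a 150-entry buffer holds 149 samples, and a full buffer drops the sample rather than blocking the NMI path. A standalone model (names simplified; the modulo wrap is written with `%` where the driver uses an if):

```c
#include <assert.h>

#define NR_SAMPLES 4	/* small for illustration; the driver uses 150 */

struct sample { unsigned long pfn; };

struct ring {
	struct sample s[NR_SAMPLES];
	int head, tail;	/* head: producer (NMI), tail: consumer (work) */
};

/* Push from the NMI side; returns 0 when the ring is full (sample dropped). */
static int push(struct ring *r, unsigned long pfn)
{
	int next = (r->head + 1) % NR_SAMPLES;

	if (next == r->tail)
		return 0;
	r->s[r->head].pfn = pfn;
	r->head = next;
	return 1;
}

/* Pop from the workqueue side; returns 0 when the ring is empty. */
static int pop(struct ring *r, struct sample *out)
{
	if (r->head == r->tail)
		return 0;
	*out = r->s[r->tail];
	r->tail = (r->tail + 1) % NR_SAMPLES;
	return 1;
}
```

Dropping on overflow (counted as HWHINT_BUFFER_FULL) is the right trade-off here: a lost sample only delays hotness detection, while blocking in NMI context is not an option.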
More information about IBS is available in Sec 13.3 of AMD64 Architecture Programmer's Manual, Volume 2: System Programming, which is present at: https://bugzilla.kernel.org/attachment.cgi?id=288923 Information about MSRs used for programming IBS can be found in Sec 2.1.14.4 of PPR Vol 1 for AMD Family 19h Model 11h B1, which is currently present at: https://www.amd.com/system/files/TechDocs/55901_0.25.zip Signed-off-by: Bharata B Rao <bharata@amd.com> --- arch/x86/events/amd/ibs.c | 10 + arch/x86/include/asm/msr-index.h | 16 ++ arch/x86/mm/Makefile | 1 + arch/x86/mm/ibs.c | 317 +++++++++++++++++++++++++++++++ include/linux/pghot.h | 8 + include/linux/vm_event_item.h | 19 ++ mm/Kconfig | 13 ++ mm/vmstat.c | 19 ++ 8 files changed, 403 insertions(+) create mode 100644 arch/x86/mm/ibs.c diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c index aca89f23d2e0..dc544d084c17 100644 --- a/arch/x86/events/amd/ibs.c +++ b/arch/x86/events/amd/ibs.c @@ -13,6 +13,7 @@ #include <linux/ptrace.h> #include <linux/syscore_ops.h> #include <linux/sched/clock.h> +#include <linux/pghot.h> #include <asm/apic.h> #include <asm/msr.h> @@ -1760,6 +1761,15 @@ static __init int amd_ibs_init(void) { u32 caps; + /* + * TODO: Find a clean way to disable perf IBS so that IBS + * can be used for memory access profiling.
+ */ + if (hwmem_access_profiler_inuse()) { + pr_info("IBS isn't available for perf use\n"); + return 0; + } + caps = __get_ibs_caps(); if (!caps) return -ENODEV; /* ibs not supported by the cpu */ diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h index 3d0a0950d20a..3c5d69ec83a2 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -784,6 +784,22 @@ /* AMD Last Branch Record MSRs */ #define MSR_AMD64_LBR_SELECT 0xc000010e +/* AMD IBS MSR bits */ +#define MSR_AMD64_IBSOPDATA2_DATASRC 0x7 +#define MSR_AMD64_IBSOPDATA2_DATASRC_LCL_CACHE 0x1 +#define MSR_AMD64_IBSOPDATA2_DATASRC_PEER_CACHE_NEAR 0x2 +#define MSR_AMD64_IBSOPDATA2_DATASRC_DRAM 0x3 +#define MSR_AMD64_IBSOPDATA2_DATASRC_FAR_CCX_CACHE 0x5 +#define MSR_AMD64_IBSOPDATA2_DATASRC_EXT_MEM 0x8 +#define MSR_AMD64_IBSOPDATA2_RMTNODE 0x10 + +#define MSR_AMD64_IBSOPDATA3_LDOP BIT_ULL(0) +#define MSR_AMD64_IBSOPDATA3_STOP BIT_ULL(1) +#define MSR_AMD64_IBSOPDATA3_DCMISS BIT_ULL(7) +#define MSR_AMD64_IBSOPDATA3_LADDR_VALID BIT_ULL(17) +#define MSR_AMD64_IBSOPDATA3_PADDR_VALID BIT_ULL(18) +#define MSR_AMD64_IBSOPDATA3_L2MISS BIT_ULL(20) + /* Zen4 */ #define MSR_ZEN4_BP_CFG 0xc001102e #define MSR_ZEN4_BP_CFG_BP_SPEC_REDUCE_BIT 4 diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile index 5b9908f13dcf..361a456582e9 100644 --- a/arch/x86/mm/Makefile +++ b/arch/x86/mm/Makefile @@ -57,3 +57,4 @@ obj-$(CONFIG_X86_MEM_ENCRYPT) += mem_encrypt.o obj-$(CONFIG_AMD_MEM_ENCRYPT) += mem_encrypt_amd.o obj-$(CONFIG_AMD_MEM_ENCRYPT) += mem_encrypt_boot.o +obj-$(CONFIG_HWMEM_PROFILER) += ibs.o diff --git a/arch/x86/mm/ibs.c b/arch/x86/mm/ibs.c new file mode 100644 index 000000000000..752f688375f9 --- /dev/null +++ b/arch/x86/mm/ibs.c @@ -0,0 +1,317 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include <linux/init.h> +#include <linux/pghot.h> +#include <linux/percpu.h> +#include <linux/workqueue.h> +#include <linux/irq_work.h> + +#include <asm/nmi.h> +#include 
<asm/perf_event.h> /* TODO: Move defns like IBS_OP_ENABLE into non-perf header */ +#include <asm/apic.h> + +bool hwmem_access_profiling; + +static u64 ibs_config __read_mostly; +static u32 ibs_caps; + +#define IBS_NR_SAMPLES 150 + +/* + * Basic access info captured for each memory access. + */ +struct ibs_sample { + unsigned long pfn; + unsigned long time; /* jiffies when accessed */ + int nid; /* Accessing node ID, if known */ +}; + +/* + * Percpu buffer of access samples. Samples are accumulated here + * before pushing them to pghot sub-system for further action. + */ +struct ibs_sample_pcpu { + struct ibs_sample samples[IBS_NR_SAMPLES]; + int head, tail; +}; + +struct ibs_sample_pcpu __percpu *ibs_s; + +/* + * The workqueue for pushing the percpu access samples to pghot sub-system. + */ +static struct work_struct ibs_work; +static struct irq_work ibs_irq_work; + +bool hwmem_access_profiler_inuse(void) +{ + return hwmem_access_profiling; +} + +/* + * Record the IBS-reported access sample in percpu buffer. + * Called from IBS NMI handler. + */ +static int ibs_push_sample(unsigned long pfn, int nid, unsigned long time) +{ + struct ibs_sample_pcpu *ibs_pcpu = raw_cpu_ptr(ibs_s); + int next = ibs_pcpu->head + 1; + + if (next >= IBS_NR_SAMPLES) + next = 0; + + if (next == ibs_pcpu->tail) + return 0; + + ibs_pcpu->samples[ibs_pcpu->head].pfn = pfn; + ibs_pcpu->samples[ibs_pcpu->head].time = time; + ibs_pcpu->samples[ibs_pcpu->head].nid = nid; + ibs_pcpu->head = next; + return 1; +} + +static int ibs_pop_sample(struct ibs_sample *s) +{ + struct ibs_sample_pcpu *ibs_pcpu = raw_cpu_ptr(ibs_s); + + int next = ibs_pcpu->tail + 1; + + if (ibs_pcpu->head == ibs_pcpu->tail) + return 0; + + if (next >= IBS_NR_SAMPLES) + next = 0; + + *s = ibs_pcpu->samples[ibs_pcpu->tail]; + ibs_pcpu->tail = next; + return 1; +} + +/* + * Remove access samples from percpu buffer and send them + * to pghot sub-system for further action. 
+ */ +static void ibs_work_handler(struct work_struct *work) +{ + struct ibs_sample s; + + while (ibs_pop_sample(&s)) + pghot_record_access(s.pfn, s.nid, PGHOT_HW_HINTS, s.time); +} + +static void ibs_irq_handler(struct irq_work *i) +{ + schedule_work_on(smp_processor_id(), &ibs_work); +} + +/* + * IBS NMI handler: Process the memory access info reported by IBS. + * + * Reads the MSRs to collect all the information about the reported + * memory access, validates the access, stores the valid sample and + * schedules the work on this CPU to further process the sample. + */ +static int ibs_overflow_handler(unsigned int cmd, struct pt_regs *regs) +{ + struct mm_struct *mm = current->mm; + u64 ops_ctl, ops_data3, ops_data2; + u64 laddr = -1, paddr = -1; + u64 data_src, rmt_node; + struct page *page; + unsigned long pfn; + + rdmsrl(MSR_AMD64_IBSOPCTL, ops_ctl); + + /* + * When IBS sampling period is reprogrammed via read-modify-update + * of MSR_AMD64_IBSOPCTL, overflow NMIs could be generated with + * IBS_OP_ENABLE not set. For such cases, return as HANDLED. + * + * With this, the handler will say "handled" for all NMIs that + * aren't related to this NMI. This stems from the limitation of + * having both status and control bits in one MSR. + */ + if (!(ops_ctl & IBS_OP_VAL)) + goto handled; + + wrmsrl(MSR_AMD64_IBSOPCTL, ops_ctl & ~IBS_OP_VAL); + + count_vm_event(HWHINT_NR_EVENTS); + + if (!user_mode(regs)) { + count_vm_event(HWHINT_KERNEL); + goto handled; + } + + if (!mm) { + count_vm_event(HWHINT_KTHREAD); + goto handled; + } + + rdmsrl(MSR_AMD64_IBSOPDATA3, ops_data3); + + /* Load/Store ops only */ + /* TODO: DataSrc isn't valid for stores, so filter out stores? 
*/ + if (!(ops_data3 & (MSR_AMD64_IBSOPDATA3_LDOP | + MSR_AMD64_IBSOPDATA3_STOP))) { + count_vm_event(HWHINT_NON_LOAD_STORES); + goto handled; + } + + /* Discard the sample if it was L1 or L2 hit */ + if (!(ops_data3 & (MSR_AMD64_IBSOPDATA3_DCMISS | + MSR_AMD64_IBSOPDATA3_L2MISS))) { + count_vm_event(HWHINT_DC_L2_HITS); + goto handled; + } + + rdmsrl(MSR_AMD64_IBSOPDATA2, ops_data2); + data_src = ops_data2 & MSR_AMD64_IBSOPDATA2_DATASRC; + if (ibs_caps & IBS_CAPS_ZEN4) + data_src |= ((ops_data2 & 0xC0) >> 3); + + switch (data_src) { + case MSR_AMD64_IBSOPDATA2_DATASRC_LCL_CACHE: + count_vm_event(HWHINT_LOCAL_L3L1L2); + break; + case MSR_AMD64_IBSOPDATA2_DATASRC_PEER_CACHE_NEAR: + count_vm_event(HWHINT_LOCAL_PEER_CACHE_NEAR); + break; + case MSR_AMD64_IBSOPDATA2_DATASRC_DRAM: + count_vm_event(HWHINT_DRAM_ACCESSES); + break; + case MSR_AMD64_IBSOPDATA2_DATASRC_EXT_MEM: + count_vm_event(HWHINT_CXL_ACCESSES); + break; + case MSR_AMD64_IBSOPDATA2_DATASRC_FAR_CCX_CACHE: + count_vm_event(HWHINT_FAR_CACHE_HITS); + break; + } + + rmt_node = ops_data2 & MSR_AMD64_IBSOPDATA2_RMTNODE; + if (rmt_node) + count_vm_event(HWHINT_REMOTE_NODE); + + /* Is linear addr valid? */ + if (ops_data3 & MSR_AMD64_IBSOPDATA3_LADDR_VALID) + rdmsrl(MSR_AMD64_IBSDCLINAD, laddr); + else { + count_vm_event(HWHINT_LADDR_INVALID); + goto handled; + } + + /* Discard kernel address accesses */ + if (laddr & (1UL << 63)) { + count_vm_event(HWHINT_KERNEL_ADDR); + goto handled; + } + + /* Is phys addr valid? 
*/ + if (ops_data3 & MSR_AMD64_IBSOPDATA3_PADDR_VALID) + rdmsrl(MSR_AMD64_IBSDCPHYSAD, paddr); + else { + count_vm_event(HWHINT_PADDR_INVALID); + goto handled; + } + + pfn = PHYS_PFN(paddr); + page = pfn_to_online_page(pfn); + if (!page) + goto handled; + + if (!PageLRU(page)) { + count_vm_event(HWHINT_NON_LRU); + goto handled; + } + + if (!ibs_push_sample(pfn, numa_node_id(), jiffies)) { + count_vm_event(HWHINT_BUFFER_FULL); + goto handled; + } + + irq_work_queue(&ibs_irq_work); + count_vm_event(HWHINT_USEFUL_SAMPLES); + +handled: + return NMI_HANDLED; +} + +static inline int get_ibs_lvt_offset(void) +{ + u64 val; + + rdmsrl(MSR_AMD64_IBSCTL, val); + if (!(val & IBSCTL_LVT_OFFSET_VALID)) + return -EINVAL; + + return val & IBSCTL_LVT_OFFSET_MASK; +} + +static void setup_APIC_ibs(void) +{ + int offset; + + offset = get_ibs_lvt_offset(); + if (offset < 0) + goto failed; + + if (!setup_APIC_eilvt(offset, 0, APIC_EILVT_MSG_NMI, 0)) + return; +failed: + pr_warn("IBS APIC setup failed on cpu #%d\n", + smp_processor_id()); +} + +static void clear_APIC_ibs(void) +{ + int offset; + + offset = get_ibs_lvt_offset(); + if (offset >= 0) + setup_APIC_eilvt(offset, 0, APIC_EILVT_MSG_FIX, 1); +} + +static int x86_amd_ibs_access_profile_startup(unsigned int cpu) +{ + setup_APIC_ibs(); + return 0; +} + +static int x86_amd_ibs_access_profile_teardown(unsigned int cpu) +{ + clear_APIC_ibs(); + return 0; +} + +static int __init ibs_access_profiling_init(void) +{ + if (!boot_cpu_has(X86_FEATURE_IBS)) { + pr_info("IBS capability is unavailable for access profiling\n"); + return 0; + } + + ibs_s = alloc_percpu_gfp(struct ibs_sample_pcpu, GFP_KERNEL | __GFP_ZERO); + if (!ibs_s) + return 0; + + INIT_WORK(&ibs_work, ibs_work_handler); + init_irq_work(&ibs_irq_work, ibs_irq_handler); + + /* Uses IBS Op sampling */ + ibs_config = IBS_OP_CNT_CTL | IBS_OP_ENABLE; + ibs_caps = cpuid_eax(IBS_CPUID_FEATURES); + if (ibs_caps & IBS_CAPS_ZEN4) + ibs_config |= IBS_OP_L3MISSONLY; + + 
register_nmi_handler(NMI_LOCAL, ibs_overflow_handler, 0, "ibs"); + + cpuhp_setup_state(CPUHP_AP_PERF_X86_AMD_IBS_STARTING, + "x86/amd/ibs_access_profile:starting", + x86_amd_ibs_access_profile_startup, + x86_amd_ibs_access_profile_teardown); + + pr_info("IBS setup for memory access profiling\n"); + return 0; +} + +arch_initcall(ibs_access_profiling_init); diff --git a/include/linux/pghot.h b/include/linux/pghot.h index d3d59b0c0cf6..20ea9767dbdd 100644 --- a/include/linux/pghot.h +++ b/include/linux/pghot.h @@ -2,6 +2,14 @@ #ifndef _LINUX_PGHOT_H #define _LINUX_PGHOT_H +#include <linux/types.h> + +#ifdef CONFIG_HWMEM_PROFILER +bool hwmem_access_profiler_inuse(void); +#else +static inline bool hwmem_access_profiler_inuse(void) { return false; } +#endif + /* Page hotness temperature sources */ enum pghot_src { PGHOT_HW_HINTS, diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h index 5b8fd93b55fd..67efbca9051c 100644 --- a/include/linux/vm_event_item.h +++ b/include/linux/vm_event_item.h @@ -193,6 +193,25 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT, PGHOT_RECORD_HWHINTS, PGHOT_RECORD_PGTSCANS, PGHOT_RECORD_HINTFAULTS, +#ifdef CONFIG_HWMEM_PROFILER + HWHINT_NR_EVENTS, + HWHINT_KERNEL, + HWHINT_KTHREAD, + HWHINT_NON_LOAD_STORES, + HWHINT_DC_L2_HITS, + HWHINT_LOCAL_L3L1L2, + HWHINT_LOCAL_PEER_CACHE_NEAR, + HWHINT_FAR_CACHE_HITS, + HWHINT_DRAM_ACCESSES, + HWHINT_CXL_ACCESSES, + HWHINT_REMOTE_NODE, + HWHINT_LADDR_INVALID, + HWHINT_KERNEL_ADDR, + HWHINT_PADDR_INVALID, + HWHINT_NON_LRU, + HWHINT_BUFFER_FULL, + HWHINT_USEFUL_SAMPLES, +#endif /* CONFIG_HWMEM_PROFILER */ #endif /* CONFIG_PGHOT */ NR_VM_EVENT_ITEMS }; diff --git a/mm/Kconfig b/mm/Kconfig index fde5aee3e16f..07b16aece877 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -1489,6 +1489,19 @@ config PGHOT_PRECISE 4 bytes per page against the default one byte per page. Preferable to enable this on systems with multiple nodes in toptier. 
+config HWMEM_PROFILER + bool "HW based memory access profiling" + default n + depends on PGHOT + depends on X86_64 + help + Some hardware platforms are capable of providing memory access + information in a direct and actionable manner. Instruction Based + Sampling (IBS) present on AMD Zen CPUs is one such example. + Memory accesses obtained via such HW based mechanisms are + rolled up to the PGHOT sub-system for further action like hot page + promotion or NUMA balancing. + source "mm/damon/Kconfig" endmenu diff --git a/mm/vmstat.c b/mm/vmstat.c index f6f91b9dd887..62c47f44edf0 100644 --- a/mm/vmstat.c +++ b/mm/vmstat.c @@ -1506,6 +1506,25 @@ const char * const vmstat_text[] = { [I(PGHOT_RECORD_HWHINTS)] = "pghot_recorded_hwhints", [I(PGHOT_RECORD_PGTSCANS)] = "pghot_recorded_pgtscans", [I(PGHOT_RECORD_HINTFAULTS)] = "pghot_recorded_hintfaults", +#ifdef CONFIG_HWMEM_PROFILER + [I(HWHINT_NR_EVENTS)] = "hwhint_nr_events", + [I(HWHINT_KERNEL)] = "hwhint_kernel", + [I(HWHINT_KTHREAD)] = "hwhint_kthread", + [I(HWHINT_NON_LOAD_STORES)] = "hwhint_non_load_stores", + [I(HWHINT_DC_L2_HITS)] = "hwhint_dc_l2_hits", + [I(HWHINT_LOCAL_L3L1L2)] = "hwhint_local_l3l1l2", + [I(HWHINT_LOCAL_PEER_CACHE_NEAR)] = "hwhint_local_peer_cache_near", + [I(HWHINT_FAR_CACHE_HITS)] = "hwhint_far_cache_hits", + [I(HWHINT_DRAM_ACCESSES)] = "hwhint_dram_accesses", + [I(HWHINT_CXL_ACCESSES)] = "hwhint_cxl_accesses", + [I(HWHINT_REMOTE_NODE)] = "hwhint_remote_node", + [I(HWHINT_LADDR_INVALID)] = "hwhint_invalid_laddr", + [I(HWHINT_KERNEL_ADDR)] = "hwhint_kernel_addr", + [I(HWHINT_PADDR_INVALID)] = "hwhint_invalid_paddr", + [I(HWHINT_NON_LRU)] = "hwhint_non_lru", + [I(HWHINT_BUFFER_FULL)] = "hwhint_buffer_full", + [I(HWHINT_USEFUL_SAMPLES)] = "hwhint_useful_samples", +#endif /* CONFIG_HWMEM_PROFILER */ #endif /* CONFIG_PGHOT */ #undef I #endif /* CONFIG_VM_EVENT_COUNTERS */ -- 2.34.1
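[Editor's note] The NMI handler above applies its filters in a fixed order, and each early exit is charged to exactly one vmstat counter. A minimal userspace sketch of that decision chain (function and label names are illustrative, not the kernel's):

```python
# Order matters: each sample is charged to the first filter it fails,
# mirroring the goto-handled chain in the IBS overflow handler above.
def classify_sample(is_load_store, dc_or_l2_hit, laddr_valid,
                    is_kernel_addr, paddr_valid):
    if not is_load_store:
        return "non_load_stores"
    if dc_or_l2_hit:
        return "dc_l2_hits"       # L1/L2 hits are not promotion candidates
    if not laddr_valid:
        return "laddr_invalid"
    if is_kernel_addr:
        return "kernel_addr"      # bit 63 set => kernel address, discarded
    if not paddr_valid:
        return "paddr_invalid"
    return "useful"               # would be pushed to the per-CPU buffer
```

Only samples that survive every filter reach ibs_push_sample() and count as hwhint_useful_samples.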
From: Bharata B Rao <bharata@amd.com>
Date: Thu, 29 Jan 2026 20:10:39 +0530
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotion (pghot) with the following goals: - Unify hot page detection from multiple sources like hint faults, page table scans, hardware hints (AMD IBS). - Decouple detection from migration. - Centralize promotion logic via per-lower-tier-node kmigrated kernel thread. - Move promotion rate‑limiting and related logic used by numa_balancing=2 (current NUMA balancing–based promotion) from the scheduler to pghot for broader reuse. Currently, multiple kernel subsystems detect page accesses independently. This patchset consolidates accesses from these mechanisms by providing: - A common API for reporting page accesses. - Shared infrastructure for tracking hotness at PFN granularity. - Per-lower-tier-node kernel threads for promoting pages. Here is a brief summary of how this subsystem works: - Tracks frequency and last access time. - Additionally, the accessing NUMA node ID (NID) for each recorded access is also tracked in the precision mode. - These hotness parameters are maintained in a per-PFN hotness record within the existing mem_section data structure. - In default mode, one byte (u8) is used for hotness record. 5 bits are used to store time and bucketing scheme is used to represent a total access time up to 4s with HZ=1000. Default toptier NID (0) is used as the target for promotion which can be changed via debugfs tunable. - In precision mode, 4 bytes (u32) are used for each hotness record. 14 bits are used to store time which can represent around 16s with HZ=1000. - Classifies pages as hot based on configurable thresholds. - Pages classified as hot are marked as ready for migration using the ready bit. Both modes use MSB of the hotness record as ready bit. 
- Per-lower-tier-node kmigrated threads periodically scan the PFNs of lower-tier nodes, checking for the migration-ready bit to perform batched migrations. Interval between successive scans and batching value are configurable via debugfs tunables. Memory overhead --------------- Default mode: 1 byte per lower-tier PFN. For 1TB of lower-tier memory this amounts to 256MB overhead (assuming 4K pages). Precision mode: 4 bytes per lower-tier PFN. For 1TB of lower-tier memory this amounts to 1G overhead. Bit layout of hotness record ---------------------------- Default mode - Bits 0-1: Frequency (2bits, 4 access samples) - Bits 2-6: Bucketed time (5bits, up to 4s with HZ=1000) - Bit 7: Migration ready bit Precision mode - Bits 0-9: Target NID (10 bits) - Bits 10-12: Frequency (3bits, 8 access samples) - Bits 13-26: Time (14bits, up to 16s with HZ=1000) - Bits 27-30: Reserved - Bit 31: Migration ready bit Integrated sources ------------------ 1. IBS - Instruction Based Sampling, hardware based sampling mechanism present on AMD CPUs. 2. klruscand - PTE‑A bit scanning built on MGLRU’s walk helpers. 3. NUMA Balancing (Tiering mode) 4. folio_mark_accessed() - Page cache access tracking (unmapped page cache pages) Changes in v5 ============= - Significant reduction in memory overhead for storing per-PFN hotness data - Two modes of operation (default and precision mode). The code which is specific to each implementation is moved to its own individual file. - Many bug fixes, code cleanups and code reorganization. Results ======= TODO: Will post benchmark numbers as a reply to this patchset soon. 
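[Editor's note] The default-mode layout above can be exercised with a small sketch; the masks and shifts below are read off the bit layout listed here, not taken from the patchset's code (the kernel's own definitions presumably live in mm/pghot-default.c):

```python
# Masks/shifts derived from the cover letter's default-mode layout
# (illustrative only).
FREQ_MASK  = 0x3        # bits 0-1: access frequency
TIME_SHIFT = 2
TIME_MASK  = 0x1f       # bits 2-6: bucketed time
READY_BIT  = 1 << 7     # bit 7: migration-ready

def pack_record(freq, time_bucket, ready=False):
    rec = (freq & FREQ_MASK) | ((time_bucket & TIME_MASK) << TIME_SHIFT)
    return rec | READY_BIT if ready else rec

def unpack_record(rec):
    return (rec & FREQ_MASK,
            (rec >> TIME_SHIFT) & TIME_MASK,
            bool(rec & READY_BIT))
```

Every combination fits in a single u8, which is where the 1-byte-per-PFN overhead figure comes from.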
This v5 patchset applies on top of upstream commit 4941a17751c9 and can be fetched from: https://github.com/AMDESE/linux-mm/tree/bharata/pghot-rfcv5 v4: https://lore.kernel.org/linux-mm/20251206101423.5004-1-bharata@amd.com/ v3: https://lore.kernel.org/linux-mm/20251110052343.208768-1-bharata@amd.com/ v2: https://lore.kernel.org/linux-mm/20250910144653.212066-1-bharata@amd.com/ v1: https://lore.kernel.org/linux-mm/20250814134826.154003-1-bharata@amd.com/ v0: https://lore.kernel.org/linux-mm/20250306054532.221138-1-bharata@amd.com/ Bharata B Rao (7): mm: migrate: Allow misplaced migration without VMA mm: Hot page tracking and promotion mm: pghot: Precision mode for pghot mm: sched: move NUMA balancing tiering promotion to pghot x86: ibs: In-kernel IBS driver for memory access profiling x86: ibs: Enable IBS profiling for memory accesses mm: pghot: Add folio_mark_accessed() as hotness source Gregory Price (1): migrate: Add migrate_misplaced_folios_batch() Kinsey Ho (2): mm: mglru: generalize page table walk mm: klruscand: use mglru scanning for page promotion Documentation/admin-guide/mm/pghot.txt | 89 +++++ arch/x86/events/amd/ibs.c | 10 + arch/x86/include/asm/entry-common.h | 3 + arch/x86/include/asm/hardirq.h | 2 + arch/x86/include/asm/msr-index.h | 16 + arch/x86/mm/Makefile | 1 + arch/x86/mm/ibs.c | 349 +++++++++++++++++ include/linux/migrate.h | 6 + include/linux/mmzone.h | 26 ++ include/linux/pghot.h | 142 +++++++ include/linux/vm_event_item.h | 26 ++ kernel/sched/debug.c | 1 - kernel/sched/fair.c | 152 +------- mm/Kconfig | 46 +++ mm/Makefile | 7 + mm/huge_memory.c | 26 +- mm/internal.h | 4 + mm/klruscand.c | 110 ++++++ mm/memory.c | 31 +- mm/migrate.c | 41 +- mm/mm_init.c | 10 + mm/pghot-default.c | 73 ++++ mm/pghot-precise.c | 70 ++++ mm/pghot-tunables.c | 196 ++++++++++ mm/pghot.c | 505 +++++++++++++++++++++++++ mm/swap.c | 8 + mm/vmscan.c | 181 ++++++--- mm/vmstat.c | 26 ++ 28 files changed, 1917 insertions(+), 240 deletions(-) create mode 100644 
Documentation/admin-guide/mm/pghot.txt create mode 100644 arch/x86/mm/ibs.c create mode 100644 include/linux/pghot.h create mode 100644 mm/klruscand.c create mode 100644 mm/pghot-default.c create mode 100644 mm/pghot-precise.c create mode 100644 mm/pghot-tunables.c create mode 100644 mm/pghot.c -- 2.34.1
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
Enable IBS memory access data collection for user memory accesses by programming the required MSRs. The profiling is turned ON only for user mode execution and turned OFF for kernel mode execution. Profiling is explicitly disabled for the NMI handler too. TODOs: - IBS sampling rate is kept fixed for now. - Arch/vendor separation/isolation of the code needs a relook. Signed-off-by: Bharata B Rao <bharata@amd.com> --- arch/x86/include/asm/entry-common.h | 3 +++ arch/x86/include/asm/hardirq.h | 2 ++ arch/x86/mm/ibs.c | 32 +++++++++++++++++++++++++++++ include/linux/pghot.h | 4 ++++ 4 files changed, 41 insertions(+) diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h index ce3eb6d5fdf9..0f381a63669e 100644 --- a/arch/x86/include/asm/entry-common.h +++ b/arch/x86/include/asm/entry-common.h @@ -4,6 +4,7 @@ #include <linux/randomize_kstack.h> #include <linux/user-return-notifier.h> +#include <linux/pghot.h> #include <asm/nospec-branch.h> #include <asm/io_bitmap.h> @@ -13,6 +14,7 @@ /* Check that the stack and regs on entry from user mode are sane. 
*/ static __always_inline void arch_enter_from_user_mode(struct pt_regs *regs) { + hwmem_access_profiling_stop(); if (IS_ENABLED(CONFIG_DEBUG_ENTRY)) { /* * Make sure that the entry code gave us a sensible EFLAGS @@ -106,6 +108,7 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs, static __always_inline void arch_exit_to_user_mode(void) { amd_clear_divider(); + hwmem_access_profiling_start(); } #define arch_exit_to_user_mode arch_exit_to_user_mode diff --git a/arch/x86/include/asm/hardirq.h b/arch/x86/include/asm/hardirq.h index 6b6d472baa0b..e80c305c17d1 100644 --- a/arch/x86/include/asm/hardirq.h +++ b/arch/x86/include/asm/hardirq.h @@ -91,4 +91,6 @@ static __always_inline bool kvm_get_cpu_l1tf_flush_l1d(void) static __always_inline void kvm_set_cpu_l1tf_flush_l1d(void) { } #endif /* IS_ENABLED(CONFIG_KVM_INTEL) */ +#define arch_nmi_enter() hwmem_access_profiling_stop() +#define arch_nmi_exit() hwmem_access_profiling_start() #endif /* _ASM_X86_HARDIRQ_H */ diff --git a/arch/x86/mm/ibs.c b/arch/x86/mm/ibs.c index 752f688375f9..d0d93f09432d 100644 --- a/arch/x86/mm/ibs.c +++ b/arch/x86/mm/ibs.c @@ -16,6 +16,7 @@ static u64 ibs_config __read_mostly; static u32 ibs_caps; #define IBS_NR_SAMPLES 150 +#define IBS_SAMPLE_PERIOD 10000 /* * Basic access info captured for each memory access. 
@@ -43,6 +44,36 @@ struct ibs_sample_pcpu __percpu *ibs_s; static struct work_struct ibs_work; static struct irq_work ibs_irq_work; +void hwmem_access_profiling_stop(void) +{ + u64 ops_ctl; + + if (!hwmem_access_profiling) + return; + + rdmsrl(MSR_AMD64_IBSOPCTL, ops_ctl); + wrmsrl(MSR_AMD64_IBSOPCTL, ops_ctl & ~IBS_OP_ENABLE); +} + +void hwmem_access_profiling_start(void) +{ + u64 config = 0; + unsigned int period = IBS_SAMPLE_PERIOD; + + if (!hwmem_access_profiling) + return; + + /* Disable IBS for kernel thread */ + if (!current->mm) + goto out; + + config = (period >> 4) & IBS_OP_MAX_CNT; + config |= (period & IBS_OP_MAX_CNT_EXT_MASK); + config |= ibs_config; +out: + wrmsrl(MSR_AMD64_IBSOPCTL, config); +} + bool hwmem_access_profiler_inuse(void) { return hwmem_access_profiling; @@ -310,6 +341,7 @@ static int __init ibs_access_profiling_init(void) x86_amd_ibs_access_profile_startup, x86_amd_ibs_access_profile_teardown); + hwmem_access_profiling = true; pr_info("IBS setup for memory access profiling\n"); return 0; } diff --git a/include/linux/pghot.h b/include/linux/pghot.h index 20ea9767dbdd..603791183102 100644 --- a/include/linux/pghot.h +++ b/include/linux/pghot.h @@ -6,8 +6,12 @@ #ifdef CONFIG_HWMEM_PROFILER bool hwmem_access_profiler_inuse(void); +void hwmem_access_profiling_start(void); +void hwmem_access_profiling_stop(void); #else static inline bool hwmem_access_profiler_inuse(void) { return false; } +static inline void hwmem_access_profiling_start(void) {} +static inline void hwmem_access_profiling_stop(void) {} #endif /* Page hotness temperature sources */ -- 2.34.1
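[Editor's note] hwmem_access_profiling_start() folds the sample period into the IBSOPCTL value via IBS_OP_MAX_CNT and IBS_OP_MAX_CNT_EXT_MASK. A sketch of that encoding, with assumed mask values (the real constants live in msr-index.h and may differ):

```python
IBS_OP_MAX_CNT          = 0xFFFF       # assumed: MSR bits 15:0
IBS_OP_MAX_CNT_EXT_MASK = 0x7F << 20   # assumed: MSR bits 26:20

def encode_period(period):
    # The low 4 bits of the period are dropped: the hardware counts
    # in units of 16 ops per increment of the max-count field.
    config = (period >> 4) & IBS_OP_MAX_CNT
    config |= period & IBS_OP_MAX_CNT_EXT_MASK
    return config
```

With IBS_SAMPLE_PERIOD = 10000, this yields a max-count field of 625, i.e. the effective period is rounded down to a multiple of 16.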
From: Bharata B Rao <bharata@amd.com>
Date: Thu, 29 Jan 2026 20:10:40 +0530
From: Kinsey Ho <kinseyho@google.com> Refactor the existing MGLRU page table walking logic to make it resumable. Additionally, introduce two hooks into the MGLRU page table walk: accessed callback and flush callback. The accessed callback is called for each accessed page detected via the scanned accessed bit. The flush callback is called when the accessed callback reports that a flush is required. This allows for processing pages in batches for efficiency. With a generalised page table walk, introduce a new scan function which repeatedly scans on the same young generation and does not add a new young generation. Signed-off-by: Kinsey Ho <kinseyho@google.com> Signed-off-by: Yuanchu Xie <yuanchu@google.com> Signed-off-by: Bharata B Rao <bharata@amd.com> --- include/linux/mmzone.h | 5 ++ mm/internal.h | 4 + mm/vmscan.c | 181 +++++++++++++++++++++++++++++++---------- 3 files changed, 145 insertions(+), 45 deletions(-) diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index 49c374064fc2..26350a4951ff 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -548,6 +548,8 @@ struct lru_gen_mm_walk { unsigned long seq; /* the next address within an mm to scan */ unsigned long next_addr; + /* called for each accessed pte/pmd */ + bool (*accessed_cb)(unsigned long pfn); /* to batch promoted pages */ int nr_pages[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES]; /* to batch the mm stats */ @@ -555,6 +557,9 @@ struct lru_gen_mm_walk { /* total batched items */ int batched; int swappiness; + /* for the pmd under scanning */ + int nr_young_pte; + int nr_total_pte; bool force_scan; }; diff --git a/mm/internal.h b/mm/internal.h index e430da900430..426db1ae286f 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -538,6 +538,10 @@ extern unsigned long highest_memmap_pfn; bool folio_isolate_lru(struct folio *folio); void folio_putback_lru(struct folio *folio); extern void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason); +void 
set_task_reclaim_state(struct task_struct *task, + struct reclaim_state *rs); +void lru_gen_scan_lruvec(struct lruvec *lruvec, unsigned long seq, + bool (*accessed_cb)(unsigned long), void (*flush_cb)(void)); #ifdef CONFIG_NUMA int user_proactive_reclaim(char *buf, struct mem_cgroup *memcg, pg_data_t *pgdat); diff --git a/mm/vmscan.c b/mm/vmscan.c index 670fe9fae5ba..02f3dd128638 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -289,7 +289,7 @@ static int sc_swappiness(struct scan_control *sc, struct mem_cgroup *memcg) continue; \ else -static void set_task_reclaim_state(struct task_struct *task, +void set_task_reclaim_state(struct task_struct *task, struct reclaim_state *rs) { /* Check for an overwrite */ @@ -3058,7 +3058,7 @@ static bool iterate_mm_list(struct lru_gen_mm_walk *walk, struct mm_struct **ite VM_WARN_ON_ONCE(mm_state->seq + 1 < walk->seq); - if (walk->seq <= mm_state->seq) + if (!walk->accessed_cb && walk->seq <= mm_state->seq) goto done; if (!mm_state->head) @@ -3484,16 +3484,14 @@ static void walk_update_folio(struct lru_gen_mm_walk *walk, struct folio *folio, } } -static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end, - struct mm_walk *args) +static int walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end, + struct mm_walk *args, bool *suitable) { int i; bool dirty; pte_t *pte; spinlock_t *ptl; unsigned long addr; - int total = 0; - int young = 0; struct folio *last = NULL; struct lru_gen_mm_walk *walk = args->private; struct mem_cgroup *memcg = lruvec_memcg(walk->lruvec); @@ -3501,19 +3499,24 @@ static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end, DEFINE_MAX_SEQ(walk->lruvec); int gen = lru_gen_from_seq(max_seq); pmd_t pmdval; + int err = 0; pte = pte_offset_map_rw_nolock(args->mm, pmd, start & PMD_MASK, &pmdval, &ptl); - if (!pte) - return false; + if (!pte) { + *suitable = false; + return err; + } if (!spin_trylock(ptl)) { pte_unmap(pte); - return true; + *suitable = true; + return 
err; } if (unlikely(!pmd_same(pmdval, pmdp_get_lockless(pmd)))) { pte_unmap_unlock(pte, ptl); - return false; + *suitable = false; + return err; } arch_enter_lazy_mmu_mode(); @@ -3522,8 +3525,9 @@ static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end, unsigned long pfn; struct folio *folio; pte_t ptent = ptep_get(pte + i); + bool do_flush; - total++; + walk->nr_total_pte++; walk->mm_stats[MM_LEAF_TOTAL]++; pfn = get_pte_pfn(ptent, args->vma, addr, pgdat); @@ -3547,23 +3551,36 @@ static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end, if (pte_dirty(ptent)) dirty = true; - young++; + walk->nr_young_pte++; walk->mm_stats[MM_LEAF_YOUNG]++; + + if (!walk->accessed_cb) + continue; + + do_flush = walk->accessed_cb(pfn); + if (do_flush) { + walk->next_addr = addr + PAGE_SIZE; + + err = -EAGAIN; + break; + } } walk_update_folio(walk, last, gen, dirty); last = NULL; - if (i < PTRS_PER_PTE && get_next_vma(PMD_MASK, PAGE_SIZE, args, &start, &end)) + if (!err && i < PTRS_PER_PTE && + get_next_vma(PMD_MASK, PAGE_SIZE, args, &start, &end)) goto restart; arch_leave_lazy_mmu_mode(); pte_unmap_unlock(pte, ptl); - return suitable_to_scan(total, young); + *suitable = suitable_to_scan(walk->nr_total_pte, walk->nr_young_pte); + return err; } -static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area_struct *vma, +static int walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area_struct *vma, struct mm_walk *args, unsigned long *bitmap, unsigned long *first) { int i; @@ -3576,6 +3593,7 @@ static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area struct pglist_data *pgdat = lruvec_pgdat(walk->lruvec); DEFINE_MAX_SEQ(walk->lruvec); int gen = lru_gen_from_seq(max_seq); + int err = 0; VM_WARN_ON_ONCE(pud_leaf(*pud)); @@ -3583,13 +3601,13 @@ static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area if (*first == -1) { *first = addr; bitmap_zero(bitmap, 
MIN_LRU_BATCH); - return; + return err; } i = addr == -1 ? 0 : pmd_index(addr) - pmd_index(*first); if (i && i <= MIN_LRU_BATCH) { __set_bit(i - 1, bitmap); - return; + return err; } pmd = pmd_offset(pud, *first); @@ -3603,6 +3621,7 @@ static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area do { unsigned long pfn; struct folio *folio; + bool do_flush; /* don't round down the first address */ addr = i ? (*first & PMD_MASK) + i * PMD_SIZE : *first; @@ -3639,6 +3658,17 @@ static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area dirty = true; walk->mm_stats[MM_LEAF_YOUNG]++; + if (!walk->accessed_cb) + goto next; + + do_flush = walk->accessed_cb(pfn); + if (do_flush) { + i = find_next_bit(bitmap, MIN_LRU_BATCH, i) + 1; + + walk->next_addr = (*first & PMD_MASK) + i * PMD_SIZE; + err = -EAGAIN; + break; + } next: i = i > MIN_LRU_BATCH ? 0 : find_next_bit(bitmap, MIN_LRU_BATCH, i) + 1; } while (i <= MIN_LRU_BATCH); @@ -3649,9 +3679,10 @@ static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area spin_unlock(ptl); done: *first = -1; + return err; } -static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end, +static int walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end, struct mm_walk *args) { int i; @@ -3663,6 +3694,7 @@ static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end, unsigned long first = -1; struct lru_gen_mm_walk *walk = args->private; struct lru_gen_mm_state *mm_state = get_mm_state(walk->lruvec); + int err = 0; VM_WARN_ON_ONCE(pud_leaf(*pud)); @@ -3676,6 +3708,7 @@ static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end, /* walk_pte_range() may call get_next_vma() */ vma = args->vma; for (i = pmd_index(start), addr = start; addr != end; i++, addr = next) { + bool suitable; pmd_t val = pmdp_get_lockless(pmd + i); next = pmd_addr_end(addr, end); @@ -3692,7 +3725,10 @@ static void walk_pmd_range(pud_t *pud, 
unsigned long start, unsigned long end,
 		walk->mm_stats[MM_LEAF_TOTAL]++;
 
 		if (pfn != -1)
-			walk_pmd_range_locked(pud, addr, vma, args, bitmap, &first);
+			err = walk_pmd_range_locked(pud, addr, vma, args,
+						    bitmap, &first);
+		if (err)
+			return err;
 		continue;
 	}
@@ -3701,33 +3737,51 @@ static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end,
 			if (!pmd_young(val))
 				continue;
 
-			walk_pmd_range_locked(pud, addr, vma, args, bitmap, &first);
+			err = walk_pmd_range_locked(pud, addr, vma, args,
+						    bitmap, &first);
+			if (err)
+				return err;
 		}
 
 		if (!walk->force_scan && !test_bloom_filter(mm_state, walk->seq, pmd + i))
 			continue;
 
+		err = walk_pte_range(&val, addr, next, args, &suitable);
+		if (err && walk->next_addr < next && first == -1)
+			return err;
+
+		walk->nr_total_pte = 0;
+		walk->nr_young_pte = 0;
+
 		walk->mm_stats[MM_NONLEAF_FOUND]++;
 
-		if (!walk_pte_range(&val, addr, next, args))
-			continue;
+		if (!suitable)
+			goto next;
 
 		walk->mm_stats[MM_NONLEAF_ADDED]++;
 
 		/* carry over to the next generation */
 		update_bloom_filter(mm_state, walk->seq + 1, pmd + i);
+next:
+		if (err) {
+			walk->next_addr = first;
+			return err;
+		}
 	}
 
-	walk_pmd_range_locked(pud, -1, vma, args, bitmap, &first);
+	err = walk_pmd_range_locked(pud, -1, vma, args, bitmap, &first);
 
-	if (i < PTRS_PER_PMD && get_next_vma(PUD_MASK, PMD_SIZE, args, &start, &end))
+	if (!err && i < PTRS_PER_PMD &&
+	    get_next_vma(PUD_MASK, PMD_SIZE, args, &start, &end))
 		goto restart;
+
+	return err;
 }
 
 static int walk_pud_range(p4d_t *p4d, unsigned long start, unsigned long end,
 			  struct mm_walk *args)
 {
-	int i;
+	int i, err;
 	pud_t *pud;
 	unsigned long addr;
 	unsigned long next;
@@ -3745,7 +3799,9 @@ static int walk_pud_range(p4d_t *p4d, unsigned long start, unsigned long end,
 		if (!pud_present(val) || WARN_ON_ONCE(pud_leaf(val)))
 			continue;
 
-		walk_pmd_range(&val, addr, next, args);
+		err = walk_pmd_range(&val, addr, next, args);
+		if (err)
+			return err;
 
 		if (need_resched() || walk->batched >= MAX_LRU_BATCH) {
 			end = (addr | ~PUD_MASK) + 1;
@@ -3766,40 +3822,48 @@ static int walk_pud_range(p4d_t *p4d, unsigned long start, unsigned long end,
 	return -EAGAIN;
 }
 
-static void walk_mm(struct mm_struct *mm, struct lru_gen_mm_walk *walk)
+static int try_walk_mm(struct mm_struct *mm, struct lru_gen_mm_walk *walk)
 {
+	int err;
 	static const struct mm_walk_ops mm_walk_ops = {
 		.test_walk = should_skip_vma,
 		.p4d_entry = walk_pud_range,
 		.walk_lock = PGWALK_RDLOCK,
 	};
-	int err;
 	struct lruvec *lruvec = walk->lruvec;
+	DEFINE_MAX_SEQ(lruvec);
 
-	walk->next_addr = FIRST_USER_ADDRESS;
+	err = -EBUSY;
 
-	do {
-		DEFINE_MAX_SEQ(lruvec);
+	/* another thread might have called inc_max_seq() */
+	if (walk->seq != max_seq)
+		return err;
 
-		err = -EBUSY;
+	/* the caller might be holding the lock for write */
+	if (mmap_read_trylock(mm)) {
+		err = walk_page_range(mm, walk->next_addr, ULONG_MAX,
+				      &mm_walk_ops, walk);
 
-		/* another thread might have called inc_max_seq() */
-		if (walk->seq != max_seq)
-			break;
+		mmap_read_unlock(mm);
+	}
 
-		/* the caller might be holding the lock for write */
-		if (mmap_read_trylock(mm)) {
-			err = walk_page_range(mm, walk->next_addr, ULONG_MAX, &mm_walk_ops, walk);
+	if (walk->batched) {
+		spin_lock_irq(&lruvec->lru_lock);
+		reset_batch_size(walk);
+		spin_unlock_irq(&lruvec->lru_lock);
+	}
 
-			mmap_read_unlock(mm);
-		}
+	return err;
+}
 
-		if (walk->batched) {
-			spin_lock_irq(&lruvec->lru_lock);
-			reset_batch_size(walk);
-			spin_unlock_irq(&lruvec->lru_lock);
-		}
+static void walk_mm(struct mm_struct *mm, struct lru_gen_mm_walk *walk)
+{
+	int err;
 
+	walk->next_addr = FIRST_USER_ADDRESS;
+	do {
+		err = try_walk_mm(mm, walk);
 		cond_resched();
 	} while (err == -EAGAIN);
 }
@@ -4011,6 +4075,33 @@ static bool inc_max_seq(struct lruvec *lruvec, unsigned long seq, int swappiness
 	return success;
 }
 
+void lru_gen_scan_lruvec(struct lruvec *lruvec, unsigned long seq,
+			 bool (*accessed_cb)(unsigned long), void (*flush_cb)(void))
+{
+	struct lru_gen_mm_walk *walk = current->reclaim_state->mm_walk;
+	struct mm_struct *mm = NULL;
+
+	walk->lruvec = lruvec;
+	walk->seq = seq;
+	walk->accessed_cb = accessed_cb;
+	walk->swappiness = MAX_SWAPPINESS;
+
+	do {
+		int err = -EBUSY;
+
+		iterate_mm_list(walk, &mm);
+		if (!mm)
+			break;
+
+		walk->next_addr = FIRST_USER_ADDRESS;
+		do {
+			err = try_walk_mm(mm, walk);
+			cond_resched();
+			flush_cb();
+		} while (err == -EAGAIN);
+	} while (mm);
+}
+
 static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long seq,
 			       int swappiness, bool force_scan)
 {
-- 
2.34.1
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Thu, 29 Jan 2026 20:10:41 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
Hi,

This is v5 of pghot, a hot-page tracking and promotion subsystem. The
major change in v5 is reducing the default hotness record size to 1 byte
per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE)
that uses 4 bytes per PFN.

This patchset introduces a new subsystem for hot page tracking and
promotion (pghot) with the following goals:

- Unify hot page detection from multiple sources like hint faults,
  page table scans, hardware hints (AMD IBS).
- Decouple detection from migration.
- Centralize promotion logic via a per-lower-tier-node kmigrated kernel
  thread.
- Move promotion rate-limiting and related logic used by numa_balancing=2
  (current NUMA balancing based promotion) from the scheduler to pghot
  for broader reuse.

Currently, multiple kernel subsystems detect page accesses
independently. This patchset consolidates accesses from these
mechanisms by providing:

- A common API for reporting page accesses.
- Shared infrastructure for tracking hotness at PFN granularity.
- Per-lower-tier-node kernel threads for promoting pages.

Here is a brief summary of how this subsystem works:

- Tracks frequency and last access time.
- Additionally, the accessing NUMA node ID (NID) for each recorded
  access is also tracked in the precision mode.
- These hotness parameters are maintained in a per-PFN hotness record
  within the existing mem_section data structure.
- In default mode, one byte (u8) is used for the hotness record. 5 bits
  are used to store time and a bucketing scheme is used to represent a
  total access time of up to 4s with HZ=1000. The default toptier NID (0)
  is used as the target for promotion, which can be changed via a
  debugfs tunable.
- In precision mode, 4 bytes (u32) are used for each hotness record.
  14 bits are used to store time, which can represent around 16s with
  HZ=1000.
- Classifies pages as hot based on configurable thresholds.
- Pages classified as hot are marked as ready for migration using the
  ready bit. Both modes use the MSB of the hotness record as the ready
  bit.
- Per-lower-tier-node kmigrated threads periodically scan the PFNs of
  lower-tier nodes, checking for the migration-ready bit to perform
  batched migrations. The interval between successive scans and the
  batching value are configurable via debugfs tunables.

Memory overhead
---------------
Default mode: 1 byte per lower-tier PFN. For 1TB of lower-tier memory
this amounts to 256MB overhead (assuming 4K pages).

Precision mode: 4 bytes per lower-tier PFN. For 1TB of lower-tier memory
this amounts to 1GB overhead.

Bit layout of hotness record
----------------------------
Default mode
- Bits 0-1: Frequency (2 bits, 4 access samples)
- Bits 2-6: Bucketed time (5 bits, up to 4s with HZ=1000)
- Bit 7: Migration ready bit

Precision mode
- Bits 0-9: Target NID (10 bits)
- Bits 10-12: Frequency (3 bits, 8 access samples)
- Bits 13-26: Time (14 bits, up to 16s with HZ=1000)
- Bits 27-30: Reserved
- Bit 31: Migration ready bit

Integrated sources
------------------
1. IBS - Instruction Based Sampling, a hardware-based sampling
   mechanism present on AMD CPUs.
2. klruscand - PTE A-bit scanning built on MGLRU's walk helpers.
3. NUMA Balancing (tiering mode).
4. folio_mark_accessed() - Page cache access tracking (unmapped page
   cache pages).

Changes in v5
=============
- Significant reduction in memory overhead for storing per-PFN hotness
  data.
- Two modes of operation (default and precision mode). The code which is
  specific to each implementation is moved to its own individual file.
- Many bug fixes, code cleanups and code reorganization.

Results
=======
TODO: Will post benchmark numbers as reply to this patchset soon.

This v5 patchset applies on top of upstream commit 4941a17751c9 and can
be fetched from:
https://github.com/AMDESE/linux-mm/tree/bharata/pghot-rfcv5

v4: https://lore.kernel.org/linux-mm/20251206101423.5004-1-bharata@amd.com/
v3: https://lore.kernel.org/linux-mm/20251110052343.208768-1-bharata@amd.com/
v2: https://lore.kernel.org/linux-mm/20250910144653.212066-1-bharata@amd.com/
v1: https://lore.kernel.org/linux-mm/20250814134826.154003-1-bharata@amd.com/
v0: https://lore.kernel.org/linux-mm/20250306054532.221138-1-bharata@amd.com/

Bharata B Rao (7):
  mm: migrate: Allow misplaced migration without VMA
  mm: Hot page tracking and promotion
  mm: pghot: Precision mode for pghot
  mm: sched: move NUMA balancing tiering promotion to pghot
  x86: ibs: In-kernel IBS driver for memory access profiling
  x86: ibs: Enable IBS profiling for memory accesses
  mm: pghot: Add folio_mark_accessed() as hotness source

Gregory Price (1):
  migrate: Add migrate_misplaced_folios_batch()

Kinsey Ho (2):
  mm: mglru: generalize page table walk
  mm: klruscand: use mglru scanning for page promotion

 Documentation/admin-guide/mm/pghot.txt |  89 +++++
 arch/x86/events/amd/ibs.c              |  10 +
 arch/x86/include/asm/entry-common.h    |   3 +
 arch/x86/include/asm/hardirq.h         |   2 +
 arch/x86/include/asm/msr-index.h       |  16 +
 arch/x86/mm/Makefile                   |   1 +
 arch/x86/mm/ibs.c                      | 349 +++++++++++++++++
 include/linux/migrate.h                |   6 +
 include/linux/mmzone.h                 |  26 ++
 include/linux/pghot.h                  | 142 +++++++
 include/linux/vm_event_item.h          |  26 ++
 kernel/sched/debug.c                   |   1 -
 kernel/sched/fair.c                    | 152 +-------
 mm/Kconfig                             |  46 +++
 mm/Makefile                            |   7 +
 mm/huge_memory.c                       |  26 +-
 mm/internal.h                          |   4 +
 mm/klruscand.c                         | 110 ++++++
 mm/memory.c                            |  31 +-
 mm/migrate.c                           |  41 +-
 mm/mm_init.c                           |  10 +
 mm/pghot-default.c                     |  73 ++++
 mm/pghot-precise.c                     |  70 ++++
 mm/pghot-tunables.c                    | 196 ++++++++++
 mm/pghot.c                             | 505 +++++++++++++++++++++++++
 mm/swap.c                              |   8 +
 mm/vmscan.c                            | 181 ++++++---
 mm/vmstat.c                            |  26 ++
 28 files changed, 1917 insertions(+), 240 deletions(-)
 create mode 100644 Documentation/admin-guide/mm/pghot.txt
 create mode 100644 arch/x86/mm/ibs.c
 create mode 100644 include/linux/pghot.h
 create mode 100644 mm/klruscand.c
 create mode 100644 mm/pghot-default.c
 create mode 100644 mm/pghot-precise.c
 create mode 100644 mm/pghot-tunables.c
 create mode 100644 mm/pghot.c

-- 
2.34.1
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
From: Kinsey Ho <kinseyho@google.com>

Introduce a new kernel daemon, klruscand, that periodically invokes the
MGLRU page table walk. It leverages the new callbacks to gather access
information and forwards it to pghot sub-system for promotion
decisions.

This benefits from reusing the existing MGLRU page table walk
infrastructure, which is optimized with features such as hierarchical
scanning and bloom filters to reduce CPU overhead.

As an additional optimization to be added in the future, we can tune
the scan intervals for each memcg.

Signed-off-by: Kinsey Ho <kinseyho@google.com>
Signed-off-by: Yuanchu Xie <yuanchu@google.com>
[Reduced the scan interval to 500ms, KLRUSCAND to default n in config]
Signed-off-by: Bharata B Rao <bharata@amd.com>
---
 mm/Kconfig     |   8 ++++
 mm/Makefile    |   1 +
 mm/klruscand.c | 110 +++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 119 insertions(+)
 create mode 100644 mm/klruscand.c

diff --git a/mm/Kconfig b/mm/Kconfig
index 07b16aece877..9e9eca8db8bf 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1502,6 +1502,14 @@ config HWMEM_PROFILER
 	  rolled up to PGHOT sub-system for further action like hot page
 	  promotion or NUMA Balancing
 
+config KLRUSCAND
+	bool "Kernel lower tier access scan daemon"
+	default n
+	depends on PGHOT && LRU_GEN_WALKS_MMU
+	help
+	  Scan for accesses from lower tiers by invoking MGLRU to perform
+	  page table walks.
+
 source "mm/damon/Kconfig"
 
 endmenu
diff --git a/mm/Makefile b/mm/Makefile
index 89f999647752..c68df497a063 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -153,3 +153,4 @@ obj-$(CONFIG_PGHOT) += pghot-precise.o
 else
 obj-$(CONFIG_PGHOT) += pghot-default.o
 endif
+obj-$(CONFIG_KLRUSCAND) += klruscand.o
diff --git a/mm/klruscand.c b/mm/klruscand.c
new file mode 100644
index 000000000000..13a41b38d67d
--- /dev/null
+++ b/mm/klruscand.c
@@ -0,0 +1,110 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include <linux/memcontrol.h>
+#include <linux/kthread.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <linux/memory-tiers.h>
+#include <linux/pghot.h>
+
+#include "internal.h"
+
+#define KLRUSCAND_INTERVAL 500
+#define BATCH_SIZE (2 << 16)
+
+static struct task_struct *scan_thread;
+static unsigned long pfn_batch[BATCH_SIZE];
+static int batch_index;
+
+static void flush_cb(void)
+{
+	int i;
+
+	for (i = 0; i < batch_index; i++) {
+		unsigned long pfn = pfn_batch[i];
+
+		pghot_record_access(pfn, NUMA_NO_NODE, PGHOT_PGTABLE_SCAN, jiffies);
+
+		if (i % 16 == 0)
+			cond_resched();
+	}
+	batch_index = 0;
+}
+
+static bool accessed_cb(unsigned long pfn)
+{
+	WARN_ON_ONCE(batch_index == BATCH_SIZE);
+
+	if (batch_index < BATCH_SIZE)
+		pfn_batch[batch_index++] = pfn;
+
+	return batch_index == BATCH_SIZE;
+}
+
+static int klruscand_run(void *unused)
+{
+	struct lru_gen_mm_walk *walk;
+
+	walk = kzalloc(sizeof(*walk),
+		       __GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN);
+	if (!walk)
+		return -ENOMEM;
+
+	while (!kthread_should_stop()) {
+		unsigned long next_wake_time;
+		long sleep_time;
+		struct mem_cgroup *memcg;
+		int flags;
+		int nid;
+
+		next_wake_time = jiffies + msecs_to_jiffies(KLRUSCAND_INTERVAL);
+
+		for_each_node_state(nid, N_MEMORY) {
+			pg_data_t *pgdat = NODE_DATA(nid);
+			struct reclaim_state rs = { 0 };
+
+			if (node_is_toptier(nid))
+				continue;
+
+			rs.mm_walk = walk;
+			set_task_reclaim_state(current, &rs);
+			flags = memalloc_noreclaim_save();
+
+			memcg = mem_cgroup_iter(NULL, NULL, NULL);
+			do {
+				struct lruvec *lruvec =
+					mem_cgroup_lruvec(memcg, pgdat);
+				unsigned long max_seq =
+					READ_ONCE((lruvec)->lrugen.max_seq);
+
+				lru_gen_scan_lruvec(lruvec, max_seq, accessed_cb, flush_cb);
+				cond_resched();
+			} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)));
+
+			memalloc_noreclaim_restore(flags);
+			set_task_reclaim_state(current, NULL);
+			memset(walk, 0, sizeof(*walk));
+		}
+
+		sleep_time = next_wake_time - jiffies;
+		if (sleep_time > 0 && sleep_time != MAX_SCHEDULE_TIMEOUT)
+			schedule_timeout_idle(sleep_time);
+	}
+	kfree(walk);
+	return 0;
+}
+
+static int __init klruscand_init(void)
+{
+	struct task_struct *task;
+
+	task = kthread_run(klruscand_run, NULL, "klruscand");
+
+	if (IS_ERR(task)) {
+		pr_err("Failed to create klruscand kthread\n");
+		return PTR_ERR(task);
+	}
+
+	scan_thread = task;
+	return 0;
+}
+module_init(klruscand_init);
-- 
2.34.1
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Thu, 29 Jan 2026 20:10:42 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
Unmapped page cache pages that end up in lower tiers don't get promoted
easily. There were attempts to identify such pages and get them
promoted as part of NUMA Balancing earlier [1].

The same idea is taken forward here by using folio_mark_accessed() as a
source of hotness. Lower tier accesses from folio_mark_accessed() are
reported to pghot sub-system for hotness tracking and subsequent
promotion.

TODO: Need a better naming for this hotness source. Need to better
understand/evaluate the overhead of hotness info collection from this
path.

[1] https://lore.kernel.org/linux-mm/20250411221111.493193-1-gourry@gourry.net/

Signed-off-by: Bharata B Rao <bharata@amd.com>
---
 Documentation/admin-guide/mm/pghot.txt | 7 ++++++-
 include/linux/pghot.h                  | 5 +++++
 include/linux/vm_event_item.h          | 1 +
 mm/pghot-tunables.c                    | 7 +++++++
 mm/pghot.c                             | 6 ++++++
 mm/swap.c                              | 8 ++++++++
 mm/vmstat.c                            | 1 +
 7 files changed, 34 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/mm/pghot.txt b/Documentation/admin-guide/mm/pghot.txt
index b329e692ef89..c8eb61064247 100644
--- a/Documentation/admin-guide/mm/pghot.txt
+++ b/Documentation/admin-guide/mm/pghot.txt
@@ -23,9 +23,10 @@ Path: /sys/kernel/debug/pghot/
      - 0: Hardware hints (value 0x1)
      - 1: Page table scan (value 0x2)
      - 2: Hint faults (value 0x4)
+     - 3: folio_mark_accessed (value 0x8)
    - Default: 0 (disabled)
    - Example:
-     # echo 0x7 > /sys/kernel/debug/pghot/enabled_sources
+     # echo 0xf > /sys/kernel/debug/pghot/enabled_sources
      Enables all sources.
 
 2. **target_nid**
@@ -82,3 +83,7 @@ Path: /proc/vmstat
 4. **pghot_recorded_hintfaults**
    - Number of recorded accesses reported by NUMA Balancing based
      hotness source.
+
+5. **pghot_recorded_fma**
+   - Number of recorded accesses reported by folio_mark_accessed()
+     hotness source.
diff --git a/include/linux/pghot.h b/include/linux/pghot.h
index 603791183102..8cf9dfb5365a 100644
--- a/include/linux/pghot.h
+++ b/include/linux/pghot.h
@@ -19,6 +19,7 @@ enum pghot_src {
 	PGHOT_HW_HINTS,
 	PGHOT_PGTABLE_SCAN,
 	PGHOT_HINT_FAULT,
+	PGHOT_FMA,
 };
 
 #ifdef CONFIG_PGHOT
@@ -36,6 +37,7 @@ void pghot_debug_init(void);
 DECLARE_STATIC_KEY_FALSE(pghot_src_hwhints);
 DECLARE_STATIC_KEY_FALSE(pghot_src_pgtscans);
 DECLARE_STATIC_KEY_FALSE(pghot_src_hintfaults);
+DECLARE_STATIC_KEY_FALSE(pghot_src_fma);
 
 /*
  * Bit positions to enable individual sources in pghot/records_enabled
@@ -45,6 +47,7 @@ enum pghot_src_enabled {
 	PGHOT_HWHINTS_BIT = 0,
 	PGHOT_PGTSCAN_BIT,
 	PGHOT_HINTFAULT_BIT,
+	PGHOT_FMA_BIT,
 	PGHOT_MAX_BIT
 };
 
@@ -52,6 +55,8 @@ enum pghot_src_enabled {
 #define PGHOT_PGTSCAN_ENABLED BIT(PGHOT_PGTSCAN_BIT)
 #define PGHOT_HINTFAULT_ENABLED BIT(PGHOT_HINTFAULT_BIT)
 #define PGHOT_SRC_ENABLED_MASK GENMASK(PGHOT_MAX_BIT - 1, 0)
+#define PGHOT_FMA_ENABLED BIT(PGHOT_FMA_BIT)
+#define PGHOT_SRC_ENABLED_MASK GENMASK(PGHOT_MAX_BIT - 1, 0)
 
 #define PGHOT_DEFAULT_FREQ_THRESHOLD 2
 
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index 67efbca9051c..ac1f28646b9c 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -193,6 +193,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 		PGHOT_RECORD_HWHINTS,
 		PGHOT_RECORD_PGTSCANS,
 		PGHOT_RECORD_HINTFAULTS,
+		PGHOT_RECORD_FMA,
 #ifdef CONFIG_HWMEM_PROFILER
 		HWHINT_NR_EVENTS,
 		HWHINT_KERNEL,
diff --git a/mm/pghot-tunables.c b/mm/pghot-tunables.c
index 79afbcb1e4f0..11c7f742a1be 100644
--- a/mm/pghot-tunables.c
+++ b/mm/pghot-tunables.c
@@ -124,6 +124,13 @@ static void pghot_src_enabled_update(unsigned int enabled)
 		else
 			static_branch_disable(&pghot_src_hintfaults);
 	}
+
+	if (changed & PGHOT_FMA_ENABLED) {
+		if (enabled & PGHOT_FMA_ENABLED)
+			static_branch_enable(&pghot_src_fma);
+		else
+			static_branch_disable(&pghot_src_fma);
+	}
 }
 
 static ssize_t pghot_src_enabled_write(struct file *filp, const char __user *ubuf,
diff --git a/mm/pghot.c b/mm/pghot.c
index 6fc76c1eaff8..537f4af816ff 100644
--- a/mm/pghot.c
+++ b/mm/pghot.c
@@ -43,6 +43,7 @@ static unsigned int sysctl_pghot_promote_rate_limit = 65536;
 DEFINE_STATIC_KEY_FALSE(pghot_src_hwhints);
 DEFINE_STATIC_KEY_FALSE(pghot_src_pgtscans);
 DEFINE_STATIC_KEY_FALSE(pghot_src_hintfaults);
+DEFINE_STATIC_KEY_FALSE(pghot_src_fma);
 
 #ifdef CONFIG_SYSCTL
 static const struct ctl_table pghot_sysctls[] = {
@@ -113,6 +114,11 @@ int pghot_record_access(unsigned long pfn, int nid, int src, unsigned long now)
 			return -EINVAL;
 		count_vm_event(PGHOT_RECORD_HINTFAULTS);
 		break;
+	case PGHOT_FMA:
+		if (!static_branch_likely(&pghot_src_fma))
+			return -EINVAL;
+		count_vm_event(PGHOT_RECORD_FMA);
+		break;
 	default:
 		return -EINVAL;
 	}
diff --git a/mm/swap.c b/mm/swap.c
index 2260dcd2775e..31a654b19844 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -37,6 +37,8 @@
 #include <linux/page_idle.h>
 #include <linux/local_lock.h>
 #include <linux/buffer_head.h>
+#include <linux/pghot.h>
+#include <linux/memory-tiers.h>
 
 #include "internal.h"
 
@@ -454,8 +456,14 @@ static bool lru_gen_clear_refs(struct folio *folio)
  */
 void folio_mark_accessed(struct folio *folio)
 {
+	unsigned long pfn = folio_pfn(folio);
+
 	if (folio_test_dropbehind(folio))
 		return;
+
+	if (!node_is_toptier(pfn_to_nid(pfn)))
+		pghot_record_access(pfn, NUMA_NO_NODE, PGHOT_FMA, jiffies);
+
 	if (lru_gen_enabled()) {
 		lru_gen_inc_refs(folio);
 		return;
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 62c47f44edf0..c4d90baf440b 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1506,6 +1506,7 @@ const char * const vmstat_text[] = {
 	[I(PGHOT_RECORD_HWHINTS)] = "pghot_recorded_hwhints",
 	[I(PGHOT_RECORD_PGTSCANS)] = "pghot_recorded_pgtscans",
 	[I(PGHOT_RECORD_HINTFAULTS)] = "pghot_recorded_hintfaults",
+	[I(PGHOT_RECORD_FMA)] = "pghot_recorded_fma",
 #ifdef CONFIG_HWMEM_PROFILER
 	[I(HWHINT_NR_EVENTS)] = "hwhint_nr_events",
 	[I(HWHINT_KERNEL)] = "hwhint_kernel",
-- 
2.34.1
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Thu, 29 Jan 2026 20:10:43 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
On 29-Jan-26 8:10 PM, Bharata B Rao wrote: Here is the first set of results from a microbenchmark: Test system details ------------------- 3 node AMD Zen5 system with 2 regular NUMA nodes (0, 1) and a CXL node (2) $ numactl -H available: 3 nodes (0-2) node 0 cpus: 0-95,192-287 node 0 size: 128460 MB node 1 cpus: 96-191,288-383 node 1 size: 128893 MB node 2 cpus: node 2 size: 257993 MB node distances: node 0 1 2 0: 10 32 50 1: 32 10 60 2: 255 255 10 Hotness sources --------------- NUMAB0 - Without NUMA Balancing in base case and with no source enabled in the patched case. No migrations occur. NUMAB2 - Existing hot page promotion for the base case and use of hint faults as source in the patched case. pgtscan - Klruscand (MGLRU based PTE A bit scanning) source hwhints - IBS as source Pghot by default promotes after two accesses but for NUMAB2 source, promotion is done after one access to match the base behaviour. (/sys/kernel/debug/pghot/freq_threshold=1) ============================================================== Scenario 1 - Enough memory in toptier and hence only promotion ============================================================== Multi-threaded application with 64 threads that access memory at 4K granularity repetitively and randomly. The number of accesses per thread and the randomness pattern for each thread are fixed beforehand. The accesses are divided into stores and loads in the ratio of 50:50. Benchmark threads run on Node 0, while memory is initially provisioned on CXL node 2 before the accesses start. Repetitive accesses results in lowertier pages becoming hot and kmigrated detecting and migrating them. The benchmark score is the time taken to finish the accesses in microseconds. The sooner it finishes the better it is. All the numbers shown below are average of 3 runs. 
Default mode - Time taken (microseconds, lower is better) --------------------------------------------------------- Source Base Pghot --------------------------------------------------------- NUMAB0 117,069,417 115,802,776 NUMAB2 102,918,471 103,378,828 pgtscan NA 110,203,286 hwhints NA 92,880,388 --------------------------------------------------------- Default mode - Pages migrated (pgpromote_success) --------------------------------------------------------- Source Base Pghot --------------------------------------------------------- NUMAB0 0 0 NUMAB2 2097147 2097131 pgtscan NA 2097130 hwhints NA 1706556 --------------------------------------------------------- Precision mode - Time taken (microseconds, lower is better) ----------------------------------------------------------- Source Base Pghot ----------------------------------------------------------- NUMAB0 117,069,417 115,078,527 NUMAB2 102,918,471 101,742,985 pgtscan NA 110,024,513 NA hwhints NA 101,163,603 NA ----------------------------------------------------------- Precision mode - Pages migrated (pgpromote_success) --------------------------------------------------- Source Base Pghot --------------------------------------------------- NUMAB0 0 0 NUMAB2 2097147 2097144 pgtscan NA 2097129 hwhints NA 1144304 --------------------------------------------------- - The NUMAB2 benchmark numbers and pgpromote_success numbers more or less match in base and patched case. - Though the pgtscan case promotes all possible pages, the benchmark number suffers. This source needs tuning. - Hwhints case is able to provide benchmark numbers similar to base NUMAB2 even with less number of migrations. - With both default and precision modes of pghot the benchmark behaves more or less similarly. 
==============================================================
Scenario 2 - Toptier memory overcommitted, promotion + demotion
==============================================================
Single-threaded application that allocates memory on both DRAM and CXL
nodes using mmap(MAP_POPULATE). Every 1G region of allocated memory on
the CXL node is accessed at 4K granularity, randomly and repetitively,
to build up the notion of hotness in the 1GB region that is under
access. This should drive promotion.

For promotion to work successfully, the DRAM memory that has been
provisioned (and is not being accessed) should be demoted first. There
is enough free memory in the CXL node for demotions.

In summary, this benchmark creates memory pressure on the DRAM node and
does CXL memory accesses to drive both demotion and promotion. The
number of accesses is fixed and hence, the quicker the accessed pages
get promoted to DRAM, the sooner the benchmark is expected to finish.
All the numbers shown below are averages of 3 runs.
DRAM-node = 1
CXL-node = 2
Initial DRAM alloc ratio = 75%
Allocation-size = 171798691840
Initial DRAM Alloc-size = 128849018880
Initial CXL Alloc-size = 42949672960
Hot-region-size = 1073741824
Nr-regions = 160
Nr-regions DRAM = 120 (provisioned but not accessed)
Nr-hot-regions CXL = 40
Access pattern = random
Access granularity = 4096
Delay b/n accesses = 0
Load/store ratio = 50l50s
THP used = no
Nr accesses = 42949672960
Nr repetitions = 1024

Default mode - Time taken (microseconds, lower is better)
---------------------------------------------------------
Source        Base            Pghot
---------------------------------------------------------
NUMAB0        63,809,267      60,794,786
NUMAB2        67,541,601      62,376,991
pgtscan       NA              67,902,126
hwhints       NA              59,872,525
---------------------------------------------------------

Default mode - Pages migrated (pgpromote_success)
-------------------------------------------------
Source        Base            Pghot
-------------------------------------------------
NUMAB0        0               0
NUMAB2        179635          932693 (High R2R variation in base)
pgtscan       NA              27487
hwhints       NA              274
-------------------------------------------------

Precision mode - Time taken (microseconds, lower is better)
-----------------------------------------------------------
Source        Base            Pghot
-----------------------------------------------------------
NUMAB0        63,809,267      64,553,914
NUMAB2        67,541,601      62,148,082
pgtscan       NA              65,073,396
hwhints       NA              59,958,655
-----------------------------------------------------------

Precision mode - Pages migrated (pgpromote_success)
---------------------------------------------------
Source        Base            Pghot
---------------------------------------------------
NUMAB0        0               0
NUMAB2        179635          988360 (High R2R variation in base)
pgtscan       NA              21418  (High R2R variation in patched)
hwhints       NA              174    (High R2R variation in patched)
---------------------------------------------------

- The base case itself doesn't show any improvement in benchmark
  numbers due to hot page promotion. The same pattern is seen in the
  pghot case with all the sources except hwhints. The benchmark itself
  may need tuning so that promotion helps.
- There is a high run-to-run variation in the number of pages promoted
  in the base case.
- Most promotion attempts in the base case fail because the NUMA hint
  fault latency is found to exceed the threshold value (default
  threshold is 1000ms) in the majority of the promotion attempts.
- Unlike base NUMAB2, where the hint fault latency is the difference
  between the PTE update time (during scanning) and the access time
  (hint fault), pghot uses a single latency threshold (4000ms in
  pghot-default and 5000ms in pghot-precise) for two purposes:
  1. If the time difference between successive accesses is within the
     threshold, the page is marked as hot.
  2. Later, when kmigrated picks up the page for migration, it will
     migrate only if the difference between the current time and the
     time when the page was marked hot is within the threshold.
  Because of the above difference in behaviour, more pages qualify for
  promotion compared to base NUMAB2.
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Mon, 9 Feb 2026 08:55:44 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
Hi,

This is v5 of pghot, a hot-page tracking and promotion subsystem. The
major change in v5 is reducing the default hotness record size to 1
byte per PFN and adding an optional precision mode
(CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN.

This patchset introduces a new subsystem for hot page tracking and
promotion (pghot) with the following goals:

- Unify hot page detection from multiple sources like hint faults,
  page table scans, hardware hints (AMD IBS).
- Decouple detection from migration.
- Centralize promotion logic via a per-lower-tier-node kmigrated
  kernel thread.
- Move promotion rate-limiting and related logic used by
  numa_balancing=2 (current NUMA-balancing-based promotion) from the
  scheduler to pghot for broader reuse.

Currently, multiple kernel subsystems detect page accesses
independently. This patchset consolidates accesses from these
mechanisms by providing:

- A common API for reporting page accesses.
- Shared infrastructure for tracking hotness at PFN granularity.
- Per-lower-tier-node kernel threads for promoting pages.

Here is a brief summary of how this subsystem works:

- Tracks frequency and last access time.
- Additionally, the accessing NUMA node ID (NID) for each recorded
  access is also tracked in precision mode.
- These hotness parameters are maintained in a per-PFN hotness record
  within the existing mem_section data structure.
- In default mode, one byte (u8) is used for the hotness record. 5
  bits are used to store time, and a bucketing scheme is used to
  represent a total access time of up to 4s with HZ=1000. The default
  toptier NID (0) is used as the target for promotion, which can be
  changed via a debugfs tunable.
- In precision mode, 4 bytes (u32) are used for each hotness record.
  14 bits are used to store time, which can represent around 16s with
  HZ=1000.
- Classifies pages as hot based on configurable thresholds.
- Pages classified as hot are marked as ready for migration using the
  ready bit. Both modes use the MSB of the hotness record as the ready
  bit.
- Per-lower-tier-node kmigrated threads periodically scan the PFNs of
  lower-tier nodes, checking for the migration-ready bit to perform
  batched migrations. The interval between successive scans and the
  batching value are configurable via debugfs tunables.

Memory overhead
---------------
Default mode: 1 byte per lower-tier PFN. For 1TB of lower-tier memory
this amounts to 256MB overhead (assuming 4K pages).
Precision mode: 4 bytes per lower-tier PFN. For 1TB of lower-tier
memory this amounts to 1GB overhead.

Bit layout of hotness record
----------------------------
Default mode
- Bits 0-1: Frequency (2 bits, 4 access samples)
- Bits 2-6: Bucketed time (5 bits, up to 4s with HZ=1000)
- Bit 7: Migration ready bit

Precision mode
- Bits 0-9: Target NID (10 bits)
- Bits 10-12: Frequency (3 bits, 8 access samples)
- Bits 13-26: Time (14 bits, up to 16s with HZ=1000)
- Bits 27-30: Reserved
- Bit 31: Migration ready bit

Integrated sources
------------------
1. IBS - Instruction Based Sampling, a hardware-based sampling
   mechanism present on AMD CPUs.
2. klruscand - PTE A-bit scanning built on MGLRU's walk helpers.
3. NUMA Balancing (tiering mode)
4. folio_mark_accessed() - Page cache access tracking (unmapped page
   cache pages)

Changes in v5
=============
- Significant reduction in memory overhead for storing per-PFN hotness
  data.
- Two modes of operation (default and precision mode). The code which
  is specific to each implementation is moved to its own individual
  file.
- Many bug fixes, code cleanups and code reorganization.

Results
=======
TODO: Will post benchmark numbers as a reply to this patchset soon.
This v5 patchset applies on top of upstream commit 4941a17751c9 and can
be fetched from:
https://github.com/AMDESE/linux-mm/tree/bharata/pghot-rfcv5

v4: https://lore.kernel.org/linux-mm/20251206101423.5004-1-bharata@amd.com/
v3: https://lore.kernel.org/linux-mm/20251110052343.208768-1-bharata@amd.com/
v2: https://lore.kernel.org/linux-mm/20250910144653.212066-1-bharata@amd.com/
v1: https://lore.kernel.org/linux-mm/20250814134826.154003-1-bharata@amd.com/
v0: https://lore.kernel.org/linux-mm/20250306054532.221138-1-bharata@amd.com/

Bharata B Rao (7):
  mm: migrate: Allow misplaced migration without VMA
  mm: Hot page tracking and promotion
  mm: pghot: Precision mode for pghot
  mm: sched: move NUMA balancing tiering promotion to pghot
  x86: ibs: In-kernel IBS driver for memory access profiling
  x86: ibs: Enable IBS profiling for memory accesses
  mm: pghot: Add folio_mark_accessed() as hotness source

Gregory Price (1):
  migrate: Add migrate_misplaced_folios_batch()

Kinsey Ho (2):
  mm: mglru: generalize page table walk
  mm: klruscand: use mglru scanning for page promotion

 Documentation/admin-guide/mm/pghot.txt |  89 +++++
 arch/x86/events/amd/ibs.c              |  10 +
 arch/x86/include/asm/entry-common.h    |   3 +
 arch/x86/include/asm/hardirq.h         |   2 +
 arch/x86/include/asm/msr-index.h       |  16 +
 arch/x86/mm/Makefile                   |   1 +
 arch/x86/mm/ibs.c                      | 349 +++++++++++++++++
 include/linux/migrate.h                |   6 +
 include/linux/mmzone.h                 |  26 ++
 include/linux/pghot.h                  | 142 +++++++
 include/linux/vm_event_item.h          |  26 ++
 kernel/sched/debug.c                   |   1 -
 kernel/sched/fair.c                    | 152 +-------
 mm/Kconfig                             |  46 +++
 mm/Makefile                            |   7 +
 mm/huge_memory.c                       |  26 +-
 mm/internal.h                          |   4 +
 mm/klruscand.c                         | 110 ++++++
 mm/memory.c                            |  31 +-
 mm/migrate.c                           |  41 +-
 mm/mm_init.c                           |  10 +
 mm/pghot-default.c                     |  73 ++++
 mm/pghot-precise.c                     |  70 ++++
 mm/pghot-tunables.c                    | 196 ++++++++++
 mm/pghot.c                             | 505 +++++++++++++++++++++++++
 mm/swap.c                              |   8 +
 mm/vmscan.c                            | 181 ++++++---
 mm/vmstat.c                            |  26 ++
 28 files changed, 1917 insertions(+), 240 deletions(-)
 create mode 100644 Documentation/admin-guide/mm/pghot.txt
 create mode 100644 arch/x86/mm/ibs.c
 create mode 100644 include/linux/pghot.h
 create mode 100644 mm/klruscand.c
 create mode 100644 mm/pghot-default.c
 create mode 100644 mm/pghot-precise.c
 create mode 100644 mm/pghot-tunables.c
 create mode 100644 mm/pghot.c

--
2.34.1
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On 29-Jan-26 8:10 PM, Bharata B Rao wrote:

Numbers from the redis-memtier benchmark:

Test system details
-------------------
3 node AMD Zen5 system with 2 regular NUMA nodes (0, 1) and a CXL node (2)

$ numactl -H
available: 3 nodes (0-2)
node 0 cpus: 0-95,192-287
node 0 size: 128460 MB
node 1 cpus: 96-191,288-383
node 1 size: 128893 MB
node 2 cpus:
node 2 size: 257993 MB
node distances:
node   0   1   2
  0:  10  32  50
  1:  32  10  60
  2: 255 255  10

Hotness sources
---------------
NUMAB0 - Without NUMA Balancing in the base case and with no source
         enabled in the patched case. No migrations occur.
NUMAB2 - Existing hot page promotion for the base case and use of hint
         faults as the source in the patched case.

Pghot by default promotes after two accesses, but for the NUMAB2 source
promotion is done after one access to match the base behaviour
(/sys/kernel/debug/pghot/freq_threshold=1).

==============================================================
Scenario 1 - Enough memory in toptier and hence only promotion
==============================================================
In the setup phase, a 64GB database is provisioned and explicitly moved
to Node 2 by migrating redis-server's memory to Node 2. Memtier is run
on Node 1. Parallel distribution, 50% of the keys accessed, each 4
times.

16 threads
100 connections per thread
77808 requests per client

==========================================================================================================
Type                          Ops/sec    Avg. Latency  p50 Latency  p99 Latency  p99.9 Latency  KB/sec
----------------------------------------------------------------------------------------------------------
Base, NUMAB0 Totals           225827.75  226.49746     225.27900    425.98300    454.65500      513106.09
Base, NUMAB2 Totals           254869.29  205.61759     216.06300    399.35900    454.65500      579091.74
pghot-default, NUMAB2 Totals  264229.35  202.81411     215.03900    393.21500    446.46300      600358.86
pghot-precise, NUMAB2 Totals  261136.17  203.32692     215.03900    391.16700    446.46300      593330.81
==========================================================================================================

                       pgpromote_success
========================================
Base, NUMAB0                    0
Base, NUMAB2           10,435,178
pghot-default, NUMAB2  10,435,031
pghot-precise, NUMAB2  10,435,245
========================================

- There is a clear benefit of hot page promotion seen. Both base and
  pghot show similar benefits.
- The number of pages promoted in both cases is more or less the same.

==============================================================
Scenario 2 - Toptier memory overcommitted, promotion + demotion
==============================================================
In the setup phase, a 192GB database is provisioned. The database
occupies Node 1 entirely (~128GB) and spills over to Node 2 (~64GB).
Memtier is run on Node 1. Parallel distribution, 50% of the keys
accessed, each 4 times.

16 threads
100 connections per thread
233424 requests per client

==========================================================================================================
Type                          Ops/sec    Avg. Latency  p50 Latency  p99 Latency  p99.9 Latency  KB/sec
----------------------------------------------------------------------------------------------------------
Base, NUMAB0 Totals           246474.55  211.90623     192.51100    370.68700    448.51100      560235.63
Base, NUMAB2 Totals           232790.88  221.18604     214.01500    419.83900    509.95100      529132.72
pghot-default, NUMAB2 Totals  241615.60  216.12761     210.94300    391.16700    475.13500      549191.27
pghot-precise, NUMAB2 Totals  238557.37  217.57630     207.87100    395.26300    471.03900      542239.92
==========================================================================================================

                       pgpromote_success  pgdemote_kswapd
=========================================================
Base, NUMAB0                    0             832,494
Base, NUMAB2              352,075             720,409
pghot-default, NUMAB2  25,865,321          26,154,984
pghot-precise, NUMAB2  25,525,429          25,838,095
=========================================================

- No clear benefit is seen with hot page promotion in both the base and
  pghot cases.
- Most promotion attempts in the base case fail because the NUMA hint
  fault latency is found to exceed the threshold value (default
  threshold of 1000ms) in the majority of the promotion attempts.
- Unlike base NUMAB2, where the hint fault latency is the difference
  between the PTE update time (during scanning) and the access time
  (hint fault), pghot uses a single latency threshold (4000ms in
  pghot-default and 5000ms in pghot-precise) for two purposes:
  1. If the time difference between successive accesses is within the
     threshold, the page is marked as hot.
  2. Later, when kmigrated picks up the page for migration, it will
     migrate only if the difference between the current time and the
     time when the page was marked hot is within the threshold.
  Because of the above difference in behaviour, more pages qualify for
  promotion compared to base NUMAB2.
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Mon, 9 Feb 2026 09:00:48 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
On 29-Jan-26 8:10 PM, Bharata B Rao wrote:

Here are Graph500 numbers for the hint fault source:

Test system details
-------------------
3 node AMD Zen5 system with 2 regular NUMA nodes (0, 1) and a CXL node (2)

$ numactl -H
available: 3 nodes (0-2)
node 0 cpus: 0-95,192-287
node 0 size: 128460 MB
node 1 cpus: 96-191,288-383
node 1 size: 128893 MB
node 2 cpus:
node 2 size: 257993 MB
node distances:
node   0   1   2
  0:  10  32  50
  1:  32  10  60
  2: 255 255  10

Hotness sources
---------------
NUMAB0 - Without NUMA Balancing in the base case and with no source
         enabled in the pghot case. No migrations occur.
NUMAB2 - Existing hot page promotion for the base case and use of hint
         faults as the source in the pghot case.

Pghot by default promotes after two accesses, but for the NUMAB2 source
promotion is done after one access to match the base behaviour
(/sys/kernel/debug/pghot/freq_threshold=1).

Graph500 details
----------------
Command: mpirun -n 128 --bind-to core --map-by core
graph500/src/graph500_reference_bfs 28 16

After the graph creation, the processes are stopped and data is
migrated to CXL node 2 before continuing, so that the BFS phase starts
accessing lower-tier memory. Total memory usage is slightly over 100GB
and will fit within Nodes 0 and 1. Hence there is no memory pressure to
induce demotions.

=====================================================================================
                     Base         Base         pghot-default  pghot-precise
                     NUMAB0       NUMAB2       NUMAB2         NUMAB2
=====================================================================================
harmonic_mean_TEPS   5.10676e+08  7.56804e+08  5.92473e+08    7.47091e+08
mean_time            8.41027      5.67508      7.24915        5.74886
median_TEPS          5.11535e+08  7.24252e+08  5.63155e+08    7.71638e+08
max_TEPS             5.1785e+08   1.06051e+09  7.88018e+08    1.0504e+09
pgpromote_success    0            13557718     13737730       13734469
numa_pte_updates     0            26491591     26848847       26726856
numa_hint_faults     0            13558077     13882743       13798024
=====================================================================================

- The base case shows a good improvement with NUMAB2 (48%) in
  harmonic_mean_TEPS.
- The same improvement is maintained with pghot-precise too (46%).
- The pghot-default mode doesn't show a benefit even while achieving
  similar page promotion numbers. This mode doesn't track the accessing
  NID and by default promotes to NID=0, which probably isn't all that
  beneficial, as processes are running on both Node 0 and Node 1.
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Wed, 11 Feb 2026 21:00:26 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
On 29-Jan-26 8:10 PM, Bharata B Rao wrote:

We should hold a folio reference before the above call, which isolates
the folio from the LRU. Otherwise we may hit
VM_BUG_ON_FOLIO(!folio_ref_count(folio), folio) in folio_isolate_lru().

I hit this only when running the Graph500 benchmark and have fixed it
on GitHub at:
https://github.com/AMDESE/linux-mm/tree/bharata/pghot-rfcv6-pre

The numbers that I have posted for the micro-benchmarks and
redis-memtier are without this fix, while the Graph500 numbers are with
this fix.

Regards,
Bharata.
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Wed, 11 Feb 2026 21:10:23 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotion (pghot) with the following goals: - Unify hot page detection from multiple sources like hint faults, page table scans, hardware hints (AMD IBS). - Decouple detection from migration. - Centralize promotion logic via per-lower-tier-node kmigrated kernel thread. - Move promotion rate‑limiting and related logic used by numa_balancing=2 (current NUMA balancing–based promotion) from the scheduler to pghot for broader reuse. Currently, multiple kernel subsystems detect page accesses independently. This patchset consolidates accesses from these mechanisms by providing: - A common API for reporting page accesses. - Shared infrastructure for tracking hotness at PFN granularity. - Per-lower-tier-node kernel threads for promoting pages. Here is a brief summary of how this subsystem works: - Tracks frequency and last access time. - Additionally, the accessing NUMA node ID (NID) for each recorded access is also tracked in the precision mode. - These hotness parameters are maintained in a per-PFN hotness record within the existing mem_section data structure. - In default mode, one byte (u8) is used for hotness record. 5 bits are used to store time and bucketing scheme is used to represent a total access time up to 4s with HZ=1000. Default toptier NID (0) is used as the target for promotion which can be changed via debugfs tunable. - In precision mode, 4 bytes (u32) are used for each hotness record. 14 bits are used to store time which can represent around 16s with HZ=1000. - Classifies pages as hot based on configurable thresholds. - Pages classified as hot are marked as ready for migration using the ready bit. Both modes use MSB of the hotness record as ready bit. 
- Per-lower-tier-node kmigrated threads periodically scan the PFNs of
  lower-tier nodes, checking for the migration-ready bit to perform
  batched migrations. The interval between successive scans and the
  batching value are configurable via debugfs tunables.

Memory overhead
---------------
Default mode: 1 byte per lower-tier PFN. For 1TB of lower-tier memory
this amounts to 256MB of overhead (assuming 4K pages).
Precision mode: 4 bytes per lower-tier PFN. For 1TB of lower-tier memory
this amounts to 1GB of overhead.

Bit layout of hotness record
----------------------------
Default mode
- Bits 0-1: Frequency (2 bits, 4 access samples)
- Bits 2-6: Bucketed time (5 bits, up to 4s with HZ=1000)
- Bit 7: Migration ready bit

Precision mode
- Bits 0-9: Target NID (10 bits)
- Bits 10-12: Frequency (3 bits, 8 access samples)
- Bits 13-26: Time (14 bits, up to 16s with HZ=1000)
- Bits 27-30: Reserved
- Bit 31: Migration ready bit

Integrated sources
------------------
1. IBS - Instruction Based Sampling, a hardware-based sampling mechanism
   present on AMD CPUs.
2. klruscand - PTE A-bit scanning built on MGLRU's walk helpers.
3. NUMA Balancing (Tiering mode)
4. folio_mark_accessed() - Page cache access tracking (unmapped page
   cache pages)

Changes in v5
=============
- Significant reduction in memory overhead for storing per-PFN hotness
  data.
- Two modes of operation (default and precision). The code specific to
  each implementation is moved into its own file.
- Many bug fixes, code cleanups and code reorganization.

Results
=======
TODO: Will post benchmark numbers as a reply to this patchset soon.
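As a rough illustration of the record formats described under "Bit layout
of hotness record" above, the two layouts can be sketched as C pack/unpack
helpers. All identifiers here (hot_pack(), hotp_pack(), the HOT_*/HOTP_*
macros) are hypothetical, chosen for this sketch; they are not the actual
names used in the patches.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative sketch only: field positions follow the cover letter's
 * "Bit layout of hotness record" section, not the patchset's real code.
 *
 * Memory overhead check (default mode): 1 byte per 4K page means
 * 2^40 / 2^12 = 2^28 bytes = 256MB of records per 1TB of lower tier.
 */

/* Default mode: u8 record */
#define HOT_FREQ_MASK   0x03u                      /* bits 0-1: frequency     */
#define HOT_TIME_SHIFT  2
#define HOT_TIME_MASK   (0x1fu << HOT_TIME_SHIFT)  /* bits 2-6: bucketed time */
#define HOT_READY_BIT   (1u << 7)                  /* bit 7: migration ready  */

static inline uint8_t hot_pack(unsigned int freq, unsigned int tbucket,
			       int ready)
{
	return (freq & HOT_FREQ_MASK) |
	       ((tbucket << HOT_TIME_SHIFT) & HOT_TIME_MASK) |
	       (ready ? HOT_READY_BIT : 0u);
}

/* Precision mode: u32 record */
#define HOTP_NID_MASK    0x3ffu     /* bits 0-9: target NID   */
#define HOTP_FREQ_SHIFT  10         /* bits 10-12: frequency  */
#define HOTP_TIME_SHIFT  13         /* bits 13-26: time       */
#define HOTP_READY_BIT   (1u << 31) /* bit 31: migration ready */

static inline uint32_t hotp_pack(unsigned int nid, unsigned int freq,
				 unsigned int time, int ready)
{
	return (nid & HOTP_NID_MASK) |
	       ((freq & 0x7u) << HOTP_FREQ_SHIFT) |
	       ((time & 0x3fffu) << HOTP_TIME_SHIFT) |
	       (ready ? HOTP_READY_BIT : 0u);
}

static inline unsigned int hotp_nid(uint32_t rec)
{
	return rec & HOTP_NID_MASK;
}

static inline unsigned int hotp_freq(uint32_t rec)
{
	return (rec >> HOTP_FREQ_SHIFT) & 0x7u;
}

static inline int hotp_ready(uint32_t rec)
{
	return !!(rec & HOTP_READY_BIT);
}
```

Note how the MSB serves as the ready bit in both layouts, so kmigrated's
PFN scan only needs a single-bit test per record regardless of the mode.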
This v5 patchset applies on top of upstream commit 4941a17751c9 and can
be fetched from:

https://github.com/AMDESE/linux-mm/tree/bharata/pghot-rfcv5

v4: https://lore.kernel.org/linux-mm/20251206101423.5004-1-bharata@amd.com/
v3: https://lore.kernel.org/linux-mm/20251110052343.208768-1-bharata@amd.com/
v2: https://lore.kernel.org/linux-mm/20250910144653.212066-1-bharata@amd.com/
v1: https://lore.kernel.org/linux-mm/20250814134826.154003-1-bharata@amd.com/
v0: https://lore.kernel.org/linux-mm/20250306054532.221138-1-bharata@amd.com/

Bharata B Rao (7):
  mm: migrate: Allow misplaced migration without VMA
  mm: Hot page tracking and promotion
  mm: pghot: Precision mode for pghot
  mm: sched: move NUMA balancing tiering promotion to pghot
  x86: ibs: In-kernel IBS driver for memory access profiling
  x86: ibs: Enable IBS profiling for memory accesses
  mm: pghot: Add folio_mark_accessed() as hotness source

Gregory Price (1):
  migrate: Add migrate_misplaced_folios_batch()

Kinsey Ho (2):
  mm: mglru: generalize page table walk
  mm: klruscand: use mglru scanning for page promotion

 Documentation/admin-guide/mm/pghot.txt |  89 +++++
 arch/x86/events/amd/ibs.c              |  10 +
 arch/x86/include/asm/entry-common.h    |   3 +
 arch/x86/include/asm/hardirq.h         |   2 +
 arch/x86/include/asm/msr-index.h       |  16 +
 arch/x86/mm/Makefile                   |   1 +
 arch/x86/mm/ibs.c                      | 349 +++++++++++++++++
 include/linux/migrate.h                |   6 +
 include/linux/mmzone.h                 |  26 ++
 include/linux/pghot.h                  | 142 +++++++
 include/linux/vm_event_item.h          |  26 ++
 kernel/sched/debug.c                   |   1 -
 kernel/sched/fair.c                    | 152 +-------
 mm/Kconfig                             |  46 +++
 mm/Makefile                            |   7 +
 mm/huge_memory.c                       |  26 +-
 mm/internal.h                          |   4 +
 mm/klruscand.c                         | 110 ++++++
 mm/memory.c                            |  31 +-
 mm/migrate.c                           |  41 +-
 mm/mm_init.c                           |  10 +
 mm/pghot-default.c                     |  73 ++++
 mm/pghot-precise.c                     |  70 ++++
 mm/pghot-tunables.c                    | 196 ++++++++++
 mm/pghot.c                             | 505 +++++++++++++++++++++++++
 mm/swap.c                              |   8 +
 mm/vmscan.c                            | 181 ++++++---
 mm/vmstat.c                            |  26 ++
 28 files changed, 1917 insertions(+), 240 deletions(-)
 create mode 100644 Documentation/admin-guide/mm/pghot.txt
 create mode 100644 arch/x86/mm/ibs.c
 create mode 100644 include/linux/pghot.h
 create mode 100644 mm/klruscand.c
 create mode 100644 mm/pghot-default.c
 create mode 100644 mm/pghot-precise.c
 create mode 100644 mm/pghot-tunables.c
 create mode 100644 mm/pghot.c

-- 
2.34.1
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On Wed, Feb 11, 2026 at 09:00:26PM +0530, Bharata B Rao wrote:

Can you contextualize TEPS? Higher better? Higher worse? etc.
Unfamiliar with this benchmark.

~Gregory
From: Gregory Price <gourry@gourry.net>
Date: Wed, 11 Feb 2026 11:04:42 -0500
On Wed, Feb 11, 2026 at 09:00:26PM +0530, Bharata B Rao wrote:

Lacking access-nid data, maybe it's better to select a random (or
round-robin) node in the upper tier? That would at least approach 1/N
accuracy in promotion for most access patterns.

~Gregory
From: Gregory Price <gourry@gourry.net>
Date: Wed, 11 Feb 2026 11:06:57 -0500
On Wed, Feb 11, 2026 at 09:10:23PM +0530, Bharata B Rao wrote:

Also a relevant note from other work I'm doing: we may want a fast-out
for zone-device folios here. We should not bother tracking those at all.
(This may also become relevant for private-node memory, but I may try to
generalize the zone_device and private-node checks as the conditions are
very similar.)

~Gregory
From: Gregory Price <gourry@gourry.net>
Date: Wed, 11 Feb 2026 11:08:59 -0500
On 11-Feb-26 9:38 PM, Gregory Price wrote:

Yes, zone-device folios aren't tracked by pghot. They get discarded by
pghot_record_access() itself.

Regards,
Bharata.
From: Bharata B Rao <bharata@amd.com>
Date: Thu, 12 Feb 2026 07:33:43 +0530
On 11-Feb-26 9:34 PM, Gregory Price wrote:

In the Graph500 benchmark, higher TEPS (Traversed Edges Per Second)
values are better.

Regards,
Bharata.
From: Bharata B Rao <bharata@amd.com>
Date: Thu, 12 Feb 2026 07:46:34 +0530
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotion (pghot) with the following goals: - Unify hot page detection from multiple sources like hint faults, page table scans, hardware hints (AMD IBS). - Decouple detection from migration. - Centralize promotion logic via per-lower-tier-node kmigrated kernel thread. - Move promotion rate‑limiting and related logic used by numa_balancing=2 (current NUMA balancing–based promotion) from the scheduler to pghot for broader reuse. Currently, multiple kernel subsystems detect page accesses independently. This patchset consolidates accesses from these mechanisms by providing: - A common API for reporting page accesses. - Shared infrastructure for tracking hotness at PFN granularity. - Per-lower-tier-node kernel threads for promoting pages. Here is a brief summary of how this subsystem works: - Tracks frequency and last access time. - Additionally, the accessing NUMA node ID (NID) for each recorded access is also tracked in the precision mode. - These hotness parameters are maintained in a per-PFN hotness record within the existing mem_section data structure. - In default mode, one byte (u8) is used for hotness record. 5 bits are used to store time and bucketing scheme is used to represent a total access time up to 4s with HZ=1000. Default toptier NID (0) is used as the target for promotion which can be changed via debugfs tunable. - In precision mode, 4 bytes (u32) are used for each hotness record. 14 bits are used to store time which can represent around 16s with HZ=1000. - Classifies pages as hot based on configurable thresholds. - Pages classified as hot are marked as ready for migration using the ready bit. Both modes use MSB of the hotness record as ready bit. 
- Per-lower-tier-node kmigrated threads periodically scan the PFNs of lower-tier nodes, checking for the migration-ready bit to perform batched migrations. The interval between successive scans and the batching value are configurable via debugfs tunables.

Memory overhead
---------------
Default mode: 1 byte per lower-tier PFN. For 1TB of lower-tier memory this amounts to 256MB of overhead (assuming 4K pages).
Precision mode: 4 bytes per lower-tier PFN. For 1TB of lower-tier memory this amounts to 1GB of overhead.

Bit layout of hotness record
----------------------------
Default mode
- Bits 0-1: Frequency (2 bits, 4 access samples)
- Bits 2-6: Bucketed time (5 bits, up to 4s with HZ=1000)
- Bit 7: Migration ready bit

Precision mode
- Bits 0-9: Target NID (10 bits)
- Bits 10-12: Frequency (3 bits, 8 access samples)
- Bits 13-26: Time (14 bits, up to 16s with HZ=1000)
- Bits 27-30: Reserved
- Bit 31: Migration ready bit

Integrated sources
------------------
1. IBS - Instruction Based Sampling, a hardware-based sampling mechanism present on AMD CPUs.
2. klruscand - PTE-A bit scanning built on MGLRU's walk helpers.
3. NUMA Balancing (tiering mode).
4. folio_mark_accessed() - Page cache access tracking (unmapped page cache pages).

Changes in v5
=============
- Significant reduction in memory overhead for storing per-PFN hotness data.
- Two modes of operation (default and precision). The code specific to each implementation is moved to its own file.
- Many bug fixes, code cleanups and code reorganization.

Results
=======
TODO: Will post benchmark numbers as a reply to this patchset soon.
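To make the record layout and overhead figures above concrete, here is a small illustrative sketch (plain Python, not kernel code; the helper and constant names are invented for this example and do not appear in the patchset) that packs/unpacks the default-mode 1-byte record and checks the memory-overhead arithmetic:

```python
# Illustrative model of the default-mode 1-byte hotness record described
# above; names are invented here, not taken from the pghot patches.

FREQ_MASK = 0x03                 # bits 0-1: frequency (4 access samples)
TIME_SHIFT = 2
TIME_MASK = 0x1f << TIME_SHIFT   # bits 2-6: bucketed time
READY_BIT = 1 << 7               # bit 7 (MSB): migration ready bit

def pack_default(freq, time_bucket, ready):
    rec = freq & FREQ_MASK
    rec |= (time_bucket << TIME_SHIFT) & TIME_MASK
    if ready:
        rec |= READY_BIT
    return rec

def unpack_default(rec):
    return (rec & FREQ_MASK,
            (rec & TIME_MASK) >> TIME_SHIFT,
            bool(rec & READY_BIT))

# Memory overhead: one record per lower-tier PFN (4K pages assumed).
TIB = 1 << 40
PAGE_SIZE = 4096
default_overhead = (TIB // PAGE_SIZE) * 1   # 1 byte/PFN -> 256MB per 1TB
precise_overhead = (TIB // PAGE_SIZE) * 4   # 4 bytes/PFN -> 1GB per 1TB

assert default_overhead == 256 << 20
assert precise_overhead == 1 << 30
assert unpack_default(pack_default(3, 31, True)) == (3, 31, True)
```

The overhead assertions confirm the numbers quoted above: 1TB / 4K pages = 2^28 PFNs, so 256MB at 1 byte each and 1GB at 4 bytes each.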
This v5 patchset applies on top of upstream commit 4941a17751c9 and can be fetched from:
https://github.com/AMDESE/linux-mm/tree/bharata/pghot-rfcv5

v4: https://lore.kernel.org/linux-mm/20251206101423.5004-1-bharata@amd.com/
v3: https://lore.kernel.org/linux-mm/20251110052343.208768-1-bharata@amd.com/
v2: https://lore.kernel.org/linux-mm/20250910144653.212066-1-bharata@amd.com/
v1: https://lore.kernel.org/linux-mm/20250814134826.154003-1-bharata@amd.com/
v0: https://lore.kernel.org/linux-mm/20250306054532.221138-1-bharata@amd.com/

Bharata B Rao (7):
  mm: migrate: Allow misplaced migration without VMA
  mm: Hot page tracking and promotion
  mm: pghot: Precision mode for pghot
  mm: sched: move NUMA balancing tiering promotion to pghot
  x86: ibs: In-kernel IBS driver for memory access profiling
  x86: ibs: Enable IBS profiling for memory accesses
  mm: pghot: Add folio_mark_accessed() as hotness source

Gregory Price (1):
  migrate: Add migrate_misplaced_folios_batch()

Kinsey Ho (2):
  mm: mglru: generalize page table walk
  mm: klruscand: use mglru scanning for page promotion

 Documentation/admin-guide/mm/pghot.txt |  89 +++++
 arch/x86/events/amd/ibs.c              |  10 +
 arch/x86/include/asm/entry-common.h    |   3 +
 arch/x86/include/asm/hardirq.h         |   2 +
 arch/x86/include/asm/msr-index.h       |  16 +
 arch/x86/mm/Makefile                   |   1 +
 arch/x86/mm/ibs.c                      | 349 +++++++++++++++++
 include/linux/migrate.h                |   6 +
 include/linux/mmzone.h                 |  26 ++
 include/linux/pghot.h                  | 142 +++++++
 include/linux/vm_event_item.h          |  26 ++
 kernel/sched/debug.c                   |   1 -
 kernel/sched/fair.c                    | 152 +-------
 mm/Kconfig                             |  46 +++
 mm/Makefile                            |   7 +
 mm/huge_memory.c                       |  26 +-
 mm/internal.h                          |   4 +
 mm/klruscand.c                         | 110 ++++++
 mm/memory.c                            |  31 +-
 mm/migrate.c                           |  41 +-
 mm/mm_init.c                           |  10 +
 mm/pghot-default.c                     |  73 ++++
 mm/pghot-precise.c                     |  70 ++++
 mm/pghot-tunables.c                    | 196 ++++++++++
 mm/pghot.c                             | 505 +++++++++++++++++++++++++
 mm/swap.c                              |   8 +
 mm/vmscan.c                            | 181 ++++++---
 mm/vmstat.c                            |  26 ++
 28 files changed, 1917 insertions(+), 240 deletions(-)
 create mode 100644 Documentation/admin-guide/mm/pghot.txt
 create mode 100644 arch/x86/mm/ibs.c
 create mode 100644 include/linux/pghot.h
 create mode 100644 mm/klruscand.c
 create mode 100644 mm/pghot-default.c
 create mode 100644 mm/pghot-precise.c
 create mode 100644 mm/pghot-tunables.c
 create mode 100644 mm/pghot.c

--
2.34.1
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On 11-Feb-26 9:00 PM, Bharata B Rao wrote:

These numbers are from a scenario where demotion is present:

=============================================
Over-committed scenario, promotion + demotion
=============================================
Command: mpirun -n 128 --bind-to core --map-by core /home/bharata/benchmarks/graph500/src/graph500_reference_bfs 30 16

The scale factor of 30 results in around 400GB of memory being provisioned, causing the data to spill over to the CXL node. No explicit migration of data is done in this case, unlike the previous case.

=====================================================================================
                         Base         Base         pghot-default  pghot-precise
                         NUMAB0       NUMAB2       NUMAB2         NUMAB2
=====================================================================================
harmonic_mean_TEPS       9.28713e+08  7.90431e+08  7.32193e+08    7.81051e+08
mean_time                18.4984      21.7346      23.4634        21.9956
median_TEPS              9.25707e+08  7.86684e+08  7.27053e+08    7.82823e+08
max_TEPS                 9.57632e+08  8.4758e+08   8.22172e+08    7.9889e+08
pgpromote_success        0            22846743     22807167       25994988
pgpromote_candidate      0            24628924     29436044       27029173
pgpromote_candidate_nrl  0            140921       220            38387
pgdemote_kswapd          0            41523110     45121134       50042594
numa_pte_updates         0            121904763    71503891       68779424
numa_hint_faults         0            81708126     29583391       27176332
=====================================================================================

- In the base case, the benchmark suffers when promotion and demotion are enabled (the NUMAB2 case).
- The same behaviour is seen with both modes of pghot.
- Though the overall benchmark numbers remain more or less the same in the base and pghot NUMAB2 cases, the number of PTE updates and hint faults is seen to spike during some runs. We are yet to understand the exact reason for this.
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Thu, 12 Feb 2026 21:45:40 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On Thu, Jan 29, 2026 at 08:10:33PM +0530, Bharata B Rao wrote:

In the future, can you add a base-commit: for the series? It makes it easier to automate pulling it in for testing, backports, etc.

~Gregory
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Fri, 13 Feb 2026 09:56:11 -0500", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On 13-Feb-26 8:26 PM, Gregory Price wrote:

Good suggestion, will do, thanks. BTW, this series applies on f0b9d8eb98df.

Latest github branch:
https://github.com/AMDESE/linux-mm/tree/bharata/pghot-rfcv6-pre

Regards,
Bharata.
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Mon, 16 Feb 2026 08:30:21 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On 29-Jan-26 8:10 PM, Bharata B Rao wrote:

Here are some numbers from the NAS Parallel Benchmark (NPB) with the BT application:

Test system details
-------------------
3 node AMD Zen5 system with 2 regular NUMA nodes (0, 1) and a CXL node (2)

$ numactl -H
available: 3 nodes (0-2)
node 0 cpus: 0-95,192-287
node 0 size: 128460 MB
node 1 cpus: 96-191,288-383
node 1 size: 128893 MB
node 2 cpus:
node 2 size: 257993 MB
node distances:
node   0   1   2
  0:  10  32  50
  1:  32  10  60
  2: 255 255  10

Hotness sources
---------------
NUMAB0 - Without NUMA balancing in the base case, and with no source enabled in the pghot case. No migrations occur.
NUMAB2 - Existing hot page promotion for the base case, and use of hint faults as the source in the pghot case. Both promotion and demotion are enabled in this case. pghot by default promotes after two accesses, but for the NUMAB2 source, promotion is done after one access to match the base behaviour. (/sys/kernel/debug/pghot/freq_threshold=1)

NAS-BT details
--------------
Command: mpirun -np 16 /usr/bin/numactl --cpunodebind=0,1 NPB3.4.4/NPB3.4-MPI/bin/bt.F.x

While class D uses around 24G of memory (which is too little to show the benefit of promotion), class E results in around 368G of memory, which overflows my toptier. Hence I wanted something in between these classes, so I modified class F to a problem size of 768, which results in around 160GB of memory.

After the memory consumption stabilizes, all the rank PIDs are paused and their memory is moved to the CXL node using the migratepages command. This simulates the situation of memory residing on a lower-tier node being accessed by BT processes, leading to promotion.
Time in seconds - Lower is better
Mop/s total - Higher is better

=====================================================================================
                         Base         Base         pghot-default  pghot-precise
                         NUMAB0       NUMAB2       NUMAB2         NUMAB2
=====================================================================================
Time in seconds          7349.86      4422.50      6219.71        4113.56
Mop/s total              53247.66     88493.630    62923.030      95139.810
pgpromote_success        0            42181834     248503390      41955718
pgpromote_candidate      0            0            577086192      0
pgpromote_candidate_nrl  0            42181834     29410329       41956171
pgdemote_kswapd          0            0            216489010      0
numa_pte_updates         0            42252749     607470975      42037882
numa_hint_faults         0            42183772     606540729      41968150
=====================================================================================

- In the base case, the benchmark numbers improve significantly due to hot page promotion.
- Though the benchmark runs for hundreds of minutes, the pages get promoted within the first few minutes.
- pghot-precise is able to match the base case numbers.
- The benchmark suffers in the pghot-default case due to promotion being limited to the default NID (0) only. This leads to excessive PTE updates, hint faults, and demotion and promotion churn.
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Mon, 23 Feb 2026 19:57:39 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On Mon, Feb 23, 2026 at 07:57:39PM +0530, Bharata B Rao wrote:

Wow, this really seems to justify the extra memory usage.

Is it possible for you to change pghot-default to move the page to a random (or round-robin) node on the top tier instead of NID(0) by default? At least then pghot-default would be correct 1/N of the time (in theory). I'd be curious to see how close it gets to NUMAB2 with that.

~Gregory
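The round-robin idea above could be sketched roughly as follows. This is a hypothetical illustration only: the function name, the explicit state argument, and the caller-supplied node list are all invented here and do not correspond to anything in the patchset:

```c
#include <assert.h>

/*
 * Hypothetical sketch: rotate the promotion target across the
 * available top-tier nodes instead of always using NID(0).
 * A real implementation would derive the node list from the
 * memory-tiering topology rather than take it as an argument.
 */
static int pghot_next_target_nid(const int *toptier_nids, int nr_toptier,
				 int *rr_state)
{
	int nid = toptier_nids[*rr_state % nr_toptier];

	*rr_state += 1;	/* advance the round-robin cursor */
	return nid;
}
```

With two top-tier nodes {0, 1}, successive calls yield 0, 1, 0, 1, ... so on average half of the promotions would pick the right node even without tracking the accessing NID.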
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Mon, 23 Feb 2026 10:02:30 -0500", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi,

This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN.

This patchset introduces a new subsystem for hot page tracking and promotion (pghot) with the following goals:

- Unify hot page detection from multiple sources such as hint faults, page table scans and hardware hints (AMD IBS).
- Decouple detection from migration.
- Centralize promotion logic in per-lower-tier-node kmigrated kernel threads.
- Move the promotion rate-limiting and related logic used by numa_balancing=2 (the current NUMA balancing based promotion) from the scheduler to pghot for broader reuse.

Currently, multiple kernel subsystems detect page accesses independently. This patchset consolidates accesses from these mechanisms by providing:

- A common API for reporting page accesses.
- Shared infrastructure for tracking hotness at PFN granularity.
- Per-lower-tier-node kernel threads for promoting pages.

Here is a brief summary of how this subsystem works:

- Tracks frequency and last access time. In precision mode, the accessing NUMA node ID (NID) is additionally tracked for each recorded access.
- These hotness parameters are maintained in a per-PFN hotness record within the existing mem_section data structure.
- In default mode, one byte (u8) is used for the hotness record. 5 bits store the time, and a bucketing scheme represents access times up to 4s with HZ=1000. The default top-tier NID (0) is used as the promotion target, which can be changed via a debugfs tunable.
- In precision mode, 4 bytes (u32) are used for each hotness record. 14 bits store the time, which can represent around 16s with HZ=1000.
- Classifies pages as hot based on configurable thresholds.
- Pages classified as hot are marked as ready for migration using the ready bit. Both modes use the MSB of the hotness record as the ready bit.
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On 23-Feb-26 8:32 PM, Gregory Price wrote:

For pghot-default, with target_nid alternating between the available top-tier nodes 0 and 1, the numbers catch up with pghot-precise and the base NUMAB2 case, as seen below:

================================
Time in seconds           4337.98
Mop/s total              90217.86
pgpromote_success        42170085
pgpromote_candidate             0
pgpromote_candidate_nrl  42171963
pgdemote_kswapd                 0
numa_pte_updates         42338538
numa_hint_faults         42185662
================================

Regards,
Bharata.
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Tue, 24 Feb 2026 17:25:13 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On Tue, Feb 24, 2026 at 05:25:13PM +0530, Bharata B Rao wrote:

Fascinating! Thank you for the quick follow-up. I wonder if this was a lucky run; it almost seems *too* perfect.

~Gregory
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Tue, 24 Feb 2026 10:30:07 -0500", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On 24-Feb-26 9:00 PM, Gregory Price wrote:

It consistently performs that way. Here are the numbers from another run:

================================
Time in seconds           4329.22
Mop/s total              90400.27
pgpromote_success        41967282
pgpromote_candidate             0
pgpromote_candidate_nrl  41968339
pgdemote_kswapd                 0
numa_pte_updates         42253854
numa_hint_faults         42019449
================================

grep -E "pgpromote|pgdemote" /sys/devices/system/node/node0/vmstat
pgpromote_success 20996597
pgpromote_candidate 0
pgpromote_candidate_nrl 41968339 (*)
pgdemote_kswapd 0
pgdemote_direct 0
pgdemote_khugepaged 0
pgdemote_proactive 0

grep -E "pgpromote|pgdemote" /sys/devices/system/node/node1/vmstat
pgpromote_success 20970685
pgpromote_candidate 0
pgpromote_candidate_nrl 0
pgdemote_kswapd 0
pgdemote_direct 0
pgdemote_khugepaged 0
pgdemote_proactive 0

(*) The round-robin between nodes 0 and 1 happens after this metric is attributed to the original default target_nid. Hence the nrl metric gets populated for node 0 only.
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Wed, 25 Feb 2026 10:05:58 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On Thu, 29 Jan 2026 20:10:35 +0530 Bharata B Rao <bharata@amd.com> wrote:
[...snip...]

Hello Bharata,

I hope you are doing well! Thank you for the series. I saw the numbers and they look great. I'm hoping to do some more testing myself as well :-) I'm also going through the series!

The single-folio case, migrate_misplaced_folio, has a guard here to check that the function performs not just a migration but a promotion. Specifically, it checks that the folio's node is not top-tier and that the destination node is top-tier. Should that check also be included here? When this is called in kmigrated_walk_zone in the next patch, there is no check to make sure that the folios are actually on a lower tier and the destination is on a higher tier. Maybe I'm missing something? But it wasn't entirely obvious to me that the migration is always a promotion.

I also want to note that we're skipping the count_memcg_events, which I understand is much harder to do here because each folio might belong to a different memcg. Ying also noted this in his reply to v1 [1], but I don't think it ever got addressed.

Anyways, thank you! I hope you have a great day!
Joshua

[1] https://lore.kernel.org/linux-mm/87a541e51s.fsf@DESKTOP-5N7EMDA/
{ "author": "Joshua Hahn <joshua.hahnjy@gmail.com>", "date": "Thu, 26 Feb 2026 12:40:59 -0800", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotion (pghot) with the following goals: - Unify hot page detection from multiple sources like hint faults, page table scans, hardware hints (AMD IBS). - Decouple detection from migration. - Centralize promotion logic via per-lower-tier-node kmigrated kernel thread. - Move promotion rate‑limiting and related logic used by numa_balancing=2 (current NUMA balancing–based promotion) from the scheduler to pghot for broader reuse. Currently, multiple kernel subsystems detect page accesses independently. This patchset consolidates accesses from these mechanisms by providing: - A common API for reporting page accesses. - Shared infrastructure for tracking hotness at PFN granularity. - Per-lower-tier-node kernel threads for promoting pages. Here is a brief summary of how this subsystem works: - Tracks frequency and last access time. - Additionally, the accessing NUMA node ID (NID) for each recorded access is also tracked in the precision mode. - These hotness parameters are maintained in a per-PFN hotness record within the existing mem_section data structure. - In default mode, one byte (u8) is used for hotness record. 5 bits are used to store time and bucketing scheme is used to represent a total access time up to 4s with HZ=1000. Default toptier NID (0) is used as the target for promotion which can be changed via debugfs tunable. - In precision mode, 4 bytes (u32) are used for each hotness record. 14 bits are used to store time which can represent around 16s with HZ=1000. - Classifies pages as hot based on configurable thresholds. - Pages classified as hot are marked as ready for migration using the ready bit. Both modes use MSB of the hotness record as ready bit. 
- Per-lower-tier-node kmigrated threads periodically scan the PFNs of lower-tier nodes, checking for the migration-ready bit to perform batched migrations. Interval between successive scans and batching value are configurable via debugfs tunables. Memory overhead --------------- Default mode: 1 byte per lower-tier PFN. For a 1TB lower-tier memory this amounts to 256MB overhead (assuming 4K pages) Precision mode: 4 bytes per lower-tier PFN. For a 1TB of lower memory this amounts to 1G overhead. Bit layout of hotness record ---------------------------- Default mode - Bits 0-1: Frequency (2bits, 4 access samples) - Bits 2-6: Bucketed time (5bits, up to 4s with HZ=1000) - Bit 7: Migration ready bit Precision mode - Bits 0-9: Target NID (10 bits) - Bits 10-12: Frequency (3bits, 8 access samples) - Bits 13-26: Time (14bits, up to 16s with HZ=1000) - Bits 27-30: Reserved - Bit 31: Migration ready bit Integrated sources ------------------ 1. IBS - Instruction Based Sampling, hardware based sampling mechanism present on AMD CPUs. 2. klruscand - PTE‑A bit scanning built on MGLRU’s walk helpers. 3. NUMA Balancing (Tiering mode) 4. folio_mark_accessed() - Page cache access tracking (unmapped page cache pages) Changes in v5 ============= - Significant reduction in memory overhead for storing per-PFN hotness data - Two modes of operation (default and precision mode). The code which is specific to each implementation is moved to its own individual file. - Many bug fixes, code cleanups and code reorganization. Results ======= TODO: Will post benchmark nubmers as reply to this patchset soon. 
This v5 patchset applies on top of upstream commit 4941a17751c9 and can be
fetched from:

https://github.com/AMDESE/linux-mm/tree/bharata/pghot-rfcv5

v4: https://lore.kernel.org/linux-mm/20251206101423.5004-1-bharata@amd.com/
v3: https://lore.kernel.org/linux-mm/20251110052343.208768-1-bharata@amd.com/
v2: https://lore.kernel.org/linux-mm/20250910144653.212066-1-bharata@amd.com/
v1: https://lore.kernel.org/linux-mm/20250814134826.154003-1-bharata@amd.com/
v0: https://lore.kernel.org/linux-mm/20250306054532.221138-1-bharata@amd.com/

Bharata B Rao (7):
  mm: migrate: Allow misplaced migration without VMA
  mm: Hot page tracking and promotion
  mm: pghot: Precision mode for pghot
  mm: sched: move NUMA balancing tiering promotion to pghot
  x86: ibs: In-kernel IBS driver for memory access profiling
  x86: ibs: Enable IBS profiling for memory accesses
  mm: pghot: Add folio_mark_accessed() as hotness source

Gregory Price (1):
  migrate: Add migrate_misplaced_folios_batch()

Kinsey Ho (2):
  mm: mglru: generalize page table walk
  mm: klruscand: use mglru scanning for page promotion

 Documentation/admin-guide/mm/pghot.txt |  89 +++++
 arch/x86/events/amd/ibs.c              |  10 +
 arch/x86/include/asm/entry-common.h    |   3 +
 arch/x86/include/asm/hardirq.h         |   2 +
 arch/x86/include/asm/msr-index.h       |  16 +
 arch/x86/mm/Makefile                   |   1 +
 arch/x86/mm/ibs.c                      | 349 +++++++++++++++++
 include/linux/migrate.h                |   6 +
 include/linux/mmzone.h                 |  26 ++
 include/linux/pghot.h                  | 142 +++++++
 include/linux/vm_event_item.h          |  26 ++
 kernel/sched/debug.c                   |   1 -
 kernel/sched/fair.c                    | 152 +-------
 mm/Kconfig                             |  46 +++
 mm/Makefile                            |   7 +
 mm/huge_memory.c                       |  26 +-
 mm/internal.h                          |   4 +
 mm/klruscand.c                         | 110 ++++++
 mm/memory.c                            |  31 +-
 mm/migrate.c                           |  41 +-
 mm/mm_init.c                           |  10 +
 mm/pghot-default.c                     |  73 ++++
 mm/pghot-precise.c                     |  70 ++++
 mm/pghot-tunables.c                    | 196 ++++++++++
 mm/pghot.c                             | 505 +++++++++++++++++++++++++
 mm/swap.c                              |   8 +
 mm/vmscan.c                            | 181 ++++++---
 mm/vmstat.c                            |  26 ++
 28 files changed, 1917 insertions(+), 240 deletions(-)
 create mode 100644 Documentation/admin-guide/mm/pghot.txt
 create mode 100644 arch/x86/mm/ibs.c
 create mode 100644 include/linux/pghot.h
 create mode 100644 mm/klruscand.c
 create mode 100644 mm/pghot-default.c
 create mode 100644 mm/pghot-precise.c
 create mode 100644 mm/pghot-tunables.c
 create mode 100644 mm/pghot.c
--
2.34.1
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On 27-Feb-26 2:10 AM, Joshua Hahn wrote:

Thanks Joshua for looking at the patchset and for your testing offer!

Ideally yes, but right now the batch variant gets called only for the promotion case. Firstly, hotness is tracked only for lower-tier pages; pghot_record_access() ensures this. Next, there is one kmigrated thread for each lower-tier node and it looks only at its own PFNs. This ensures that only lower-tier PFNs are considered for promotion.

Ying's suggestion about unifying the single and batch versions of the misplaced migration routines is on my TODO list. memcg accounting looks harder; I will give it a try.

Regards,
Bharata.
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Fri, 27 Feb 2026 20:11:22 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
The aspeed video driver (compatible with ast2400, ast2500 and ast2600) now needs the reset DT handle specified, otherwise it will fail to load:

[    0.000000] OF: reserved mem: initialized node video, compatible id shared-dma-pool
[    0.000000] OF: reserved mem: 0xbb000000..0xbeffffff (65536 KiB) map reusable video
[    0.377039] videodev: Linux video capture interface: v2.00
[    4.809494] aspeed-video 1e700000.video: irq 57
[    4.809977] aspeed-video 1e700000.video: Unable to get reset
[    4.810341] aspeed-video 1e700000.video: probe with driver aspeed-video failed with error -2

Fixes: e83f8dd668ea ("media: aspeed: Fix dram hang at res-change")
Signed-off-by: Haiyue Wang <haiyuewa@163.com>
---
 arch/arm/boot/dts/aspeed/aspeed-g4.dtsi   | 1 +
 arch/arm/boot/dts/aspeed/aspeed-g5.dtsi   | 1 +
 arch/arm/boot/dts/aspeed/aspeed-g6.dtsi   | 1 +
 include/dt-bindings/clock/ast2600-clock.h | 1 +
 4 files changed, 4 insertions(+)

diff --git a/arch/arm/boot/dts/aspeed/aspeed-g4.dtsi b/arch/arm/boot/dts/aspeed/aspeed-g4.dtsi
index c3d4d916c69b..1547e28d77e2 100644
--- a/arch/arm/boot/dts/aspeed/aspeed-g4.dtsi
+++ b/arch/arm/boot/dts/aspeed/aspeed-g4.dtsi
@@ -242,6 +242,7 @@ video: video@1e700000 {
 				<&syscon ASPEED_CLK_GATE_ECLK>;
 			clock-names = "vclk", "eclk";
 			interrupts = <7>;
+			resets = <&syscon ASPEED_RESET_VIDEO>;
 			status = "disabled";
 		};

diff --git a/arch/arm/boot/dts/aspeed/aspeed-g5.dtsi b/arch/arm/boot/dts/aspeed/aspeed-g5.dtsi
index 39500bdb4747..793570ca2518 100644
--- a/arch/arm/boot/dts/aspeed/aspeed-g5.dtsi
+++ b/arch/arm/boot/dts/aspeed/aspeed-g5.dtsi
@@ -296,6 +296,7 @@ video: video@1e700000 {
 				<&syscon ASPEED_CLK_GATE_ECLK>;
 			clock-names = "vclk", "eclk";
 			interrupts = <7>;
+			resets = <&syscon ASPEED_RESET_VIDEO>;
 			status = "disabled";
 		};

diff --git a/arch/arm/boot/dts/aspeed/aspeed-g6.dtsi b/arch/arm/boot/dts/aspeed/aspeed-g6.dtsi
index 189bc3bbb47c..3adf48987a17 100644
--- a/arch/arm/boot/dts/aspeed/aspeed-g6.dtsi
+++ b/arch/arm/boot/dts/aspeed/aspeed-g6.dtsi
@@ -428,6 +428,7 @@ video: video@1e700000 {
 				<&syscon ASPEED_CLK_GATE_ECLK>;
 			clock-names = "vclk", "eclk";
 			interrupts = <GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>;
+			resets = <&syscon ASPEED_RESET_VIDEO>;
 			status = "disabled";
 		};

diff --git a/include/dt-bindings/clock/ast2600-clock.h b/include/dt-bindings/clock/ast2600-clock.h
index f60fff261130..7b9b80c38a8b 100644
--- a/include/dt-bindings/clock/ast2600-clock.h
+++ b/include/dt-bindings/clock/ast2600-clock.h
@@ -124,6 +124,7 @@
 #define ASPEED_RESET_PCIE_RC_OEN	18
 #define ASPEED_RESET_MAC2		12
 #define ASPEED_RESET_MAC1		11
+#define ASPEED_RESET_VIDEO		6
 #define ASPEED_RESET_PCI_DP		5
 #define ASPEED_RESET_HACE		4
 #define ASPEED_RESET_AHB		1
--
2.53.0
null
null
null
[PATCH v1] media: aspeed: Fix driver probe failure
On 27/02/2026 13:38, Haiyue Wang wrote: Please run scripts/checkpatch.pl on the patches and fix reported warnings. After that, run also 'scripts/checkpatch.pl --strict' on the patches and (probably) fix more warnings. Some warnings can be ignored, especially from --strict run, but the code here looks like it needs a fix. Feel free to get in touch if the warning is not clear. Please use subject prefixes matching the subsystem. You can get them for example with `git log --oneline -- DIRECTORY_OR_FILE` on the directory your patch is touching. For bindings, the preferred subjects are explained here: https://www.kernel.org/doc/html/latest/devicetree/bindings/submitting-patches.html#i-for-patch-submitters Best regards, Krzysztof
{ "author": "Krzysztof Kozlowski <krzk@kernel.org>", "date": "Fri, 27 Feb 2026 13:59:54 +0100", "is_openbsd": false, "thread_id": "54b36faa-62b3-4561-bfc9-0c507d9e148e@kernel.org.mbox.gz" }
lkml_critique
lkml
The aspeed video driver (compatible with ast2400, ast2500 and ast2600) now needs the reset DT handle specified, otherwise it will fail to load:

[    0.000000] OF: reserved mem: initialized node video, compatible id shared-dma-pool
[    0.000000] OF: reserved mem: 0xbb000000..0xbeffffff (65536 KiB) map reusable video
[    0.377039] videodev: Linux video capture interface: v2.00
[    4.809494] aspeed-video 1e700000.video: irq 57
[    4.809977] aspeed-video 1e700000.video: Unable to get reset
[    4.810341] aspeed-video 1e700000.video: probe with driver aspeed-video failed with error -2

Fixes: e83f8dd668ea ("media: aspeed: Fix dram hang at res-change")
Signed-off-by: Haiyue Wang <haiyuewa@163.com>
---
 arch/arm/boot/dts/aspeed/aspeed-g4.dtsi   | 1 +
 arch/arm/boot/dts/aspeed/aspeed-g5.dtsi   | 1 +
 arch/arm/boot/dts/aspeed/aspeed-g6.dtsi   | 1 +
 include/dt-bindings/clock/ast2600-clock.h | 1 +
 4 files changed, 4 insertions(+)

diff --git a/arch/arm/boot/dts/aspeed/aspeed-g4.dtsi b/arch/arm/boot/dts/aspeed/aspeed-g4.dtsi
index c3d4d916c69b..1547e28d77e2 100644
--- a/arch/arm/boot/dts/aspeed/aspeed-g4.dtsi
+++ b/arch/arm/boot/dts/aspeed/aspeed-g4.dtsi
@@ -242,6 +242,7 @@ video: video@1e700000 {
 				<&syscon ASPEED_CLK_GATE_ECLK>;
 			clock-names = "vclk", "eclk";
 			interrupts = <7>;
+			resets = <&syscon ASPEED_RESET_VIDEO>;
 			status = "disabled";
 		};

diff --git a/arch/arm/boot/dts/aspeed/aspeed-g5.dtsi b/arch/arm/boot/dts/aspeed/aspeed-g5.dtsi
index 39500bdb4747..793570ca2518 100644
--- a/arch/arm/boot/dts/aspeed/aspeed-g5.dtsi
+++ b/arch/arm/boot/dts/aspeed/aspeed-g5.dtsi
@@ -296,6 +296,7 @@ video: video@1e700000 {
 				<&syscon ASPEED_CLK_GATE_ECLK>;
 			clock-names = "vclk", "eclk";
 			interrupts = <7>;
+			resets = <&syscon ASPEED_RESET_VIDEO>;
 			status = "disabled";
 		};

diff --git a/arch/arm/boot/dts/aspeed/aspeed-g6.dtsi b/arch/arm/boot/dts/aspeed/aspeed-g6.dtsi
index 189bc3bbb47c..3adf48987a17 100644
--- a/arch/arm/boot/dts/aspeed/aspeed-g6.dtsi
+++ b/arch/arm/boot/dts/aspeed/aspeed-g6.dtsi
@@ -428,6 +428,7 @@ video: video@1e700000 {
 				<&syscon ASPEED_CLK_GATE_ECLK>;
 			clock-names = "vclk", "eclk";
 			interrupts = <GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>;
+			resets = <&syscon ASPEED_RESET_VIDEO>;
 			status = "disabled";
 		};

diff --git a/include/dt-bindings/clock/ast2600-clock.h b/include/dt-bindings/clock/ast2600-clock.h
index f60fff261130..7b9b80c38a8b 100644
--- a/include/dt-bindings/clock/ast2600-clock.h
+++ b/include/dt-bindings/clock/ast2600-clock.h
@@ -124,6 +124,7 @@
 #define ASPEED_RESET_PCIE_RC_OEN	18
 #define ASPEED_RESET_MAC2		12
 #define ASPEED_RESET_MAC1		11
+#define ASPEED_RESET_VIDEO		6
 #define ASPEED_RESET_PCI_DP		5
 #define ASPEED_RESET_HACE		4
 #define ASPEED_RESET_AHB		1
--
2.53.0
null
null
null
[PATCH v1] media: aspeed: Fix driver probe failure
On 2/27/2026 8:59 PM, Krzysztof Kozlowski wrote:

Separated into two patches in v2; please help to review.
{ "author": "Haiyue Wang <haiyuewa@163.com>", "date": "Fri, 27 Feb 2026 23:18:13 +0800", "is_openbsd": false, "thread_id": "54b36faa-62b3-4561-bfc9-0c507d9e148e@kernel.org.mbox.gz" }
lkml_critique
lkml
The aspeed video driver (compatible with ast2400, ast2500 and ast2600) now needs the reset DT handle specified, otherwise it will fail to load:

[    0.000000] OF: reserved mem: initialized node video, compatible id shared-dma-pool
[    0.000000] OF: reserved mem: 0xbb000000..0xbeffffff (65536 KiB) map reusable video
[    0.377039] videodev: Linux video capture interface: v2.00
[    4.809494] aspeed-video 1e700000.video: irq 57
[    4.809977] aspeed-video 1e700000.video: Unable to get reset
[    4.810341] aspeed-video 1e700000.video: probe with driver aspeed-video failed with error -2

Fixes: e83f8dd668ea ("media: aspeed: Fix dram hang at res-change")
Signed-off-by: Haiyue Wang <haiyuewa@163.com>
---
 arch/arm/boot/dts/aspeed/aspeed-g4.dtsi   | 1 +
 arch/arm/boot/dts/aspeed/aspeed-g5.dtsi   | 1 +
 arch/arm/boot/dts/aspeed/aspeed-g6.dtsi   | 1 +
 include/dt-bindings/clock/ast2600-clock.h | 1 +
 4 files changed, 4 insertions(+)

diff --git a/arch/arm/boot/dts/aspeed/aspeed-g4.dtsi b/arch/arm/boot/dts/aspeed/aspeed-g4.dtsi
index c3d4d916c69b..1547e28d77e2 100644
--- a/arch/arm/boot/dts/aspeed/aspeed-g4.dtsi
+++ b/arch/arm/boot/dts/aspeed/aspeed-g4.dtsi
@@ -242,6 +242,7 @@ video: video@1e700000 {
 				<&syscon ASPEED_CLK_GATE_ECLK>;
 			clock-names = "vclk", "eclk";
 			interrupts = <7>;
+			resets = <&syscon ASPEED_RESET_VIDEO>;
 			status = "disabled";
 		};

diff --git a/arch/arm/boot/dts/aspeed/aspeed-g5.dtsi b/arch/arm/boot/dts/aspeed/aspeed-g5.dtsi
index 39500bdb4747..793570ca2518 100644
--- a/arch/arm/boot/dts/aspeed/aspeed-g5.dtsi
+++ b/arch/arm/boot/dts/aspeed/aspeed-g5.dtsi
@@ -296,6 +296,7 @@ video: video@1e700000 {
 				<&syscon ASPEED_CLK_GATE_ECLK>;
 			clock-names = "vclk", "eclk";
 			interrupts = <7>;
+			resets = <&syscon ASPEED_RESET_VIDEO>;
 			status = "disabled";
 		};

diff --git a/arch/arm/boot/dts/aspeed/aspeed-g6.dtsi b/arch/arm/boot/dts/aspeed/aspeed-g6.dtsi
index 189bc3bbb47c..3adf48987a17 100644
--- a/arch/arm/boot/dts/aspeed/aspeed-g6.dtsi
+++ b/arch/arm/boot/dts/aspeed/aspeed-g6.dtsi
@@ -428,6 +428,7 @@ video: video@1e700000 {
 				<&syscon ASPEED_CLK_GATE_ECLK>;
 			clock-names = "vclk", "eclk";
 			interrupts = <GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>;
+			resets = <&syscon ASPEED_RESET_VIDEO>;
 			status = "disabled";
 		};

diff --git a/include/dt-bindings/clock/ast2600-clock.h b/include/dt-bindings/clock/ast2600-clock.h
index f60fff261130..7b9b80c38a8b 100644
--- a/include/dt-bindings/clock/ast2600-clock.h
+++ b/include/dt-bindings/clock/ast2600-clock.h
@@ -124,6 +124,7 @@
 #define ASPEED_RESET_PCIE_RC_OEN	18
 #define ASPEED_RESET_MAC2		12
 #define ASPEED_RESET_MAC1		11
+#define ASPEED_RESET_VIDEO		6
 #define ASPEED_RESET_PCI_DP		5
 #define ASPEED_RESET_HACE		4
 #define ASPEED_RESET_AHB		1
--
2.53.0
null
null
null
[PATCH v1] media: aspeed: Fix driver probe failure
On 27/02/2026 16:18, Haiyue Wang wrote: No, please wait. One posting per day. We have enough of other patches to review. Best regards, Krzysztof
{ "author": "Krzysztof Kozlowski <krzk@kernel.org>", "date": "Fri, 27 Feb 2026 16:35:20 +0100", "is_openbsd": false, "thread_id": "54b36faa-62b3-4561-bfc9-0c507d9e148e@kernel.org.mbox.gz" }
lkml_critique
lkml
CONFIG_NODES_SHIFT (which influences MAX_NUMNODES) is often configured
generously by distros while the actual number of possible NUMA nodes on
most systems is quite conservative. Instead of reserving MAX_NUMNODES
worth of space for futex_queues, dynamically allocate it based on
"nr_node_ids" at the time of futex_init().

"nr_node_ids" at the time of futex_init() is cached as "nr_futex_queues"
to compensate for the extra dereference necessary to access the elements
of futex_queues, which now ends up in a different cacheline.

Running 5 runs of perf bench futex showed no measurable impact for any
variant on a dual socket 3rd generation AMD EPYC system (2 x 64C/128T):

  variant              locking/futex base   + patch      %diff
  futex/hash           1220783.2            1333296.2    (9%)
  futex/wake           0.71186              0.72584      (2%)
  futex/wake-parallel  0.00624              0.00664      (6%)
  futex/requeue        0.25088              0.26102      (4%)
  futex/lock-pi        57.6                 57.8         (0%)

Note: futex/hash had noticeable run-to-run variance on the test machine.

"nr_node_ids" can rarely be larger than num_possible_nodes() but the
additional space allows for simpler handling of the node index in the
presence of a sparse node_possible_map.

Reported-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
Sebastian,

Does this work for your concerns with the large "MAX_NUMNODES" values on
most distros? It does put the "queues" into a separate cacheline from the
__futex_data. The other option is to dynamically allocate the entire
__futex_data as:

  struct {
	unsigned long hashmask;
	unsigned int hashshift;
	unsigned int nr_queues;
	struct futex_hash_bucket *queues[] __counted_by(nr_queues);
  } *__futex_data __ro_after_init;

with a variable-length "queues" at the end if we want to ensure everything
ends up in the same cacheline, but all the __futex_data member accesses
would then be pointer dereferences, which might not be ideal.

Thoughts?
---
 kernel/futex/core.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index 125804fbb5cb..d8567c2ca72a 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -56,11 +56,13 @@ static struct {
 	unsigned long hashmask;
 	unsigned int hashshift;
-	struct futex_hash_bucket *queues[MAX_NUMNODES];
+	unsigned int nr_queues;
+	struct futex_hash_bucket **queues;
 } __futex_data __read_mostly __aligned(2*sizeof(long));

 #define futex_hashmask (__futex_data.hashmask)
 #define futex_hashshift (__futex_data.hashshift)
+#define nr_futex_queues (__futex_data.nr_queues)
 #define futex_queues (__futex_data.queues)

 struct futex_private_hash {
@@ -439,10 +441,10 @@ __futex_hash(union futex_key *key, struct futex_private_hash *fph)
 		 * NOTE: this isn't perfectly uniform, but it is fast and
 		 * handles sparse node masks.
 		 */
-		node = (hash >> futex_hashshift) % nr_node_ids;
+		node = (hash >> futex_hashshift) % nr_futex_queues;
 		if (!node_possible(node)) {
 			node = find_next_bit_wrap(node_possible_map.bits,
-						  nr_node_ids, node);
+						  nr_futex_queues, node);
 		}
 	}

@@ -1987,6 +1989,10 @@ static int __init futex_init(void)
 	size = sizeof(struct futex_hash_bucket) * hashsize;
 	order = get_order(size);

+	nr_futex_queues = nr_node_ids;
+	futex_queues = kcalloc(nr_futex_queues, sizeof(*futex_queues), GFP_KERNEL);
+	BUG_ON(!futex_queues);
+
 	for_each_node(n) {
 		struct futex_hash_bucket *table;

base-commit: c42ba5a87bdccbca11403b7ca8bad1a57b833732
--
2.34.1
null
null
null
[RFC PATCH] futex: Dynamically allocate futex_queues depending on nr_node_ids
On 2026-01-28 10:13:58 [+0000], K Prateek Nayak wrote:

With the Debian config, CONFIG_NODES_SHIFT is set to 10 as of 6.18.12+deb14 for amd64, probably due to MAXSMP.

so we are getting slightly worse?

I didn't want to do this because now we have two pointers to resolve. nr_node_ids vs nr_futex_queues should be largely the same. And I *think* the kernel image is mapped interleaved while the kcalloc() is from the current node (mostly #1).

Having the huge array does not create any runtime overhead; it is just that we allocate 8KiB of memory here while 32 bytes for the average 4 nodes should be just fine. At least my assumption is that 4 nodes is the average upper limit. My initial question was: is 1024 for max-nodes something that people really use? It was introduced as of https://lore.kernel.org/all/alpine.DEB.2.00.1003101537330.30724@chino.kir.corp.google.com/ but it looks odd. It might be just one or two machines which are left :)

Here we would also have two pointers and I don't think it is worth it.

Having a statement that these machines are in the minority and not used by a wider range of people might convince Debian to lower the default. I haven't looked into other distros but MAXSMP on x86 will probably force the 10 there, too. Especially if *those* machines are used only by Google/ Amazon/ Oracle and they use their own kernel and not the Debian one. Maybe it would work to hide it behind MAXNUMA and keep the default for x86 at 6. Looking around, the range is also 1…10 on arm64 and riscv, too.

Looking into the configs I see:

| boot/config-6.18.12+deb14-arm64:CONFIG_NODES_SHIFT=4
| boot/config-6.18.12+deb14-arm64-16k:CONFIG_NODES_SHIFT=4
| boot/config-6.18.12+deb14-loong64:CONFIG_NODES_SHIFT=6
| boot/config-6.18.12+deb14-powerpc64le:CONFIG_NODES_SHIFT=8
| boot/config-6.18.12+deb14-powerpc64le-64k:CONFIG_NODES_SHIFT=8
| boot/config-6.18.12+deb14-riscv64:CONFIG_NODES_SHIFT=2

While most look sane, loong64 looks odd: an architecture this young already having 64 nodes by default. Not sure how much of this is copy/paste and how much is actual need.

Sebastian
{ "author": "Sebastian Andrzej Siewior <bigeasy@linutronix.de>", "date": "Tue, 24 Feb 2026 12:13:42 +0100", "is_openbsd": false, "thread_id": "a30c225f-d7df-49a0-8725-6d7baa69728b@amd.com.mbox.gz" }
lkml_critique
lkml
CONFIG_NODES_SHIFT (which influences MAX_NUMNODES) is often configured
generously by distros while the actual number of possible NUMA nodes on
most systems is quite conservative. Instead of reserving MAX_NUMNODES
worth of space for futex_queues, dynamically allocate it based on
"nr_node_ids" at the time of futex_init().

"nr_node_ids" at the time of futex_init() is cached as "nr_futex_queues"
to compensate for the extra dereference necessary to access the elements
of futex_queues, which now ends up in a different cacheline.

Running 5 runs of perf bench futex showed no measurable impact for any
variant on a dual socket 3rd generation AMD EPYC system (2 x 64C/128T):

  variant              locking/futex base   + patch      %diff
  futex/hash           1220783.2            1333296.2    (9%)
  futex/wake           0.71186              0.72584      (2%)
  futex/wake-parallel  0.00624              0.00664      (6%)
  futex/requeue        0.25088              0.26102      (4%)
  futex/lock-pi        57.6                 57.8         (0%)

Note: futex/hash had noticeable run-to-run variance on the test machine.

"nr_node_ids" can rarely be larger than num_possible_nodes() but the
additional space allows for simpler handling of the node index in the
presence of a sparse node_possible_map.

Reported-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
Sebastian,

Does this work for your concerns with the large "MAX_NUMNODES" values on
most distros? It does put the "queues" into a separate cacheline from the
__futex_data. The other option is to dynamically allocate the entire
__futex_data as:

  struct {
	unsigned long hashmask;
	unsigned int hashshift;
	unsigned int nr_queues;
	struct futex_hash_bucket *queues[] __counted_by(nr_queues);
  } *__futex_data __ro_after_init;

with a variable-length "queues" at the end if we want to ensure everything
ends up in the same cacheline, but all the __futex_data member accesses
would then be pointer dereferences, which might not be ideal.

Thoughts?
---
 kernel/futex/core.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index 125804fbb5cb..d8567c2ca72a 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -56,11 +56,13 @@ static struct {
 	unsigned long hashmask;
 	unsigned int hashshift;
-	struct futex_hash_bucket *queues[MAX_NUMNODES];
+	unsigned int nr_queues;
+	struct futex_hash_bucket **queues;
 } __futex_data __read_mostly __aligned(2*sizeof(long));

 #define futex_hashmask (__futex_data.hashmask)
 #define futex_hashshift (__futex_data.hashshift)
+#define nr_futex_queues (__futex_data.nr_queues)
 #define futex_queues (__futex_data.queues)

 struct futex_private_hash {
@@ -439,10 +441,10 @@ __futex_hash(union futex_key *key, struct futex_private_hash *fph)
 		 * NOTE: this isn't perfectly uniform, but it is fast and
 		 * handles sparse node masks.
 		 */
-		node = (hash >> futex_hashshift) % nr_node_ids;
+		node = (hash >> futex_hashshift) % nr_futex_queues;
 		if (!node_possible(node)) {
 			node = find_next_bit_wrap(node_possible_map.bits,
-						  nr_node_ids, node);
+						  nr_futex_queues, node);
 		}
 	}

@@ -1987,6 +1989,10 @@ static int __init futex_init(void)
 	size = sizeof(struct futex_hash_bucket) * hashsize;
 	order = get_order(size);

+	nr_futex_queues = nr_node_ids;
+	futex_queues = kcalloc(nr_futex_queues, sizeof(*futex_queues), GFP_KERNEL);
+	BUG_ON(!futex_queues);
+
 	for_each_node(n) {
 		struct futex_hash_bucket *table;

base-commit: c42ba5a87bdccbca11403b7ca8bad1a57b833732
--
2.34.1
null
null
null
[RFC PATCH] futex: Dynamically allocate futex_queues depending on nr_node_ids
Hello Sebastian,

On 2/24/2026 4:43 PM, Sebastian Andrzej Siewior wrote:

I have it on good faith that some EPYC users on distro kernels turn on the "L3 as NUMA" option, which currently results in 32 NUMA nodes on our largest configuration. Adding a little more margin for CXL nodes should make even CONFIG_NODES_SHIFT=6 a pretty sane default for most real-world configs. I don't think we can go beyond 10 or so CXL nodes considering the number of PCIe lanes, unless there are more creative ways to attach tiered memory that appears as a NUMA node.

I'm not sure if Intel has a similar crazy combination but NODES_SHIFT=6 can accommodate (16 sockets * SNC-3) + up to 16 CXL nodes, so it should be fine for most distro users too?

--
Thanks and Regards,
Prateek
{ "author": "K Prateek Nayak <kprateek.nayak@amd.com>", "date": "Wed, 25 Feb 2026 09:06:08 +0530", "is_openbsd": false, "thread_id": "a30c225f-d7df-49a0-8725-6d7baa69728b@amd.com.mbox.gz" }
lkml_critique
lkml
CONFIG_NODES_SHIFT (which influences MAX_NUMNODES) is often configured
generously by distros while the actual number of possible NUMA nodes on
most systems is quite conservative. Instead of reserving MAX_NUMNODES
worth of space for futex_queues, dynamically allocate it based on
"nr_node_ids" at the time of futex_init().

"nr_node_ids" at the time of futex_init() is cached as "nr_futex_queues"
to compensate for the extra dereference necessary to access the elements
of futex_queues, which now ends up in a different cacheline.

Running 5 runs of perf bench futex showed no measurable impact for any
variant on a dual socket 3rd generation AMD EPYC system (2 x 64C/128T):

  variant              locking/futex base   + patch      %diff
  futex/hash           1220783.2            1333296.2    (9%)
  futex/wake           0.71186              0.72584      (2%)
  futex/wake-parallel  0.00624              0.00664      (6%)
  futex/requeue        0.25088              0.26102      (4%)
  futex/lock-pi        57.6                 57.8         (0%)

Note: futex/hash had noticeable run-to-run variance on the test machine.

"nr_node_ids" can rarely be larger than num_possible_nodes() but the
additional space allows for simpler handling of the node index in the
presence of a sparse node_possible_map.

Reported-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
Sebastian,

Does this work for your concerns with the large "MAX_NUMNODES" values on
most distros? It does put the "queues" into a separate cacheline from the
__futex_data. The other option is to dynamically allocate the entire
__futex_data as:

  struct {
	unsigned long hashmask;
	unsigned int hashshift;
	unsigned int nr_queues;
	struct futex_hash_bucket *queues[] __counted_by(nr_queues);
  } *__futex_data __ro_after_init;

with a variable-length "queues" at the end if we want to ensure everything
ends up in the same cacheline, but all the __futex_data member accesses
would then be pointer dereferences, which might not be ideal.

Thoughts?
---
 kernel/futex/core.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index 125804fbb5cb..d8567c2ca72a 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -56,11 +56,13 @@ static struct {
 	unsigned long hashmask;
 	unsigned int hashshift;
-	struct futex_hash_bucket *queues[MAX_NUMNODES];
+	unsigned int nr_queues;
+	struct futex_hash_bucket **queues;
 } __futex_data __read_mostly __aligned(2*sizeof(long));

 #define futex_hashmask (__futex_data.hashmask)
 #define futex_hashshift (__futex_data.hashshift)
+#define nr_futex_queues (__futex_data.nr_queues)
 #define futex_queues (__futex_data.queues)

 struct futex_private_hash {
@@ -439,10 +441,10 @@ __futex_hash(union futex_key *key, struct futex_private_hash *fph)
 		 * NOTE: this isn't perfectly uniform, but it is fast and
 		 * handles sparse node masks.
 		 */
-		node = (hash >> futex_hashshift) % nr_node_ids;
+		node = (hash >> futex_hashshift) % nr_futex_queues;
 		if (!node_possible(node)) {
 			node = find_next_bit_wrap(node_possible_map.bits,
-						  nr_node_ids, node);
+						  nr_futex_queues, node);
 		}
 	}

@@ -1987,6 +1989,10 @@ static int __init futex_init(void)
 	size = sizeof(struct futex_hash_bucket) * hashsize;
 	order = get_order(size);

+	nr_futex_queues = nr_node_ids;
+	futex_queues = kcalloc(nr_futex_queues, sizeof(*futex_queues), GFP_KERNEL);
+	BUG_ON(!futex_queues);
+
 	for_each_node(n) {
 		struct futex_hash_bucket *table;

base-commit: c42ba5a87bdccbca11403b7ca8bad1a57b833732
--
2.34.1
null
null
null
[RFC PATCH] futex: Dynamically allocate futex_queues depending on nr_node_ids
On 2026-02-25 09:06:08 [+0530], K Prateek Nayak wrote:

Hi Prateek,

Okay. According to Kconfig, this is the default for X86_64. The 10 gets set by MAXSMP. This option raises NR_CPUS_DEFAULT to 8192. That might be overkill. What would be a sane value for NR_CPUS_DEFAULT? I don't have anything that exceeds 3 digits but I also don't have anything with more than 4 nodes ;)

Sebastian
{ "author": "Sebastian Andrzej Siewior <bigeasy@linutronix.de>", "date": "Wed, 25 Feb 2026 08:39:39 +0100", "is_openbsd": false, "thread_id": "a30c225f-d7df-49a0-8725-6d7baa69728b@amd.com.mbox.gz" }
lkml_critique
lkml
CONFIG_NODES_SHIFT (which influences MAX_NUMNODES) is often configured
generously by distros while the actual number of possible NUMA nodes on
most systems is quite conservative. Instead of reserving MAX_NUMNODES
worth of space for futex_queues, dynamically allocate it based on
"nr_node_ids" at the time of futex_init().

"nr_node_ids" at the time of futex_init() is cached as "nr_futex_queues"
to compensate for the extra dereference necessary to access the elements
of futex_queues, which now ends up in a different cacheline.

Running 5 runs of perf bench futex showed no measurable impact for any
variant on a dual socket 3rd generation AMD EPYC system (2 x 64C/128T):

  variant              locking/futex base   + patch      %diff
  futex/hash           1220783.2            1333296.2    (9%)
  futex/wake           0.71186              0.72584      (2%)
  futex/wake-parallel  0.00624              0.00664      (6%)
  futex/requeue        0.25088              0.26102      (4%)
  futex/lock-pi        57.6                 57.8         (0%)

Note: futex/hash had noticeable run-to-run variance on the test machine.

"nr_node_ids" can rarely be larger than num_possible_nodes() but the
additional space allows for simpler handling of the node index in the
presence of a sparse node_possible_map.

Reported-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
Sebastian,

Does this work for your concerns with the large "MAX_NUMNODES" values on
most distros? It does put the "queues" into a separate cacheline from the
__futex_data. The other option is to dynamically allocate the entire
__futex_data as:

  struct {
	unsigned long hashmask;
	unsigned int hashshift;
	unsigned int nr_queues;
	struct futex_hash_bucket *queues[] __counted_by(nr_queues);
  } *__futex_data __ro_after_init;

with a variable-length "queues" at the end if we want to ensure everything
ends up in the same cacheline, but all the __futex_data member accesses
would then be pointer dereferences, which might not be ideal.

Thoughts?
---
 kernel/futex/core.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index 125804fbb5cb..d8567c2ca72a 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -56,11 +56,13 @@ static struct {
 	unsigned long hashmask;
 	unsigned int hashshift;
-	struct futex_hash_bucket *queues[MAX_NUMNODES];
+	unsigned int nr_queues;
+	struct futex_hash_bucket **queues;
 } __futex_data __read_mostly __aligned(2*sizeof(long));

 #define futex_hashmask (__futex_data.hashmask)
 #define futex_hashshift (__futex_data.hashshift)
+#define nr_futex_queues (__futex_data.nr_queues)
 #define futex_queues (__futex_data.queues)

 struct futex_private_hash {
@@ -439,10 +441,10 @@ __futex_hash(union futex_key *key, struct futex_private_hash *fph)
 		 * NOTE: this isn't perfectly uniform, but it is fast and
 		 * handles sparse node masks.
 		 */
-		node = (hash >> futex_hashshift) % nr_node_ids;
+		node = (hash >> futex_hashshift) % nr_futex_queues;
 		if (!node_possible(node)) {
 			node = find_next_bit_wrap(node_possible_map.bits,
-						  nr_node_ids, node);
+						  nr_futex_queues, node);
 		}
 	}

@@ -1987,6 +1989,10 @@ static int __init futex_init(void)
 	size = sizeof(struct futex_hash_bucket) * hashsize;
 	order = get_order(size);

+	nr_futex_queues = nr_node_ids;
+	futex_queues = kcalloc(nr_futex_queues, sizeof(*futex_queues), GFP_KERNEL);
+	BUG_ON(!futex_queues);
+
 	for_each_node(n) {
 		struct futex_hash_bucket *table;

base-commit: c42ba5a87bdccbca11403b7ca8bad1a57b833732
--
2.34.1
null
null
null
[RFC PATCH] futex: Dynamically allocate futex_queues depending on nr_node_ids
On 2/25/2026 1:09 PM, Sebastian Andrzej Siewior wrote:

I would have thought a quarter of that would be plenty, but looking at the footnote in [1] that says "16 socket GNR system" and the fact that GNR can feature up to 256 threads per socket, that could theoretically put such systems at that NR_CPUS_DEFAULT limit - I don't know if it is practically possible.

[1] https://lore.kernel.org/lkml/aYPjOgiO_XsFWnWu@hpe.com/

Still, I doubt such a setup would practically cross more than 64 nodes.

Why was this selected as the default for MAXSMP? It came from [2] but I'm not really able to understand why, other than this line in Mike's response: "MAXSMP" represents what's really usable, so we just set it to the max of the range to test for scalability? Seems a little impractical for real-world cases, but on the flip side, if we don't set it, some bits might not get enough testing?

[2] https://lore.kernel.org/lkml/20080326014137.934171000@polaris-admin.engr.sgi.com/

And mine tops out at 32 nodes ;-)

--
Thanks and Regards,
Prateek
{ "author": "K Prateek Nayak <kprateek.nayak@amd.com>", "date": "Wed, 25 Feb 2026 14:21:33 +0530", "is_openbsd": false, "thread_id": "a30c225f-d7df-49a0-8725-6d7baa69728b@amd.com.mbox.gz" }
lkml_critique
lkml
CONFIG_NODES_SHIFT (which influences MAX_NUMNODES) is often configured generously by distros while the actual number of possible NUMA nodes on most systems is often quite conservative. Instead of reserving MAX_NUMNODES worth of space for futex_queues, dynamically allocate it based on "nr_node_ids" at the time of futex_init(). "nr_node_ids" at the time of futex_init() is cached as "nr_futex_queues" to compensate for the extra dereference necessary to access the elements of futex_queues which ends up in a different cacheline now. Running 5 runs of perf bench futex showed no measurable impact for any variants on a dual socket 3rd generation AMD EPYC system (2 x 64C/128T): variant locking/futex base + patch %diff futex/hash 1220783.2 1333296.2 (9%) futex/wake 0.71186 0.72584 (2%) futex/wake-parallel 0.00624 0.00664 (6%) futex/requeue 0.25088 0.26102 (4%) futex/lock-pi 57.6 57.8 (0%) Note: futex/hash had noticeable run to run variance on test machine. "nr_node_ids" can rarely be larger than num_possible_nodes() but the additional space allows for simpler handling of node index in presence of sparse node_possible_map. Reported-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com> --- Sebastian, Does this work for your concerns with the large "MAX_NUMNODES" values on most distros? It does put the "queues" into a separate cacheline from the __futex_data. The other option is to dynamically allocate the entire __futex_data as: struct { unsigned long hashmask; unsigned int hashshift; unsigned int nr_queues; struct futex_hash_bucket *queues[] __counted_by(nr_queues); } *__futex_data __ro_after_init; with a variable length "queues" at the end if we want to ensure everything ends up in the same cacheline but all the __futex_data member access would then be pointer dereferencing which might not be ideal. Thoughts? 
null
null
null
[RFC PATCH] futex: Dynamically allocate futex_queues depending on nr_node_ids
On 2026-02-25 14:21:33 [+0530], K Prateek Nayak wrote: Hi Prateek, I am still trying to figure out if this is practical or some drunk guys saying "you know what would be fun?" Sounds like it. What would be a sane default upper limit then? Something like 1024 CPUs? 2048? Or even more than that? I would try to use this and convince Debian to drop MAXSMP and then lower NODES_SHIFT to the default of 6. I would need a default for NR_CPUS_DEFAULT without having people complaining about missing CPUs. Maybe we could get a sane default setting in the kernel without testing limits. I will also probably compile two kernels to see how much memory this saves in total since there should be other data structures depending on max CPUs/NODEs. Sebastian
{ "author": "Sebastian Andrzej Siewior <bigeasy@linutronix.de>", "date": "Wed, 25 Feb 2026 10:22:13 +0100", "is_openbsd": false, "thread_id": "a30c225f-d7df-49a0-8725-6d7baa69728b@amd.com.mbox.gz" }
lkml_critique
lkml
null
null
null
[RFC PATCH] futex: Dynamically allocate futex_queues depending on nr_node_ids
Hey Sebastian, Sorry for the delay! On 2/25/2026 2:52 PM, Sebastian Andrzej Siewior wrote: I feel the current default for NR_CPUS can be retained as is just to be on the safer side. Turns out QEMU allows for a ridiculous amount of vCPUs per guest and I've found enough evidence of extremely large guests running oversubscribed that sometimes run distro kernels :-( *Theoretically* with SNC-3 and 16 sockets + CXL we can get close to the !MAXSMP limits for NODES_SHIFT (6) so perhaps we should drop it down a couple of notches from 10 as far as defaults are concerned to 8 - that should give us ample room for a long time in my opinion. Folks who are doing *insane* NUMA emulation can perhaps explain the use case or resort to building a kernel with a non-default NODES_SHIFT. To keep the configs as close as possible, I had to resort to selecting CONFIG_CPUMASK_OFFSTACK for !MAXSMP. Following was the bloat-o-meter output with the reduced NODES_SHIFT on kernels built with a config very close to the Ubuntu distro config: o NODES_SHIFT=8 : Total: Before=33017117, After=32109495, chg -2.75% o NODES_SHIFT=6 : Total: Before=33017117, After=31930101, chg -3.29% o NODES_SHIFT=6; NR_CPUS=4k : Total: Before=33017117, After=31196664, chg -5.51% o NODES_SHIFT=6; NR_CPUS=2k : Total: Before=33017117, After=30829862, chg -6.62% The last couple of configs add ARCH_SUPPORTS_KMAP_LOCAL_FORCE_MAP. If I remove that dependency, I don't really see any change to the bloat-o-meter results so I don't think it makes much of a difference. Runtime memory consumption differences are within the noise range for me - I really couldn't see any meaningful difference (or even a trend with multiple runs) between the extreme configs after boot. I haven't done any meaningful longer testing to spot anything. I'll let you decide what is a good trade off between space saving and future headaches :-) -- Thanks and Regards, Prateek
{ "author": "K Prateek Nayak <kprateek.nayak@amd.com>", "date": "Fri, 27 Feb 2026 14:17:31 +0530", "is_openbsd": false, "thread_id": "a30c225f-d7df-49a0-8725-6d7baa69728b@amd.com.mbox.gz" }
lkml_critique
lkml
null
null
null
[RFC PATCH] futex: Dynamically allocate futex_queues depending on nr_node_ids
On Wed, Jan 28, 2026 at 10:13:58AM +0000, K Prateek Nayak wrote: Both will result in at least one extra deref/cacheline for each futex op, no?
{ "author": "Peter Zijlstra <peterz@infradead.org>", "date": "Fri, 27 Feb 2026 15:42:03 +0100", "is_openbsd": false, "thread_id": "a30c225f-d7df-49a0-8725-6d7baa69728b@amd.com.mbox.gz" }
lkml_critique
lkml
null
null
null
[RFC PATCH] futex: Dynamically allocate futex_queues depending on nr_node_ids
Hello Peter, On 2/27/2026 8:12 PM, Peter Zijlstra wrote: Ack but I was wondering if that penalty can be offset by the fact that we no longer need to look at "nr_node_ids" in a separate cacheline? I ran futex bench enough times before posting to come to the conclusion that there isn't any noticeable regression - the numbers swung either way and I just took one set for comparison. Sebastian and I have been having a more philosophical discussion on that CONFIG_NODES_SHIFT default but I guess as far as this patch is concerned, the conclusion is we want to avoid an extra dereference in the fast-path at the cost of a little bit of extra space? -- Thanks and Regards, Prateek
{ "author": "K Prateek Nayak <kprateek.nayak@amd.com>", "date": "Fri, 27 Feb 2026 20:29:03 +0530", "is_openbsd": false, "thread_id": "a30c225f-d7df-49a0-8725-6d7baa69728b@amd.com.mbox.gz" }
lkml_critique
lkml
null
null
null
[RFC PATCH] futex: Dynamically allocate futex_queues depending on nr_node_ids
On 2026-02-27 14:17:31 [+0530], K Prateek Nayak wrote: Hi Prateek, No worries. You mean a distro kernel in an 8k-CPU guest? I do this kind of thing for testing but not with a distro kernel. Oh well. So you are saying NODES_SHIFT=8 and NR_CPUS=4k is what should be the default given "sane" upper limits as of today? I did hope for SHIFT 6 & NR=2k. Not sure this is worth fighting for. Thank you. Sebastian
{ "author": "Sebastian Andrzej Siewior <bigeasy@linutronix.de>", "date": "Fri, 27 Feb 2026 16:15:18 +0100", "is_openbsd": false, "thread_id": "a30c225f-d7df-49a0-8725-6d7baa69728b@amd.com.mbox.gz" }
lkml_critique
lkml
Hi all, After merging the hid tree, today's linux-next build (arm64 allyesconfig) failed like this: ld: drivers/hid/hid-lenovo-go-s.o:(.data+0x840): multiple definition of `rgb_enabled'; drivers/hid/hid-lenovo-go.o:(.data+0xb00): first defined here ld: drivers/hid/hid-lenovo-go-s.o:(.data+0xa80): multiple definition of `touchpad_enabled'; drivers/hid/hid-lenovo-go.o:(.data+0xd00): first defined here ld: drivers/hid/hid-lenovo-go-s.o:(.data+0x900): multiple definition of `gamepad_mode'; drivers/hid/hid-lenovo-go.o:(.data+0xb40): first defined here ld: drivers/hid/hid-lenovo-go-s.o:(.bss+0x0): multiple definition of `drvdata'; drivers/hid/hid-lenovo-go.o:(.bss+0x0): first defined here Caused by commits: b53ccf3f72653c8a843188ffa2edd4bc2443686d HID: hid-lenovo-go: Add OS Mode Toggle 1d466a1adbf40e55501d766322d665de3a822b6e HID: hid-lenovo-go: Add Calibration Settings 557d5b34d52974bf4e43c459cbf50bed5615ead4 HID: hid-lenovo-go: Add RGB LED control interface f0119d450f1d4a5cc2ef2b38c2b522f902698a38 HID: hid-lenovo-go: Add FPS Mode DPI settings a8a9ca568ce547634e80e999013ac9f123acff1d HID: hid-lenovo-go: Add Rumble and Haptic Settings b2fd12c205b5a533ba2b1c5ffad669d08d52ce12 HID: hid-lenovo-go: Add Feature Status Attributes 3bb54f568ecc35be7675eef5303a47e14aba54bc HID: hid-lenovo-go: Add Lenovo Legion Go Series HID Driver I've left them for today but will take more action on Monday, probably reverts.
null
null
null
linux-next: build failure in the hid tree
On Fri, 27 Feb 2026, Mark Brown wrote: I'll just drop the branch from for-next for now, and will let Mark and Derek look into this and send followup fixes. Thanks, -- Jiri Kosina SUSE Labs
{ "author": "Jiri Kosina <jikos@kernel.org>", "date": "Fri, 27 Feb 2026 15:50:31 +0100 (CET)", "is_openbsd": false, "thread_id": "05q1sn9q-0075-303n-5q49-707o5p208083@xreary.bet.mbox.gz" }
lkml_critique
lkml
null
null
null
linux-next: build failure in the hid tree
On Fri, 27 Feb 2026, Jiri Kosina wrote: Seems like both drivers are actually polluting a lot of the global namespace. I normally catch this using sparse, but my installation doesn't work currently because of [1], so I missed it. Derek, Mark -- you need to add a lot of 'static' all over the place :) The for-7.1/lenovo branch stays out of for-next for now, please send a fixed version and we'll put it in for-7.1/lenovo-v2. [1] https://lwn.net/Articles/1006379/ Thanks, -- Jiri Kosina SUSE Labs
{ "author": "Jiri Kosina <jikos@kernel.org>", "date": "Fri, 27 Feb 2026 16:28:21 +0100 (CET)", "is_openbsd": false, "thread_id": "05q1sn9q-0075-303n-5q49-707o5p208083@xreary.bet.mbox.gz" }
lkml_critique
lkml
The code allocates standard kernel memory to pass to the MPAM code, which expects __iomem pointers. The code is safe, because __iomem accessors should work fine on kernel-mapped memory, however it leads to sparse warnings: test_mpam_devices.c:327:42: warning: incorrect type in initializer (different address spaces) test_mpam_devices.c:327:42: expected char [noderef] __iomem *buf test_mpam_devices.c:327:42: got void * test_mpam_devices.c:342:24: warning: cast removes address space '__iomem' of expression Cast the pointers via __force to silence them. Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@oss.qualcomm.com> --- drivers/resctrl/test_mpam_devices.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/resctrl/test_mpam_devices.c b/drivers/resctrl/test_mpam_devices.c index 3e8d564a0c64..2de41b47c138 100644 --- a/drivers/resctrl/test_mpam_devices.c +++ b/drivers/resctrl/test_mpam_devices.c @@ -324,7 +324,7 @@ static void test_mpam_enable_merge_features(struct kunit *test) static void test_mpam_reset_msc_bitmap(struct kunit *test) { - char __iomem *buf = kunit_kzalloc(test, SZ_16K, GFP_KERNEL); + char __iomem *buf = (__force char __iomem *)kunit_kzalloc(test, SZ_16K, GFP_KERNEL); struct mpam_msc fake_msc = {}; u32 *test_result; @@ -339,7 +339,7 @@ static void test_mpam_reset_msc_bitmap(struct kunit *test) mutex_init(&fake_msc.part_sel_lock); mutex_lock(&fake_msc.part_sel_lock); - test_result = (u32 *)(buf + MPAMCFG_CPBM); + test_result = (__force u32 *)(buf + MPAMCFG_CPBM); mpam_reset_msc_bitmap(&fake_msc, MPAMCFG_CPBM, 0); KUNIT_EXPECT_EQ(test, test_result[0], 0); -- 2.51.0
null
null
null
[PATCH 1/2] arm_mpam: Force __iomem casts
No maintainers handling the code (i.e. subsystem maintainers) are shown by scripts/get_maintainer.pl for the MPAM drivers in drivers/resctrl/. It seems that there is no dedicated subsystem for resctrl and existing drivers went through the ARM64 port maintainers, so make that explicit to avoid patches being lost/ignored. Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@oss.qualcomm.com> --- MAINTAINERS | 2 ++ 1 file changed, 2 insertions(+) diff --git a/MAINTAINERS b/MAINTAINERS index faa914a5f34d..199058abc152 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3822,6 +3822,8 @@ S: Maintained T: git git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git F: Documentation/arch/arm64/ F: arch/arm64/ +F: drivers/resctrl/*mpam_* +F: include/linux/arm_mpam.h F: drivers/virt/coco/arm-cca-guest/ F: drivers/virt/coco/pkvm-guest/ F: tools/testing/selftests/arm64/ -- 2.51.0
{ "author": "Krzysztof Kozlowski <krzysztof.kozlowski@oss.qualcomm.com>", "date": "Mon, 16 Feb 2026 12:02:42 +0100", "is_openbsd": false, "thread_id": "18f3f73a-9bc6-4518-b3e4-730fa0793146@oss.qualcomm.com.mbox.gz" }
lkml_critique
lkml
null
null
null
[PATCH 1/2] arm_mpam: Force __iomem casts
On Mon, Feb 16, 2026 at 12:02:42PM +0100, Krzysztof Kozlowski wrote: What's wrong with the current entry? $ ./scripts/get_maintainer.pl -f drivers/resctrl/mpam_* James Morse <james.morse@arm.com> (maintainer:MPAM DRIVER) Ben Horgan <ben.horgan@arm.com> (maintainer:MPAM DRIVER) Reinette Chatre <reinette.chatre@intel.com> (reviewer:MPAM DRIVER) Fenghua Yu <fenghuay@nvidia.com> (reviewer:MPAM DRIVER) linux-kernel@vger.kernel.org (open list) -- Catalin
{ "author": "Catalin Marinas <catalin.marinas@arm.com>", "date": "Wed, 18 Feb 2026 16:23:08 +0000", "is_openbsd": false, "thread_id": "18f3f73a-9bc6-4518-b3e4-730fa0793146@oss.qualcomm.com.mbox.gz" }
lkml_critique
lkml
null
null
null
[PATCH 1/2] arm_mpam: Force __iomem casts
On 18/02/2026 17:23, Catalin Marinas wrote: I explained in the commit msg: "No maintainers handling the code (so subsystem maintainers)" It does not list the maintainers picking up patches, so if you use standard tools (like b4, patman or scripted get_maintainers), you will never appear on the To/Cc list (relying on git-fallback is wrong). Of course maybe you will still get the patches with lei/korgalore, so up to you. Best regards, Krzysztof
{ "author": "Krzysztof Kozlowski <krzysztof.kozlowski@oss.qualcomm.com>", "date": "Wed, 18 Feb 2026 17:46:40 +0100", "is_openbsd": false, "thread_id": "18f3f73a-9bc6-4518-b3e4-730fa0793146@oss.qualcomm.com.mbox.gz" }
lkml_critique
lkml
Code allocates standard kernel memory to pass to the MPAM, which expects __iomem. The code is safe, because __iomem accessors should work fine on kernel mapped memory, however leads to sparse warnings: test_mpam_devices.c:327:42: warning: incorrect type in initializer (different address spaces) test_mpam_devices.c:327:42: expected char [noderef] __iomem *buf test_mpam_devices.c:327:42: got void * test_mpam_devices.c:342:24: warning: cast removes address space '__iomem' of expression Cast the pointer to memory via __force to silence them. Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@oss.qualcomm.com> --- drivers/resctrl/test_mpam_devices.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/resctrl/test_mpam_devices.c b/drivers/resctrl/test_mpam_devices.c index 3e8d564a0c64..2de41b47c138 100644 --- a/drivers/resctrl/test_mpam_devices.c +++ b/drivers/resctrl/test_mpam_devices.c @@ -324,7 +324,7 @@ static void test_mpam_enable_merge_features(struct kunit *test) static void test_mpam_reset_msc_bitmap(struct kunit *test) { - char __iomem *buf = kunit_kzalloc(test, SZ_16K, GFP_KERNEL); + char __iomem *buf = (__force char __iomem *)kunit_kzalloc(test, SZ_16K, GFP_KERNEL); struct mpam_msc fake_msc = {}; u32 *test_result; @@ -339,7 +339,7 @@ static void test_mpam_reset_msc_bitmap(struct kunit *test) mutex_init(&fake_msc.part_sel_lock); mutex_lock(&fake_msc.part_sel_lock); - test_result = (u32 *)(buf + MPAMCFG_CPBM); + test_result = (__force u32 *)(buf + MPAMCFG_CPBM); mpam_reset_msc_bitmap(&fake_msc, MPAMCFG_CPBM, 0); KUNIT_EXPECT_EQ(test, test_result[0], 0); -- 2.51.0
null
null
null
[PATCH 1/2] arm_mpam: Force __iomem casts
On Wed, Feb 18, 2026 at 05:46:40PM +0100, Krzysztof Kozlowski wrote: Yeah, I realised what you meant after sending my reply ;). The arm64 maintainers won't proactively pick these patches up unless we are asked by the MPAM maintainers. I don't mind whether the patches go in via the arm64 or Greg's drivers tree. We just queued the first drop as it touched arm64. Let's see how it goes but I know little about MPAM, so just being on cc won't make much difference. I rely on the current MPAM maintainers to tell me what to merge. Thanks. -- Catalin
{ "author": "Catalin Marinas <catalin.marinas@arm.com>", "date": "Wed, 18 Feb 2026 17:13:45 +0000", "is_openbsd": false, "thread_id": "18f3f73a-9bc6-4518-b3e4-730fa0793146@oss.qualcomm.com.mbox.gz" }
lkml_critique
lkml
Code allocates standard kernel memory to pass to the MPAM, which expects __iomem. The code is safe, because __iomem accessors should work fine on kernel mapped memory, however leads to sparse warnings: test_mpam_devices.c:327:42: warning: incorrect type in initializer (different address spaces) test_mpam_devices.c:327:42: expected char [noderef] __iomem *buf test_mpam_devices.c:327:42: got void * test_mpam_devices.c:342:24: warning: cast removes address space '__iomem' of expression Cast the pointer to memory via __force to silence them. Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@oss.qualcomm.com> --- drivers/resctrl/test_mpam_devices.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/resctrl/test_mpam_devices.c b/drivers/resctrl/test_mpam_devices.c index 3e8d564a0c64..2de41b47c138 100644 --- a/drivers/resctrl/test_mpam_devices.c +++ b/drivers/resctrl/test_mpam_devices.c @@ -324,7 +324,7 @@ static void test_mpam_enable_merge_features(struct kunit *test) static void test_mpam_reset_msc_bitmap(struct kunit *test) { - char __iomem *buf = kunit_kzalloc(test, SZ_16K, GFP_KERNEL); + char __iomem *buf = (__force char __iomem *)kunit_kzalloc(test, SZ_16K, GFP_KERNEL); struct mpam_msc fake_msc = {}; u32 *test_result; @@ -339,7 +339,7 @@ static void test_mpam_reset_msc_bitmap(struct kunit *test) mutex_init(&fake_msc.part_sel_lock); mutex_lock(&fake_msc.part_sel_lock); - test_result = (u32 *)(buf + MPAMCFG_CPBM); + test_result = (__force u32 *)(buf + MPAMCFG_CPBM); mpam_reset_msc_bitmap(&fake_msc, MPAMCFG_CPBM, 0); KUNIT_EXPECT_EQ(test, test_result[0], 0); -- 2.51.0
null
null
null
[PATCH 1/2] arm_mpam: Force __iomem casts
On 18/02/2026 18:13, Catalin Marinas wrote: OK, there are a few subsystems (e.g. cdx) doing something similar - listing only a reviewing maintainer, who later has to poke the actual maintainer picking up patches. I find it a confusing practice that might lead to patches being lost on the mailing list (happened for example with cdx...), which is a very poor contributor experience, but I understand you might not want to maintain them (same happened for cdx...). In that case MPAM folks, please kindly review and pick up only the first patch. Or tell me what should be done... Best regards, Krzysztof
{ "author": "Krzysztof Kozlowski <krzysztof.kozlowski@oss.qualcomm.com>", "date": "Wed, 18 Feb 2026 18:21:26 +0100", "is_openbsd": false, "thread_id": "18f3f73a-9bc6-4518-b3e4-730fa0793146@oss.qualcomm.com.mbox.gz" }
lkml_critique
lkml
Code allocates standard kernel memory to pass to the MPAM, which expects __iomem. The code is safe, because __iomem accessors should work fine on kernel mapped memory, however leads to sparse warnings: test_mpam_devices.c:327:42: warning: incorrect type in initializer (different address spaces) test_mpam_devices.c:327:42: expected char [noderef] __iomem *buf test_mpam_devices.c:327:42: got void * test_mpam_devices.c:342:24: warning: cast removes address space '__iomem' of expression Cast the pointer to memory via __force to silence them. Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@oss.qualcomm.com> --- drivers/resctrl/test_mpam_devices.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/resctrl/test_mpam_devices.c b/drivers/resctrl/test_mpam_devices.c index 3e8d564a0c64..2de41b47c138 100644 --- a/drivers/resctrl/test_mpam_devices.c +++ b/drivers/resctrl/test_mpam_devices.c @@ -324,7 +324,7 @@ static void test_mpam_enable_merge_features(struct kunit *test) static void test_mpam_reset_msc_bitmap(struct kunit *test) { - char __iomem *buf = kunit_kzalloc(test, SZ_16K, GFP_KERNEL); + char __iomem *buf = (__force char __iomem *)kunit_kzalloc(test, SZ_16K, GFP_KERNEL); struct mpam_msc fake_msc = {}; u32 *test_result; @@ -339,7 +339,7 @@ static void test_mpam_reset_msc_bitmap(struct kunit *test) mutex_init(&fake_msc.part_sel_lock); mutex_lock(&fake_msc.part_sel_lock); - test_result = (u32 *)(buf + MPAMCFG_CPBM); + test_result = (__force u32 *)(buf + MPAMCFG_CPBM); mpam_reset_msc_bitmap(&fake_msc, MPAMCFG_CPBM, 0); KUNIT_EXPECT_EQ(test, test_result[0], 0); -- 2.51.0
null
null
null
[PATCH 1/2] arm_mpam: Force __iomem casts
Hi Krzysztof, On 2/16/26 11:02, Krzysztof Kozlowski wrote: This change looks good to me. As sparse is currently broken, I needed to use the patch from [1] to reproduce this. Copied here for convenience. diff --git a/include/linux/gfp.h b/include/linux/gfp.h index 2b30a0529d48..90536b2bc42e 100644 --- a/include/linux/gfp.h +++ b/include/linux/gfp.h @@ -14,8 +14,8 @@ struct vm_area_struct; struct mempolicy; /* Helper macro to avoid gfp flags if they are the default one */ -#define __default_gfp(a,...) a -#define default_gfp(...) __default_gfp(__VA_ARGS__ __VA_OPT__(,) GFP_KERNEL) +#define __default_gfp(a,b,...) b +#define default_gfp(...) __default_gfp(,##__VA_ARGS__,GFP_KERNEL) There is a kernel test robot report [2] and that asks for these tags: Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202512160133.eAzPdJv2-lkp@intel.com/ Acked-by: Ben Horgan <ben.horgan@arm.com> [1] https://lore.kernel.org/all/CAHk-=wijD-giccF6sJ+BdJpGDX9kPEUT6kryaQG0GRyJ3QQwng@mail.gmail.com/ [2] https://lore.kernel.org/all/202512160133.eAzPdJv2-lkp@intel.com/ Thanks, Ben
{ "author": "Ben Horgan <ben.horgan@arm.com>", "date": "Fri, 27 Feb 2026 14:06:27 +0000", "is_openbsd": false, "thread_id": "18f3f73a-9bc6-4518-b3e4-730fa0793146@oss.qualcomm.com.mbox.gz" }
lkml_critique
lkml
Code allocates standard kernel memory to pass to the MPAM, which expects __iomem. The code is safe, because __iomem accessors should work fine on kernel mapped memory, however leads to sparse warnings: test_mpam_devices.c:327:42: warning: incorrect type in initializer (different address spaces) test_mpam_devices.c:327:42: expected char [noderef] __iomem *buf test_mpam_devices.c:327:42: got void * test_mpam_devices.c:342:24: warning: cast removes address space '__iomem' of expression Cast the pointer to memory via __force to silence them. Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@oss.qualcomm.com> --- drivers/resctrl/test_mpam_devices.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/resctrl/test_mpam_devices.c b/drivers/resctrl/test_mpam_devices.c index 3e8d564a0c64..2de41b47c138 100644 --- a/drivers/resctrl/test_mpam_devices.c +++ b/drivers/resctrl/test_mpam_devices.c @@ -324,7 +324,7 @@ static void test_mpam_enable_merge_features(struct kunit *test) static void test_mpam_reset_msc_bitmap(struct kunit *test) { - char __iomem *buf = kunit_kzalloc(test, SZ_16K, GFP_KERNEL); + char __iomem *buf = (__force char __iomem *)kunit_kzalloc(test, SZ_16K, GFP_KERNEL); struct mpam_msc fake_msc = {}; u32 *test_result; @@ -339,7 +339,7 @@ static void test_mpam_reset_msc_bitmap(struct kunit *test) mutex_init(&fake_msc.part_sel_lock); mutex_lock(&fake_msc.part_sel_lock); - test_result = (u32 *)(buf + MPAMCFG_CPBM); + test_result = (__force u32 *)(buf + MPAMCFG_CPBM); mpam_reset_msc_bitmap(&fake_msc, MPAMCFG_CPBM, 0); KUNIT_EXPECT_EQ(test, test_result[0], 0); -- 2.51.0
null
null
null
[PATCH 1/2] arm_mpam: Force __iomem casts
On 27/02/2026 15:06, Ben Horgan wrote: The branch from Al Viro was working fine at that time, now merged to master. That I did not know. Anyone can run sparse, as I am doing every now and then, and find issues. Best regards, Krzysztof
{ "author": "Krzysztof Kozlowski <krzysztof.kozlowski@oss.qualcomm.com>", "date": "Fri, 27 Feb 2026 15:51:13 +0100", "is_openbsd": false, "thread_id": "18f3f73a-9bc6-4518-b3e4-730fa0793146@oss.qualcomm.com.mbox.gz" }
lkml_critique
lkml
Previously, `KBox::pin_slice` required the initializer error type to match the return error type via `E: From<AllocError>`. This prevented using infallible initializers like `new_mutex!` inside `pin_slice`, because `Infallible` does not implement `From<AllocError>`. Introduce a separate type parameter `E2` for the initializer error type and require `AllocError: Into<E>` and `E2: Into<E>` instead. This allows the initializer to return a different error type that can be converted into the final error type, enabling use of infallible pin initializers in fallible allocation contexts. Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org> --- @Benno, I would like to add your SoB and CDB tags. --- rust/kernel/alloc/kbox.rs | 19 +++++++++++++------ 1 file changed, 13 insertions(+), 6 deletions(-) diff --git a/rust/kernel/alloc/kbox.rs b/rust/kernel/alloc/kbox.rs index 622b3529edfcb..8dbc58b988f1c 100644 --- a/rust/kernel/alloc/kbox.rs +++ b/rust/kernel/alloc/kbox.rs @@ -323,7 +323,7 @@ pub fn pin(x: T, flags: Flags) -> Result<Pin<Box<T, A>>, AllocError> /// } /// /// // Allocate a boxed slice of 10 `Example`s. 
- /// let s = KBox::pin_slice( + /// let s = KBox::pin_slice::<_, _, Error, _>( /// | _i | Example::new(), /// 10, /// GFP_KERNEL @@ -333,24 +333,31 @@ pub fn pin(x: T, flags: Flags) -> Result<Pin<Box<T, A>>, AllocError> /// assert_eq!(s[3].d.lock().a, 20); /// # Ok::<(), Error>(()) /// ``` - pub fn pin_slice<Func, Item, E>( + pub fn pin_slice<Func, Item, E, E2>( mut init: Func, len: usize, flags: Flags, ) -> Result<Pin<Box<[T], A>>, E> where Func: FnMut(usize) -> Item, - Item: PinInit<T, E>, - E: From<AllocError>, + Item: PinInit<T, E2>, + AllocError: Into<E>, + E2: Into<E>, { - let mut buffer = super::Vec::<T, A>::with_capacity(len, flags)?; + let mut buffer = match super::Vec::<T, A>::with_capacity(len, flags) { + Ok(buffer) => buffer, + Err(err) => return Err(err.into()), + }; for i in 0..len { let ptr = buffer.spare_capacity_mut().as_mut_ptr().cast(); // SAFETY: // - `ptr` is a valid pointer to uninitialized memory. // - `ptr` is not used if an error is returned. // - `ptr` won't be moved until it is dropped, i.e. it is pinned. - unsafe { init(i).__pinned_init(ptr)? }; + match unsafe { init(i).__pinned_init(ptr) } { + Ok(()) => (), + Err(err) => return Err(err.into()), + } // SAFETY: // - `i + 1 <= len`, hence we don't exceed the capacity, due to the call to --- base-commit: 05f7e89ab9731565d8a62e3b5d1ec206485eeb0b change-id: 20260214-pin-slice-init-e8ef96fc07b9 Best regards, -- Andreas Hindborg <a.hindborg@kernel.org>
null
null
null
[PATCH] rust: alloc: allow different error types in `KBox::pin_slice`
Andreas Hindborg <a.hindborg@kernel.org> writes: It just occurred to me that we should probably add a Zulip link as well: Link: https://rust-for-linux.zulipchat.com/#narrow/channel/288089-General/topic/.E2.9C.94.20Constructing.20Mutex.20from.20PinInit.3CT.2C.20Error.3E/with/567385936 Best regards, Andreas Hindborg
{ "author": "Andreas Hindborg <a.hindborg@kernel.org>", "date": "Sat, 14 Feb 2026 14:37:12 +0100", "is_openbsd": false, "thread_id": "DGPTKG9OT59O.1N5K1CSI1KJMV@garyguo.net.mbox.gz" }
lkml_critique
lkml
Previously, `KBox::pin_slice` required the initializer error type to match the return error type via `E: From<AllocError>`. This prevented using infallible initializers like `new_mutex!` inside `pin_slice`, because `Infallible` does not implement `From<AllocError>`. Introduce a separate type parameter `E2` for the initializer error type and require `AllocError: Into<E>` and `E2: Into<E>` instead. This allows the initializer to return a different error type that can be converted into the final error type, enabling use of infallible pin initializers in fallible allocation contexts. Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org> --- @Benno, I would like to add your SoB and CDB tags. --- rust/kernel/alloc/kbox.rs | 19 +++++++++++++------ 1 file changed, 13 insertions(+), 6 deletions(-) diff --git a/rust/kernel/alloc/kbox.rs b/rust/kernel/alloc/kbox.rs index 622b3529edfcb..8dbc58b988f1c 100644 --- a/rust/kernel/alloc/kbox.rs +++ b/rust/kernel/alloc/kbox.rs @@ -323,7 +323,7 @@ pub fn pin(x: T, flags: Flags) -> Result<Pin<Box<T, A>>, AllocError> /// } /// /// // Allocate a boxed slice of 10 `Example`s. 
- /// let s = KBox::pin_slice( + /// let s = KBox::pin_slice::<_, _, Error, _>( /// | _i | Example::new(), /// 10, /// GFP_KERNEL @@ -333,24 +333,31 @@ pub fn pin(x: T, flags: Flags) -> Result<Pin<Box<T, A>>, AllocError> /// assert_eq!(s[3].d.lock().a, 20); /// # Ok::<(), Error>(()) /// ``` - pub fn pin_slice<Func, Item, E>( + pub fn pin_slice<Func, Item, E, E2>( mut init: Func, len: usize, flags: Flags, ) -> Result<Pin<Box<[T], A>>, E> where Func: FnMut(usize) -> Item, - Item: PinInit<T, E>, - E: From<AllocError>, + Item: PinInit<T, E2>, + AllocError: Into<E>, + E2: Into<E>, { - let mut buffer = super::Vec::<T, A>::with_capacity(len, flags)?; + let mut buffer = match super::Vec::<T, A>::with_capacity(len, flags) { + Ok(buffer) => buffer, + Err(err) => return Err(err.into()), + }; for i in 0..len { let ptr = buffer.spare_capacity_mut().as_mut_ptr().cast(); // SAFETY: // - `ptr` is a valid pointer to uninitialized memory. // - `ptr` is not used if an error is returned. // - `ptr` won't be moved until it is dropped, i.e. it is pinned. - unsafe { init(i).__pinned_init(ptr)? }; + match unsafe { init(i).__pinned_init(ptr) } { + Ok(()) => (), + Err(err) => return Err(err.into()), + } // SAFETY: // - `i + 1 <= len`, hence we don't exceed the capacity, due to the call to --- base-commit: 05f7e89ab9731565d8a62e3b5d1ec206485eeb0b change-id: 20260214-pin-slice-init-e8ef96fc07b9 Best regards, -- Andreas Hindborg <a.hindborg@kernel.org>
null
null
null
[PATCH] rust: alloc: allow different error types in `KBox::pin_slice`
On Sat Feb 14, 2026 at 2:28 PM CET, Andreas Hindborg wrote: I assume you have a user? I.e. do you need this patch in another tree? I think we should keep this bound and just add: E: From<E2>, This... ...and this match becomes unnecessary then.
{ "author": "\"Danilo Krummrich\" <dakr@kernel.org>", "date": "Sat, 14 Feb 2026 15:17:34 +0100", "is_openbsd": false, "thread_id": "DGPTKG9OT59O.1N5K1CSI1KJMV@garyguo.net.mbox.gz" }
lkml_critique
lkml
Previously, `KBox::pin_slice` required the initializer error type to match the return error type via `E: From<AllocError>`. This prevented using infallible initializers like `new_mutex!` inside `pin_slice`, because `Infallible` does not implement `From<AllocError>`. Introduce a separate type parameter `E2` for the initializer error type and require `AllocError: Into<E>` and `E2: Into<E>` instead. This allows the initializer to return a different error type that can be converted into the final error type, enabling use of infallible pin initializers in fallible allocation contexts. Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org> --- @Benno, I would like to add your SoB and CDB tags. --- rust/kernel/alloc/kbox.rs | 19 +++++++++++++------ 1 file changed, 13 insertions(+), 6 deletions(-) diff --git a/rust/kernel/alloc/kbox.rs b/rust/kernel/alloc/kbox.rs index 622b3529edfcb..8dbc58b988f1c 100644 --- a/rust/kernel/alloc/kbox.rs +++ b/rust/kernel/alloc/kbox.rs @@ -323,7 +323,7 @@ pub fn pin(x: T, flags: Flags) -> Result<Pin<Box<T, A>>, AllocError> /// } /// /// // Allocate a boxed slice of 10 `Example`s. 
- /// let s = KBox::pin_slice( + /// let s = KBox::pin_slice::<_, _, Error, _>( /// | _i | Example::new(), /// 10, /// GFP_KERNEL @@ -333,24 +333,31 @@ pub fn pin(x: T, flags: Flags) -> Result<Pin<Box<T, A>>, AllocError> /// assert_eq!(s[3].d.lock().a, 20); /// # Ok::<(), Error>(()) /// ``` - pub fn pin_slice<Func, Item, E>( + pub fn pin_slice<Func, Item, E, E2>( mut init: Func, len: usize, flags: Flags, ) -> Result<Pin<Box<[T], A>>, E> where Func: FnMut(usize) -> Item, - Item: PinInit<T, E>, - E: From<AllocError>, + Item: PinInit<T, E2>, + AllocError: Into<E>, + E2: Into<E>, { - let mut buffer = super::Vec::<T, A>::with_capacity(len, flags)?; + let mut buffer = match super::Vec::<T, A>::with_capacity(len, flags) { + Ok(buffer) => buffer, + Err(err) => return Err(err.into()), + }; for i in 0..len { let ptr = buffer.spare_capacity_mut().as_mut_ptr().cast(); // SAFETY: // - `ptr` is a valid pointer to uninitialized memory. // - `ptr` is not used if an error is returned. // - `ptr` won't be moved until it is dropped, i.e. it is pinned. - unsafe { init(i).__pinned_init(ptr)? }; + match unsafe { init(i).__pinned_init(ptr) } { + Ok(()) => (), + Err(err) => return Err(err.into()), + } // SAFETY: // - `i + 1 <= len`, hence we don't exceed the capacity, due to the call to --- base-commit: 05f7e89ab9731565d8a62e3b5d1ec206485eeb0b change-id: 20260214-pin-slice-init-e8ef96fc07b9 Best regards, -- Andreas Hindborg <a.hindborg@kernel.org>
null
null
null
[PATCH] rust: alloc: allow different error types in `KBox::pin_slice`
On Sat Feb 14, 2026 at 3:17 PM CET, Danilo Krummrich wrote: The `Into` trait bounds are the idiomatic ones for functions consuming things. See https://doc.rust-lang.org/std/convert/trait.Into.html: Prefer using Into over From when specifying trait bounds on a generic function to ensure that types that only implement Into can be used as well. I should've said something to Andreas, since he created the commit message (but this patch has left my L3 cache). Cheers, Benno
{ "author": "\"Benno Lossin\" <lossin@kernel.org>", "date": "Sat, 14 Feb 2026 15:40:21 +0100", "is_openbsd": false, "thread_id": "DGPTKG9OT59O.1N5K1CSI1KJMV@garyguo.net.mbox.gz" }
lkml_critique
lkml
Previously, `KBox::pin_slice` required the initializer error type to match the return error type via `E: From<AllocError>`. This prevented using infallible initializers like `new_mutex!` inside `pin_slice`, because `Infallible` does not implement `From<AllocError>`. Introduce a separate type parameter `E2` for the initializer error type and require `AllocError: Into<E>` and `E2: Into<E>` instead. This allows the initializer to return a different error type that can be converted into the final error type, enabling use of infallible pin initializers in fallible allocation contexts. Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org> --- @Benno, I would like to add your SoB and CDB tags. --- rust/kernel/alloc/kbox.rs | 19 +++++++++++++------ 1 file changed, 13 insertions(+), 6 deletions(-) diff --git a/rust/kernel/alloc/kbox.rs b/rust/kernel/alloc/kbox.rs index 622b3529edfcb..8dbc58b988f1c 100644 --- a/rust/kernel/alloc/kbox.rs +++ b/rust/kernel/alloc/kbox.rs @@ -323,7 +323,7 @@ pub fn pin(x: T, flags: Flags) -> Result<Pin<Box<T, A>>, AllocError> /// } /// /// // Allocate a boxed slice of 10 `Example`s. 
- /// let s = KBox::pin_slice( + /// let s = KBox::pin_slice::<_, _, Error, _>( /// | _i | Example::new(), /// 10, /// GFP_KERNEL @@ -333,24 +333,31 @@ pub fn pin(x: T, flags: Flags) -> Result<Pin<Box<T, A>>, AllocError> /// assert_eq!(s[3].d.lock().a, 20); /// # Ok::<(), Error>(()) /// ``` - pub fn pin_slice<Func, Item, E>( + pub fn pin_slice<Func, Item, E, E2>( mut init: Func, len: usize, flags: Flags, ) -> Result<Pin<Box<[T], A>>, E> where Func: FnMut(usize) -> Item, - Item: PinInit<T, E>, - E: From<AllocError>, + Item: PinInit<T, E2>, + AllocError: Into<E>, + E2: Into<E>, { - let mut buffer = super::Vec::<T, A>::with_capacity(len, flags)?; + let mut buffer = match super::Vec::<T, A>::with_capacity(len, flags) { + Ok(buffer) => buffer, + Err(err) => return Err(err.into()), + }; for i in 0..len { let ptr = buffer.spare_capacity_mut().as_mut_ptr().cast(); // SAFETY: // - `ptr` is a valid pointer to uninitialized memory. // - `ptr` is not used if an error is returned. // - `ptr` won't be moved until it is dropped, i.e. it is pinned. - unsafe { init(i).__pinned_init(ptr)? }; + match unsafe { init(i).__pinned_init(ptr) } { + Ok(()) => (), + Err(err) => return Err(err.into()), + } // SAFETY: // - `i + 1 <= len`, hence we don't exceed the capacity, due to the call to --- base-commit: 05f7e89ab9731565d8a62e3b5d1ec206485eeb0b change-id: 20260214-pin-slice-init-e8ef96fc07b9 Best regards, -- Andreas Hindborg <a.hindborg@kernel.org>
null
null
null
[PATCH] rust: alloc: allow different error types in `KBox::pin_slice`
On Sat Feb 14, 2026 at 3:40 PM CET, Benno Lossin wrote: Yeah, but isn't this only because of [1], which does not apply to the kernel because our minimum compiler version is 1.78 anyways? I.e. are there any cases where we can't implement From in the kernel and have to fall back to Into? [1] https://doc.rust-lang.org/std/convert/trait.Into.html#implementing-into-for-conversions-to-external-types-in-old-versions-of-rust
{ "author": "\"Danilo Krummrich\" <dakr@kernel.org>", "date": "Sat, 14 Feb 2026 15:56:43 +0100", "is_openbsd": false, "thread_id": "DGPTKG9OT59O.1N5K1CSI1KJMV@garyguo.net.mbox.gz" }
lkml_critique
lkml
Previously, `KBox::pin_slice` required the initializer error type to match the return error type via `E: From<AllocError>`. This prevented using infallible initializers like `new_mutex!` inside `pin_slice`, because `Infallible` does not implement `From<AllocError>`. Introduce a separate type parameter `E2` for the initializer error type and require `AllocError: Into<E>` and `E2: Into<E>` instead. This allows the initializer to return a different error type that can be converted into the final error type, enabling use of infallible pin initializers in fallible allocation contexts. Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org> --- @Benno, I would like to add your SoB and CDB tags. --- rust/kernel/alloc/kbox.rs | 19 +++++++++++++------ 1 file changed, 13 insertions(+), 6 deletions(-) diff --git a/rust/kernel/alloc/kbox.rs b/rust/kernel/alloc/kbox.rs index 622b3529edfcb..8dbc58b988f1c 100644 --- a/rust/kernel/alloc/kbox.rs +++ b/rust/kernel/alloc/kbox.rs @@ -323,7 +323,7 @@ pub fn pin(x: T, flags: Flags) -> Result<Pin<Box<T, A>>, AllocError> /// } /// /// // Allocate a boxed slice of 10 `Example`s. 
- /// let s = KBox::pin_slice( + /// let s = KBox::pin_slice::<_, _, Error, _>( /// | _i | Example::new(), /// 10, /// GFP_KERNEL @@ -333,24 +333,31 @@ pub fn pin(x: T, flags: Flags) -> Result<Pin<Box<T, A>>, AllocError> /// assert_eq!(s[3].d.lock().a, 20); /// # Ok::<(), Error>(()) /// ``` - pub fn pin_slice<Func, Item, E>( + pub fn pin_slice<Func, Item, E, E2>( mut init: Func, len: usize, flags: Flags, ) -> Result<Pin<Box<[T], A>>, E> where Func: FnMut(usize) -> Item, - Item: PinInit<T, E>, - E: From<AllocError>, + Item: PinInit<T, E2>, + AllocError: Into<E>, + E2: Into<E>, { - let mut buffer = super::Vec::<T, A>::with_capacity(len, flags)?; + let mut buffer = match super::Vec::<T, A>::with_capacity(len, flags) { + Ok(buffer) => buffer, + Err(err) => return Err(err.into()), + }; for i in 0..len { let ptr = buffer.spare_capacity_mut().as_mut_ptr().cast(); // SAFETY: // - `ptr` is a valid pointer to uninitialized memory. // - `ptr` is not used if an error is returned. // - `ptr` won't be moved until it is dropped, i.e. it is pinned. - unsafe { init(i).__pinned_init(ptr)? }; + match unsafe { init(i).__pinned_init(ptr) } { + Ok(()) => (), + Err(err) => return Err(err.into()), + } // SAFETY: // - `i + 1 <= len`, hence we don't exceed the capacity, due to the call to --- base-commit: 05f7e89ab9731565d8a62e3b5d1ec206485eeb0b change-id: 20260214-pin-slice-init-e8ef96fc07b9 Best regards, -- Andreas Hindborg <a.hindborg@kernel.org>
null
null
null
[PATCH] rust: alloc: allow different error types in `KBox::pin_slice`
On Sat Feb 14, 2026 at 3:56 PM CET, Danilo Krummrich wrote: Hmm that's interesting. I'm not sure if that's the only reason. It would be interesting to ask the Rust folks 1) whether there is a different use-case for `Into` today; and 2) if they could remove `Into`, would they? If the answer to 2 is "yes", then we could think about doing that (or at least document it for us). Cheers, Benno
{ "author": "\"Benno Lossin\" <lossin@kernel.org>", "date": "Mon, 16 Feb 2026 00:29:45 +0100", "is_openbsd": false, "thread_id": "DGPTKG9OT59O.1N5K1CSI1KJMV@garyguo.net.mbox.gz" }
lkml_critique
lkml
Previously, `KBox::pin_slice` required the initializer error type to match the return error type via `E: From<AllocError>`. This prevented using infallible initializers like `new_mutex!` inside `pin_slice`, because `Infallible` does not implement `From<AllocError>`. Introduce a separate type parameter `E2` for the initializer error type and require `AllocError: Into<E>` and `E2: Into<E>` instead. This allows the initializer to return a different error type that can be converted into the final error type, enabling use of infallible pin initializers in fallible allocation contexts. Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org> --- @Benno, I would like to add your SoB and CDB tags. --- rust/kernel/alloc/kbox.rs | 19 +++++++++++++------ 1 file changed, 13 insertions(+), 6 deletions(-) diff --git a/rust/kernel/alloc/kbox.rs b/rust/kernel/alloc/kbox.rs index 622b3529edfcb..8dbc58b988f1c 100644 --- a/rust/kernel/alloc/kbox.rs +++ b/rust/kernel/alloc/kbox.rs @@ -323,7 +323,7 @@ pub fn pin(x: T, flags: Flags) -> Result<Pin<Box<T, A>>, AllocError> /// } /// /// // Allocate a boxed slice of 10 `Example`s. 
- /// let s = KBox::pin_slice( + /// let s = KBox::pin_slice::<_, _, Error, _>( /// | _i | Example::new(), /// 10, /// GFP_KERNEL @@ -333,24 +333,31 @@ pub fn pin(x: T, flags: Flags) -> Result<Pin<Box<T, A>>, AllocError> /// assert_eq!(s[3].d.lock().a, 20); /// # Ok::<(), Error>(()) /// ``` - pub fn pin_slice<Func, Item, E>( + pub fn pin_slice<Func, Item, E, E2>( mut init: Func, len: usize, flags: Flags, ) -> Result<Pin<Box<[T], A>>, E> where Func: FnMut(usize) -> Item, - Item: PinInit<T, E>, - E: From<AllocError>, + Item: PinInit<T, E2>, + AllocError: Into<E>, + E2: Into<E>, { - let mut buffer = super::Vec::<T, A>::with_capacity(len, flags)?; + let mut buffer = match super::Vec::<T, A>::with_capacity(len, flags) { + Ok(buffer) => buffer, + Err(err) => return Err(err.into()), + }; for i in 0..len { let ptr = buffer.spare_capacity_mut().as_mut_ptr().cast(); // SAFETY: // - `ptr` is a valid pointer to uninitialized memory. // - `ptr` is not used if an error is returned. // - `ptr` won't be moved until it is dropped, i.e. it is pinned. - unsafe { init(i).__pinned_init(ptr)? }; + match unsafe { init(i).__pinned_init(ptr) } { + Ok(()) => (), + Err(err) => return Err(err.into()), + } // SAFETY: // - `i + 1 <= len`, hence we don't exceed the capacity, due to the call to --- base-commit: 05f7e89ab9731565d8a62e3b5d1ec206485eeb0b change-id: 20260214-pin-slice-init-e8ef96fc07b9 Best regards, -- Andreas Hindborg <a.hindborg@kernel.org>
null
null
null
[PATCH] rust: alloc: allow different error types in `KBox::pin_slice`
On Sat, Feb 14, 2026 at 03:56:43PM +0100, Danilo Krummrich wrote: Probably not, but it's still best practice to use Into over From when specifying trait bounds. Alice
{ "author": "Alice Ryhl <aliceryhl@google.com>", "date": "Mon, 16 Feb 2026 08:48:43 +0000", "is_openbsd": false, "thread_id": "DGPTKG9OT59O.1N5K1CSI1KJMV@garyguo.net.mbox.gz" }
lkml_critique
lkml
Previously, `KBox::pin_slice` required the initializer error type to match the return error type via `E: From<AllocError>`. This prevented using infallible initializers like `new_mutex!` inside `pin_slice`, because `Infallible` does not implement `From<AllocError>`. Introduce a separate type parameter `E2` for the initializer error type and require `AllocError: Into<E>` and `E2: Into<E>` instead. This allows the initializer to return a different error type that can be converted into the final error type, enabling use of infallible pin initializers in fallible allocation contexts. Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org> --- @Benno, I would like to add your SoB and CDB tags. --- rust/kernel/alloc/kbox.rs | 19 +++++++++++++------ 1 file changed, 13 insertions(+), 6 deletions(-) diff --git a/rust/kernel/alloc/kbox.rs b/rust/kernel/alloc/kbox.rs index 622b3529edfcb..8dbc58b988f1c 100644 --- a/rust/kernel/alloc/kbox.rs +++ b/rust/kernel/alloc/kbox.rs @@ -323,7 +323,7 @@ pub fn pin(x: T, flags: Flags) -> Result<Pin<Box<T, A>>, AllocError> /// } /// /// // Allocate a boxed slice of 10 `Example`s. 
- /// let s = KBox::pin_slice( + /// let s = KBox::pin_slice::<_, _, Error, _>( /// | _i | Example::new(), /// 10, /// GFP_KERNEL @@ -333,24 +333,31 @@ pub fn pin(x: T, flags: Flags) -> Result<Pin<Box<T, A>>, AllocError> /// assert_eq!(s[3].d.lock().a, 20); /// # Ok::<(), Error>(()) /// ``` - pub fn pin_slice<Func, Item, E>( + pub fn pin_slice<Func, Item, E, E2>( mut init: Func, len: usize, flags: Flags, ) -> Result<Pin<Box<[T], A>>, E> where Func: FnMut(usize) -> Item, - Item: PinInit<T, E>, - E: From<AllocError>, + Item: PinInit<T, E2>, + AllocError: Into<E>, + E2: Into<E>, { - let mut buffer = super::Vec::<T, A>::with_capacity(len, flags)?; + let mut buffer = match super::Vec::<T, A>::with_capacity(len, flags) { + Ok(buffer) => buffer, + Err(err) => return Err(err.into()), + }; for i in 0..len { let ptr = buffer.spare_capacity_mut().as_mut_ptr().cast(); // SAFETY: // - `ptr` is a valid pointer to uninitialized memory. // - `ptr` is not used if an error is returned. // - `ptr` won't be moved until it is dropped, i.e. it is pinned. - unsafe { init(i).__pinned_init(ptr)? }; + match unsafe { init(i).__pinned_init(ptr) } { + Ok(()) => (), + Err(err) => return Err(err.into()), + } // SAFETY: // - `i + 1 <= len`, hence we don't exceed the capacity, due to the call to --- base-commit: 05f7e89ab9731565d8a62e3b5d1ec206485eeb0b change-id: 20260214-pin-slice-init-e8ef96fc07b9 Best regards, -- Andreas Hindborg <a.hindborg@kernel.org>
[PATCH] rust: alloc: allow different error types in `KBox::pin_slice`
On Mon Feb 16, 2026 at 9:48 AM CET, Alice Ryhl wrote:

I'm aware; my point is that I'm questioning this best practice in the
context of a modern and self-contained project like Rust in the kernel.

This patch is a very good example, as there seem to be zero downsides to
a From trait bound, while using the From trait bound allows for cleaner
code. (I.e. we get rid of the matches and can use the '?' operator
instead. To be fair, this could also be written as
`.map_err(Into::into)?`, but still.)

I'd even argue that using a From trait bound here is a feature and not a
limitation. I.e. if someone would pass something that implements Into
(but not From), we'd catch it and can tell the caller to implement From
instead of Into, which is preferred and should always be possible in the
kernel.
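The trade-off described here can be sketched with stand-in types
(`AllocError` and `Error` below are hypothetical, not the kernel definitions):
with an `E: From<AllocError>` bound, bare `?` compiles because its desugaring
calls `From::from`; with only `AllocError: Into<E>`, the conversion must be
spelled out, e.g. via `.map_err(Into::into)?`.

```rust
#[derive(Debug, PartialEq)]
struct AllocError;

#[derive(Debug, PartialEq)]
struct Error;

impl From<AllocError> for Error {
    fn from(_: AllocError) -> Self {
        Error
    }
}

// With `E: From<AllocError>`, `?` converts the error automatically,
// because the `?` desugaring goes through `From::from`.
fn with_from_bound<E: From<AllocError>>(r: Result<u32, AllocError>) -> Result<u32, E> {
    Ok(r? + 1)
}

// With only `AllocError: Into<E>`, bare `?` on `r` does not compile;
// the conversion has to be written out first.
fn with_into_bound<E>(r: Result<u32, AllocError>) -> Result<u32, E>
where
    AllocError: Into<E>,
{
    Ok(r.map_err(Into::into)? + 1)
}

fn main() {
    assert_eq!(with_from_bound::<Error>(Ok(41)), Ok(42));
    assert_eq!(with_into_bound::<Error>(Err(AllocError)), Err(Error));
}
```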
From: "Danilo Krummrich" <dakr@kernel.org>
Date: Mon, 16 Feb 2026 10:37:08 +0100
On Sat Feb 14, 2026 at 2:56 PM GMT, Danilo Krummrich wrote:

There's one benefit to using `From` in a trait bound -- you can call
both `From::from` and `Into::into` inside the function. If you only have
an `Into` bound, then `From::from` is not callable. A very minor
benefit, though.

Another interesting observation is that the `?` operator (i.e. the impl
of the unstable `FromResidual` trait on `Result`) uses `From` instead of
`Into`. I cannot find a reason why this is done this way, though.

Best,
Gary
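The point about `From::from` only being callable under a `From` bound can be
shown with a small sketch (again with hypothetical stand-in types, not the
kernel's):

```rust
#[derive(Debug, PartialEq)]
struct AllocError;

#[derive(Debug, PartialEq)]
struct Error;

impl From<AllocError> for Error {
    fn from(_: AllocError) -> Self {
        Error
    }
}

// A `From` bound makes both spellings available inside the function:
// the blanket `impl<T, U: From<T>> Into<U> for T` supplies `into` for free.
fn convert_with_from<E: From<AllocError>>(e: AllocError) -> E {
    E::from(e) // `e.into()` would compile here as well
}

// An `Into` bound only exposes `into`; `E::from(e)` would fail to compile
// here, since nothing guarantees `E: From<AllocError>`.
fn convert_with_into<E>(e: AllocError) -> E
where
    AllocError: Into<E>,
{
    e.into()
}

fn main() {
    assert_eq!(convert_with_from::<Error>(AllocError), Error);
    assert_eq!(convert_with_into::<Error>(AllocError), Error);
}
```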
From: "Gary Guo" <gary@garyguo.net>
Date: Fri, 27 Feb 2026 14:38:18 +0000