data_type   large_stringclasses   3 values
source      large_stringclasses   29 values
code        large_stringlengths   98 to 49.4M
filepath    large_stringlengths   5 to 161
message     large_stringclasses   234 values
commit      large_stringclasses   234 values
subject     large_stringclasses   418 values
critique    large_stringlengths   101 to 1.26M
metadata    dict
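The schema above is a flattened column listing. Assuming the rows that follow map onto these columns positionally (an assumption; the dump does not state the mapping), a record can be sketched as a plain dict, with a small helper to group records by thread. Every field value below is an invented placeholder:

```python
# Hypothetical record shaped after the column schema above.
# Field names come from the schema; all values are placeholders,
# and the positional column mapping is an assumption.
record = {
    "data_type": "lkml_critique",
    "source": "linux-mm",
    "code": "original series/patch text (large string)",
    "filepath": None,
    "message": None,
    "commit": None,
    "subject": "[PATCH RFC 00/19] mm: Add __GFP_UNMAPPED",
    "critique": "reply text (large string)",
    "metadata": {
        "author": "Jane Doe <jane@example.com>",   # placeholder
        "date": "Wed, 25 Feb 2026 16:34:44 +0000",
        "is_openbsd": False,
        "thread_id": "example-thread-id.mbox.gz",   # placeholder
    },
}

def by_thread(records):
    """Group records by their metadata thread_id."""
    groups = {}
    for r in records:
        groups.setdefault(r["metadata"]["thread_id"], []).append(r)
    return groups
```

Grouping the rows below by `metadata.thread_id` would, for example, collect all replies in the `__GFP_UNMAPPED` RFC thread into one list.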
lkml_critique
linux-mm
.:: What? Why? This series adds support for efficiently allocating pages that are not present in the direct map. This is instrumental to two different immediate goals: 1. This supports the effort to remove guest_memfd memory from the direct map [0]. One of the challenges faced in that effort has been efficientl...
null
null
null
[PATCH RFC 00/19] mm: Add __GFP_UNMAPPED
Add a simple smoke test for __GFP_UNMAPPED that tries to exercise flipping pageblocks between mapped/unmapped state. Also add some basic tests for some freelist-indexing helpers. Simplest way to run these on x86: tools/testing/kunit/kunit.py run --arch=x86_64 "page_alloc.*" \ --kconfig_add CONFIG_MERMAP=y --kconfig...
{ "author": "Brendan Jackman <jackmanb@google.com>", "date": "Wed, 25 Feb 2026 16:34:44 +0000", "is_openbsd": false, "thread_id": "DGPP0DOFCGBQ.PZZ9GMPEEPG6@google.com.mbox.gz" }
lkml_critique
linux-mm
.:: What? Why? This series adds support for efficiently allocating pages that are not present in the direct map. This is instrumental to two different immediate goals: 1. This supports the effort to remove guest_memfd memory from the direct map [0]. One of the challenges faced in that effort has been efficientl...
null
null
null
[PATCH RFC 00/19] mm: Add __GFP_UNMAPPED
Relaying some code review from an AI that I wasn't able to run before sending... (This isn't the AI's verbatim output I'm filtering it and rephrasing). On Wed Feb 25, 2026 at 4:34 PM UTC, Brendan Jackman wrote: Oops, I forgot to account for @use_reserve here. The alloc-tracking structures should have a reservation ...
{ "author": "Brendan Jackman <jackmanb@google.com>", "date": "Fri, 27 Feb 2026 10:47:45 +0000", "is_openbsd": false, "thread_id": "DGPP0DOFCGBQ.PZZ9GMPEEPG6@google.com.mbox.gz" }
lkml_critique
linux-mm
.:: What? Why? This series adds support for efficiently allocating pages that are not present in the direct map. This is instrumental to two different immediate goals: 1. This supports the effort to remove guest_memfd memory from the direct map [0]. One of the challenges faced in that effort has been efficientl...
null
null
null
[PATCH RFC 00/19] mm: Add __GFP_UNMAPPED
More bugs found by AI... (I am not relaying all of the issues, just the interesting ones/ones in the most interesting bits of code). On Wed Feb 25, 2026 at 4:34 PM UTC, Brendan Jackman wrote: Forgot to take the zone lock.
{ "author": "Brendan Jackman <jackmanb@google.com>", "date": "Fri, 27 Feb 2026 10:56:35 +0000", "is_openbsd": false, "thread_id": "DGPP0DOFCGBQ.PZZ9GMPEEPG6@google.com.mbox.gz" }
lkml_critique
linux-mm
.:: What? Why? This series adds support for efficiently allocating pages that are not present in the direct map. This is instrumental to two different immediate goals: 1. This supports the effort to remove guest_memfd memory from the direct map [0]. One of the challenges faced in that effort has been efficientl...
null
null
null
[PATCH RFC 00/19] mm: Add __GFP_UNMAPPED
On Wed Feb 25, 2026 at 4:34 PM UTC, Brendan Jackman wrote: Oh, that should be PAGE_KERNEL_NONGLOBAL. I think this means I never tested this path as this will crash - mermap_get_reserved() will WARN() and return NULL and then we'll dereference that below. Maybe that's a bad idea, maybe it should WARN() but serve the...
{ "author": "Brendan Jackman <jackmanb@google.com>", "date": "Fri, 27 Feb 2026 11:04:01 +0000", "is_openbsd": false, "thread_id": "DGPP0DOFCGBQ.PZZ9GMPEEPG6@google.com.mbox.gz" }
lkml_critique
linux-mm
The following series contains cleanups and prerequisites for my work on khugepaged mTHP support [1]. These have been separated out to ease review. The first patch in the series refactors the page fault folio to pte mapping and follows a similar convention as defined by map_anon_folio_pmd_(no)pf(). This not only cleans...
null
null
null
[PATCH mm-unstable v2 0/5] mm: khugepaged cleanups and mTHP prerequisites
The anonymous page fault handler in do_anonymous_page() open-codes the sequence to map a newly allocated anonymous folio at the PTE level: - construct the PTE entry - add rmap - add to LRU - set the PTEs - update the MMU cache. Introduce two helpers to consolidate this duplicated logic, mirroring the existing m...
{ "author": "Nico Pache <npache@redhat.com>", "date": "Wed, 25 Feb 2026 18:29:25 -0700", "is_openbsd": false, "thread_id": "20260226012929.169479-1-npache@redhat.com.mbox.gz" }
lkml_critique
linux-mm
The following series contains cleanups and prerequisites for my work on khugepaged mTHP support [1]. These have been separated out to ease review. The first patch in the series refactors the page fault folio to pte mapping and follows a similar convention as defined by map_anon_folio_pmd_(no)pf(). This not only cleans...
null
null
null
[PATCH mm-unstable v2 0/5] mm: khugepaged cleanups and mTHP prerequisites
In order to add mTHP support to khugepaged, we will often be checking if a given order is (or is not) a PMD order. Some places in the kernel already use this check, so let's create a simple helper function to keep the code clean and readable. Acked-by: David Hildenbrand (Arm) <david@kernel.org> Reviewed-by: Wei Yang <r...
{ "author": "Nico Pache <npache@redhat.com>", "date": "Wed, 25 Feb 2026 18:29:26 -0700", "is_openbsd": false, "thread_id": "20260226012929.169479-1-npache@redhat.com.mbox.gz" }
lkml_critique
linux-mm
The following series contains cleanups and prerequisites for my work on khugepaged mTHP support [1]. These have been separated out to ease review. The first patch in the series refactors the page fault folio to pte mapping and follows a similar convention as defined by map_anon_folio_pmd_(no)pf(). This not only cleans...
null
null
null
[PATCH mm-unstable v2 0/5] mm: khugepaged cleanups and mTHP prerequisites
The value (HPAGE_PMD_NR - 1) is used often in the khugepaged code to signify the limit of the max_ptes_* values. Add a define for this to increase code readability and reuse. Acked-by: Pedro Falcato <pfalcato@suse.de> Reviewed-by: Zi Yan <ziy@nvidia.com> Signed-off-by: Nico Pache <npache@redhat.com> --- mm/khugepaged...
{ "author": "Nico Pache <npache@redhat.com>", "date": "Wed, 25 Feb 2026 18:29:27 -0700", "is_openbsd": false, "thread_id": "20260226012929.169479-1-npache@redhat.com.mbox.gz" }
lkml_critique
linux-mm
The following series contains cleanups and prerequisites for my work on khugepaged mTHP support [1]. These have been separated out to ease review. The first patch in the series refactors the page fault folio to pte mapping and follows a similar convention as defined by map_anon_folio_pmd_(no)pf(). This not only cleans...
null
null
null
[PATCH mm-unstable v2 0/5] mm: khugepaged cleanups and mTHP prerequisites
The hpage_collapse functions describe functions used by madvise_collapse and khugepaged. Remove the unnecessary hpage prefix to shorten the function name. Reviewed-by: Wei Yang <richard.weiyang@gmail.com> Reviewed-by: Lance Yang <lance.yang@linux.dev> Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com> Reviewed-by:...
{ "author": "Nico Pache <npache@redhat.com>", "date": "Wed, 25 Feb 2026 18:29:28 -0700", "is_openbsd": false, "thread_id": "20260226012929.169479-1-npache@redhat.com.mbox.gz" }
lkml_critique
linux-mm
The following series contains cleanups and prerequisites for my work on khugepaged mTHP support [1]. These have been separated out to ease review. The first patch in the series refactors the page fault folio to pte mapping and follows a similar convention as defined by map_anon_folio_pmd_(no)pf(). This not only cleans...
null
null
null
[PATCH mm-unstable v2 0/5] mm: khugepaged cleanups and mTHP prerequisites
The khugepaged daemon and madvise_collapse have two different implementations that do almost the same thing. Create collapse_single_pmd to increase code reuse and create an entry point to these two users. Refactor madvise_collapse and collapse_scan_mm_slot to use the new collapse_single_pmd function. This introduces ...
{ "author": "Nico Pache <npache@redhat.com>", "date": "Wed, 25 Feb 2026 18:29:29 -0700", "is_openbsd": false, "thread_id": "20260226012929.169479-1-npache@redhat.com.mbox.gz" }
lkml_critique
linux-mm
The following series contains cleanups and prerequisites for my work on khugepaged mTHP support [1]. These have been separated out to ease review. The first patch in the series refactors the page fault folio to pte mapping and follows a similar convention as defined by map_anon_folio_pmd_(no)pf(). This not only cleans...
null
null
null
[PATCH mm-unstable v2 0/5] mm: khugepaged cleanups and mTHP prerequisites
On 2/26/26 02:29, Nico Pache wrote: One thing: You can also avoid passing in "nr_pages" here, especially when you query the order below, and simply do unsigned int order = folio_order(folio); map_anon_folio_pte_nopf(folio, pte, vma, addr, uffd_wp); add_mm_counter(vma->vm_mm, MM_ANONPAGES, 1U << order); count_mthp_st...
{ "author": "\"David Hildenbrand (Arm)\" <david@kernel.org>", "date": "Thu, 26 Feb 2026 10:27:54 +0100", "is_openbsd": false, "thread_id": "20260226012929.169479-1-npache@redhat.com.mbox.gz" }
lkml_critique
linux-mm
The following series contains cleanups and prerequisites for my work on khugepaged mTHP support [1]. These have been separated out to ease review. The first patch in the series refactors the page fault folio to pte mapping and follows a similar convention as defined by map_anon_folio_pmd_(no)pf(). This not only cleans...
null
null
null
[PATCH mm-unstable v2 0/5] mm: khugepaged cleanups and mTHP prerequisites
On 2/26/26 02:29, Nico Pache wrote: I'd call it "KHUGEPAGED_MAX_PTES_LIMIT", because it's khugepaged specific (no madvise) and matches the parameters. Apart from that Acked-by: David Hildenbrand (Arm) <david@kernel.org> -- Cheers, David
{ "author": "\"David Hildenbrand (Arm)\" <david@kernel.org>", "date": "Thu, 26 Feb 2026 10:28:13 +0100", "is_openbsd": false, "thread_id": "20260226012929.169479-1-npache@redhat.com.mbox.gz" }
lkml_critique
linux-mm
The following series contains cleanups and prerequisites for my work on khugepaged mTHP support [1]. These have been separated out to ease review. The first patch in the series refactors the page fault folio to pte mapping and follows a similar convention as defined by map_anon_folio_pmd_(no)pf(). This not only cleans...
null
null
null
[PATCH mm-unstable v2 0/5] mm: khugepaged cleanups and mTHP prerequisites
On 2/26/26 02:29, Nico Pache wrote: Probably best to drop Lorenzo's RB after bigger changes. !triggered_wb, right? On all paths below, you set "*mmap_locked = false". Why even bother about setting the variable? This might all read nicer without the goto and without the early return. /* If we have a THP in the ...
{ "author": "\"David Hildenbrand (Arm)\" <david@kernel.org>", "date": "Thu, 26 Feb 2026 10:40:57 +0100", "is_openbsd": false, "thread_id": "20260226012929.169479-1-npache@redhat.com.mbox.gz" }
lkml_critique
linux-mm
The following series contains cleanups and prerequisites for my work on khugepaged mTHP support [1]. These have been separated out to ease review. The first patch in the series refactors the page fault folio to pte mapping and follows a similar convention as defined by map_anon_folio_pmd_(no)pf(). This not only cleans...
null
null
null
[PATCH mm-unstable v2 0/5] mm: khugepaged cleanups and mTHP prerequisites
On Thu, Feb 26, 2026 at 2:28 AM David Hildenbrand (Arm) <david@kernel.org> wrote: Ok before changing that, note that this is also leveraged in the mTHP set. It's technically used for madvise collapse because when it's not khugepaged we set max_ptes_none= 511. But I'm ok with either name! I just want to make sure it m...
{ "author": "Nico Pache <npache@redhat.com>", "date": "Thu, 26 Feb 2026 13:17:19 -0700", "is_openbsd": false, "thread_id": "20260226012929.169479-1-npache@redhat.com.mbox.gz" }
lkml_critique
linux-mm
The following series contains cleanups and prerequisites for my work on khugepaged mTHP support [1]. These have been separated out to ease review. The first patch in the series refactors the page fault folio to pte mapping and follows a similar convention as defined by map_anon_folio_pmd_(no)pf(). This not only cleans...
null
null
null
[PATCH mm-unstable v2 0/5] mm: khugepaged cleanups and mTHP prerequisites
On Thu, Feb 26, 2026 at 2:24 AM Baolin Wang <baolin.wang@linux.alibaba.com> wrote: Yes! Thanks for catching that :) As David and others have pointed out, this lock handling here might be unnecessary and better placed in collapse_single_pmd(). I meant to look into that before posting this but it slipped my mind.
{ "author": "Nico Pache <npache@redhat.com>", "date": "Thu, 26 Feb 2026 13:20:57 -0700", "is_openbsd": false, "thread_id": "20260226012929.169479-1-npache@redhat.com.mbox.gz" }
lkml_critique
linux-mm
The following series contains cleanups and prerequisites for my work on khugepaged mTHP support [1]. These have been separated out to ease review. The first patch in the series refactors the page fault folio to pte mapping and follows a similar convention as defined by map_anon_folio_pmd_(no)pf(). This not only cleans...
null
null
null
[PATCH mm-unstable v2 0/5] mm: khugepaged cleanups and mTHP prerequisites
On Thu, Feb 26, 2026 at 2:41 AM David Hildenbrand (Arm) <david@kernel.org> wrote: Yeah I believe someone (Lorenzo?) pointed that out during the last review cycle. I forgot to look into it :< As you state, I believe we can drop the repetitive mmap_locked (iirc this was introduced in an earlier version before `lock_dro...
{ "author": "Nico Pache <npache@redhat.com>", "date": "Thu, 26 Feb 2026 13:27:45 -0700", "is_openbsd": false, "thread_id": "20260226012929.169479-1-npache@redhat.com.mbox.gz" }
lkml_critique
linux-mm
The following series contains cleanups and prerequisites for my work on khugepaged mTHP support [1]. These have been separated out to ease review. The first patch in the series refactors the page fault folio to pte mapping and follows a similar convention as defined by map_anon_folio_pmd_(no)pf(). This not only cleans...
null
null
null
[PATCH mm-unstable v2 0/5] mm: khugepaged cleanups and mTHP prerequisites
On 2/26/26 21:17, Nico Pache wrote: It's more about disabling that parameter, right? -- Cheers, David
{ "author": "\"David Hildenbrand (Arm)\" <david@kernel.org>", "date": "Fri, 27 Feb 2026 09:52:52 +0100", "is_openbsd": false, "thread_id": "20260226012929.169479-1-npache@redhat.com.mbox.gz" }
lkml_critique
linux-mm
From: Zhaoyang Huang <zhaoyang.huang@unisoc.com> There are two reasons to have the recorded PMD bail out from doing the following iteration 1. It is worth doing such a trade-off in terms of reclaiming efficiency as test_bloom_filter only consumes 20~30 instructions in modern processors(25 instructions in ARM64...
null
null
null
[PATCH] mm: bail out when the PMD has been set in bloom filter
Hi zhaoyang.huang, kernel test robot noticed the following build warnings: [auto build test WARNING on akpm-mm/mm-everything] url: https://github.com/intel-lab-lkp/linux/commits/zhaoyang-huang/mm-bail-out-when-the-PMD-has-been-set-in-bloom-filter/20260227-155729 base: https://git.kernel.org/pub/scm/linux/kernel...
{ "author": "kernel test robot <lkp@intel.com>", "date": "Fri, 27 Feb 2026 19:42:33 +0800", "is_openbsd": false, "thread_id": "202602271916.OBNa34QU-lkp@intel.com.mbox.gz" }
lkml_critique
linux-mm
Architectures like PowerPC use runtime-defined values for PMD_ORDER/PUD_ORDER. This is because it can use either RADIX or HASH MMU at runtime using kernel cmdline. So the pXd_index_size is not known at compile time. Without this fix, when we add huge pfn support on powerpc in the next patch, vfio_pci_core driver compi...
null
null
null
[RFC v1 1/2] drivers/vfio_pci_core: Change PXD_ORDER check from switch case to if/else block
This uses _RPAGE_SW2 bit for the PMD and PUDs similar to PTEs. This also adds support for {pte,pmd,pud}_pgprot helpers needed for follow_pfnmap APIs. This allows us to extend the PFN mappings, e.g. PCI MMIO bars where it can grow as large as 8GB or even bigger, to map at PMD / PUD level. VFIO PCI core driver already s...
{ "author": "\"Ritesh Harjani (IBM)\" <ritesh.list@gmail.com>", "date": "Fri, 27 Feb 2026 11:46:37 +0530", "is_openbsd": false, "thread_id": "87pl5qh3ye.ritesh.list@gmail.com.mbox.gz" }
lkml_critique
linux-mm
Architectures like PowerPC use runtime-defined values for PMD_ORDER/PUD_ORDER. This is because it can use either RADIX or HASH MMU at runtime using kernel cmdline. So the pXd_index_size is not known at compile time. Without this fix, when we add huge pfn support on powerpc in the next patch, vfio_pci_core driver compi...
null
null
null
[RFC v1 1/2] drivers/vfio_pci_core: Change PXD_ORDER check from switch case to if/else block
Le 27/02/2026 à 07:16, Ritesh Harjani (IBM) a écrit : Those braces are unneeded as all legs of the if/else are single lines ifdef could be replaced by IS_ENABLED() because PxD_ORDER and vmf_insert_pfn_xxx() are declared all the time 'else' is not needed because every 'if' leads to a return statement So at the e...
{ "author": "\"Christophe Leroy (CS GROUP)\" <chleroy@kernel.org>", "date": "Fri, 27 Feb 2026 07:42:03 +0100", "is_openbsd": false, "thread_id": "87pl5qh3ye.ritesh.list@gmail.com.mbox.gz" }
lkml_critique
linux-mm
Architectures like PowerPC use runtime-defined values for PMD_ORDER/PUD_ORDER. This is because it can use either RADIX or HASH MMU at runtime using kernel cmdline. So the pXd_index_size is not known at compile time. Without this fix, when we add huge pfn support on powerpc in the next patch, vfio_pci_core driver compi...
null
null
null
[RFC v1 1/2] drivers/vfio_pci_core: Change PXD_ORDER check from switch case to if/else block
Le 27/02/2026 à 07:16, Ritesh Harjani (IBM) a écrit : Reviewed-by: Christophe Leroy (CS GROUP) <chleroy@kernel.org>
{ "author": "\"Christophe Leroy (CS GROUP)\" <chleroy@kernel.org>", "date": "Fri, 27 Feb 2026 07:47:22 +0100", "is_openbsd": false, "thread_id": "87pl5qh3ye.ritesh.list@gmail.com.mbox.gz" }
lkml_critique
linux-mm
Architectures like PowerPC use runtime-defined values for PMD_ORDER/PUD_ORDER. This is because it can use either RADIX or HASH MMU at runtime using kernel cmdline. So the pXd_index_size is not known at compile time. Without this fix, when we add huge pfn support on powerpc in the next patch, vfio_pci_core driver compi...
null
null
null
[RFC v1 1/2] drivers/vfio_pci_core: Change PXD_ORDER check from switch case to if/else block
"Christophe Leroy (CS GROUP)" <chleroy@kernel.org> writes: ^^^ PUD_ORDER Looks a lot cleaner. Thanks! I will make that change in v2. -ritesh
{ "author": "Ritesh Harjani (IBM) <ritesh.list@gmail.com>", "date": "Fri, 27 Feb 2026 16:00:54 +0530", "is_openbsd": false, "thread_id": "87pl5qh3ye.ritesh.list@gmail.com.mbox.gz" }
lkml_critique
linux-mm
Architectures like PowerPC use runtime-defined values for PMD_ORDER/PUD_ORDER. This is because it can use either RADIX or HASH MMU at runtime using kernel cmdline. So the pXd_index_size is not known at compile time. Without this fix, when we add huge pfn support on powerpc in the next patch, vfio_pci_core driver compi...
null
null
null
[RFC v1 1/2] drivers/vfio_pci_core: Change PXD_ORDER check from switch case to if/else block
"Christophe Leroy (CS GROUP)" <chleroy@kernel.org> writes: Thanks for the review! In v2 - I will add above under #ifdef CONFIG_PPC_BOOK3S_64 to avoid build issues with 32-bit PPC. -ritesh
{ "author": "Ritesh Harjani (IBM) <ritesh.list@gmail.com>", "date": "Fri, 27 Feb 2026 16:02:25 +0530", "is_openbsd": false, "thread_id": "87pl5qh3ye.ritesh.list@gmail.com.mbox.gz" }
lkml_critique
linux-mm
Linus, please merge this batch of hotfixes, thanks. The following changes since commit 6de23f81a5e08be8fbf5e8d7e9febc72a5b5f27f: Linux 7.0-rc1 (2026-02-22 13:18:59 -0800) are available in the Git repository at: git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm tags/mm-hotfixes-stable-2026-02-26-14-14 fo...
null
null
null
[GIT PULL] hotfixes for 7.0-rc2
The pull request you sent on Thu, 26 Feb 2026 14:16:33 -0800: has been merged into torvalds/linux.git: https://git.kernel.org/torvalds/c/69062f234a2837d2302a41c2ba125521630deea8 Thank you! -- Deet-doot-dot, I am a bot. https://korg.docs.kernel.org/prtracker.html
{ "author": "pr-tracker-bot@kernel.org", "date": "Fri, 27 Feb 2026 01:56:36 +0000", "is_openbsd": false, "thread_id": "177215739659.1937342.143122209456192930.pr-tracker-bot@kernel.org.mbox.gz" }
lkml_critique
linux-mm
Hi everyone, We propose per-process page size on arm64. Although the proposal is for arm64, perhaps the concept can be extended to other arches, thus the generic topic name. ------------- INTRODUCTION ------------- While mTHP has brought the performance of many workloads running on an arm64 4K kernel closer to that o...
null
null
null
[LSF/MM/BPF TOPIC] Per-process page size
On Tue, Feb 17, 2026 at 08:20:26PM +0530, Dev Jain wrote: Please don't use the term "enlighten". That's used to describe something something or other with hypervisors. Come up with a new term or use one that already exists. That's going to be messy. I don't have a good idea for solving this problem, but the page c...
{ "author": "Matthew Wilcox <willy@infradead.org>", "date": "Tue, 17 Feb 2026 15:22:38 +0000", "is_openbsd": false, "thread_id": "CAC_TJvcvybHqVAV8nAHEvN-UXUQ5hMjZx+_b2W3MY=xgqR9=6w@mail.gmail.com.mbox.gz" }
lkml_critique
linux-mm
Hi everyone, We propose per-process page size on arm64. Although the proposal is for arm64, perhaps the concept can be extended to other arches, thus the generic topic name. ------------- INTRODUCTION ------------- While mTHP has brought the performance of many workloads running on an arm64 4K kernel closer to that o...
null
null
null
[LSF/MM/BPF TOPIC] Per-process page size
On 2/17/26 16:22, Matthew Wilcox wrote: In a private conversation I also raised that some situations might make it impossible/hard to drop+re-read. One example I came up with is if a folio is simply long-term R/O pinned. But I am also not quite sure how mlock might interfere here. So yes, I think the page cache is lik...
{ "author": "\"David Hildenbrand (Arm)\" <david@kernel.org>", "date": "Tue, 17 Feb 2026 16:30:59 +0100", "is_openbsd": false, "thread_id": "CAC_TJvcvybHqVAV8nAHEvN-UXUQ5hMjZx+_b2W3MY=xgqR9=6w@mail.gmail.com.mbox.gz" }
lkml_critique
linux-mm
Hi everyone, We propose per-process page size on arm64. Although the proposal is for arm64, perhaps the concept can be extended to other arches, thus the generic topic name. ------------- INTRODUCTION ------------- While mTHP has brought the performance of many workloads running on an arm64 4K kernel closer to that o...
null
null
null
[LSF/MM/BPF TOPIC] Per-process page size
On 17/02/2026 15:30, David Hildenbrand (Arm) wrote: Dev has a prototype up and running, but based on your comments, I'm guessing there is some horrible race that hasn't hit yet. Would be good to debug the gap in understanding at some point! I guess we could side step the problem for now, by initially requiring that ...
{ "author": "Ryan Roberts <ryan.roberts@arm.com>", "date": "Tue, 17 Feb 2026 15:51:05 +0000", "is_openbsd": false, "thread_id": "CAC_TJvcvybHqVAV8nAHEvN-UXUQ5hMjZx+_b2W3MY=xgqR9=6w@mail.gmail.com.mbox.gz" }
lkml_critique
linux-mm
Hi everyone, We propose per-process page size on arm64. Although the proposal is for arm64, perhaps the concept can be extended to other arches, thus the generic topic name. ------------- INTRODUCTION ------------- While mTHP has brought the performance of many workloads running on an arm64 4K kernel closer to that o...
null
null
null
[LSF/MM/BPF TOPIC] Per-process page size
On 17/02/26 8:52 pm, Matthew Wilcox wrote: Sure. Holding mapping->invalidate_lock, bumping mapping->min_folio_order and dropping-rereading the range suffers from a race - filemap_fault operating on some other partially populated 64K range will observe in filemap_get_folio that nothing is in the pagecache. Then, it w...
{ "author": "Dev Jain <dev.jain@arm.com>", "date": "Wed, 18 Feb 2026 14:09:23 +0530", "is_openbsd": false, "thread_id": "CAC_TJvcvybHqVAV8nAHEvN-UXUQ5hMjZx+_b2W3MY=xgqR9=6w@mail.gmail.com.mbox.gz" }
lkml_critique
linux-mm
Hi everyone, We propose per-process page size on arm64. Although the proposal is for arm64, perhaps the concept can be extended to other arches, thus the generic topic name. ------------- INTRODUCTION ------------- While mTHP has brought the performance of many workloads running on an arm64 4K kernel closer to that o...
null
null
null
[LSF/MM/BPF TOPIC] Per-process page size
On 18/02/26 2:09 pm, Dev Jain wrote: I may have been vague here... to avoid the race I described above, we must ensure that after all folios have been dropped from pagecache, and min order is bumped up, no other code path remembers the old order and partially populates a 64K range. For this we need synchronization.
{ "author": "Dev Jain <dev.jain@arm.com>", "date": "Wed, 18 Feb 2026 14:28:29 +0530", "is_openbsd": false, "thread_id": "CAC_TJvcvybHqVAV8nAHEvN-UXUQ5hMjZx+_b2W3MY=xgqR9=6w@mail.gmail.com.mbox.gz" }
lkml_critique
linux-mm
Hi everyone, We propose per-process page size on arm64. Although the proposal is for arm64, perhaps the concept can be extended to other arches, thus the generic topic name. ------------- INTRODUCTION ------------- While mTHP has brought the performance of many workloads running on an arm64 4K kernel closer to that o...
null
null
null
[LSF/MM/BPF TOPIC] Per-process page size
On 2/18/26 09:58, Dev Jain wrote: And I don't think you can reliably do that when other processes might be using the files concurrently. It's best to start like Ryan suggested: lifting min_order on these systems for now and leaving dynamically switching the min order as future work. -- Cheers, David
{ "author": "\"David Hildenbrand (Arm)\" <david@kernel.org>", "date": "Wed, 18 Feb 2026 10:15:14 +0100", "is_openbsd": false, "thread_id": "CAC_TJvcvybHqVAV8nAHEvN-UXUQ5hMjZx+_b2W3MY=xgqR9=6w@mail.gmail.com.mbox.gz" }
lkml_critique
linux-mm
Hi everyone, We propose per-process page size on arm64. Although the proposal is for arm64, perhaps the concept can be extended to other arches, thus the generic topic name. ------------- INTRODUCTION ------------- While mTHP has brought the performance of many workloads running on an arm64 4K kernel closer to that o...
null
null
null
[LSF/MM/BPF TOPIC] Per-process page size
On Tue, Feb 17, 2026 at 04:30:59PM +0100, David Hildenbrand (Arm) wrote: So what if we convert to max-supported-order the first time somebody calls mmap on a given file? Most files are never mmaped, so it won't affect them. And files that are mmaped are generally not written to. So there should not be much in the pa...
{ "author": "Matthew Wilcox <willy@infradead.org>", "date": "Fri, 20 Feb 2026 04:49:22 +0000", "is_openbsd": false, "thread_id": "CAC_TJvcvybHqVAV8nAHEvN-UXUQ5hMjZx+_b2W3MY=xgqR9=6w@mail.gmail.com.mbox.gz" }
lkml_critique
linux-mm
Hi everyone, We propose per-process page size on arm64. Although the proposal is for arm64, perhaps the concept can be extended to other arches, thus the generic topic name. ------------- INTRODUCTION ------------- While mTHP has brought the performance of many workloads running on an arm64 4K kernel closer to that o...
null
null
null
[LSF/MM/BPF TOPIC] Per-process page size
On Tue, Feb 17, 2026, at 16:22, Matthew Wilcox wrote: I think Loongarch and RISC-V are the candidates for doing whatever Arm does here. MIPS and PowerPC64 could do it in theory, but it's less clear that someone will spend the effort here. This would also be my main concern. There are hundreds of device drivers that ...
{ "author": "\"Arnd Bergmann\" <arnd@arndb.de>", "date": "Fri, 20 Feb 2026 10:49:33 +0100", "is_openbsd": false, "thread_id": "CAC_TJvcvybHqVAV8nAHEvN-UXUQ5hMjZx+_b2W3MY=xgqR9=6w@mail.gmail.com.mbox.gz" }
lkml_critique
linux-mm
Hi everyone, We propose per-process page size on arm64. Although the proposal is for arm64, perhaps the concept can be extended to other arches, thus the generic topic name. ------------- INTRODUCTION ------------- While mTHP has brought the performance of many workloads running on an arm64 4K kernel closer to that o...
null
null
null
[LSF/MM/BPF TOPIC] Per-process page size
On Tue, Feb 17, 2026 at 08:20:26PM +0530, Dev Jain wrote: Could this perhaps be because of larger-page-size kernels being able to use mTHP (and THP) more aggressively? It would be interesting to compare arm64 "4K" vs "4K with mTHP" vs "4K with _only_ mTHP" vs "64K" vs "64K with mTHP". I don't understand. What exactl...
{ "author": "Pedro Falcato <pfalcato@suse.de>", "date": "Fri, 20 Feb 2026 13:37:58 +0000", "is_openbsd": false, "thread_id": "CAC_TJvcvybHqVAV8nAHEvN-UXUQ5hMjZx+_b2W3MY=xgqR9=6w@mail.gmail.com.mbox.gz" }
lkml_critique
linux-mm
Hi everyone, We propose per-process page size on arm64. Although the proposal is for arm64, perhaps the concept can be extended to other arches, thus the generic topic name. ------------- INTRODUCTION ------------- While mTHP has brought the performance of many workloads running on an arm64 4K kernel closer to that o...
null
null
null
[LSF/MM/BPF TOPIC] Per-process page size
On 2/20/26 05:49, Matthew Wilcox wrote: Yes! Well, let's say many mmaped files are not written to. :) You'd assume many files to either get mmaped or read/written, yes. Is there some other way for someone to block a page from getting evicted from the pagecache? We have this memfd_pin_folios() thing, but I don'...
{ "author": "\"David Hildenbrand (Arm)\" <david@kernel.org>", "date": "Fri, 20 Feb 2026 17:50:44 +0100", "is_openbsd": false, "thread_id": "CAC_TJvcvybHqVAV8nAHEvN-UXUQ5hMjZx+_b2W3MY=xgqR9=6w@mail.gmail.com.mbox.gz" }
lkml_critique
linux-mm
Hi everyone, We propose per-process page size on arm64. Although the proposal is for arm64, perhaps the concept can be extended to other arches, thus the generic topic name. ------------- INTRODUCTION ------------- While mTHP has brought the performance of many workloads running on an arm64 4K kernel closer to that o...
null
null
null
[LSF/MM/BPF TOPIC] Per-process page size
The mm->pgd will be the software pagetable. So suppose that do_anonymous_page is doing set_ptes on the PTE table belonging to the software pagetable. We will hook a "native_set_ptes" into set_ptes, which will set the ptes on a different pagetable maintained by arm64 code (probably mm_context_t->native_pgd). I didn't ...
{ "author": "Dev Jain <dev.jain@arm.com>", "date": "Mon, 23 Feb 2026 10:37:55 +0530", "is_openbsd": false, "thread_id": "CAC_TJvcvybHqVAV8nAHEvN-UXUQ5hMjZx+_b2W3MY=xgqR9=6w@mail.gmail.com.mbox.gz" }
lkml_critique
linux-mm
Hi everyone, We propose per-process page size on arm64. Although the proposal is for arm64, perhaps the concept can be extended to other arches, thus the generic topic name. ------------- INTRODUCTION ------------- While mTHP has brought the performance of many workloads running on an arm64 4K kernel closer to that o...
null
null
null
[LSF/MM/BPF TOPIC] Per-process page size
On Mon, Feb 23, 2026 at 10:37:55AM +0530, Dev Jain wrote: Traditionally, you do this kind of funky manipulation in update_mmu_cache. But this is still an extremely complex and invasive change (that I assume most people would not like to see) with dubious benefit. I'm not talking about CPU runtime efficiency, but me...
{ "author": "Pedro Falcato <pfalcato@suse.de>", "date": "Mon, 23 Feb 2026 12:49:18 +0000", "is_openbsd": false, "thread_id": "CAC_TJvcvybHqVAV8nAHEvN-UXUQ5hMjZx+_b2W3MY=xgqR9=6w@mail.gmail.com.mbox.gz" }
On 2/23/26 13:49, Pedro Falcato wrote: I've been thinking about building the 64k page tables similar to how HMM/KVM handles it, invalidating them through mmu notifiers etc and building them on demand. Considering the 64k MMU of a process just like a special device that builds its own page tables. This way, they c...
{ "author": "\"David Hildenbrand (Arm)\" <david@kernel.org>", "date": "Mon, 23 Feb 2026 14:01:53 +0100", "is_openbsd": false, "thread_id": "CAC_TJvcvybHqVAV8nAHEvN-UXUQ5hMjZx+_b2W3MY=xgqR9=6w@mail.gmail.com.mbox.gz" }
On Fri 20-02-26 17:50:44, David Hildenbrand (Arm) via Lsf-pc wrote: Standard splice copies data first (it's using standard IO callbacks such as ->read_iter) so that doesn't pin page cache AFAICT. Only vmsplice(2) does but that requires mmap. Honza -- Jan Kara <jack@suse.com> SUSE Labs, CR
{ "author": "Jan Kara <jack@suse.cz>", "date": "Mon, 23 Feb 2026 14:02:52 +0100", "is_openbsd": false, "thread_id": "CAC_TJvcvybHqVAV8nAHEvN-UXUQ5hMjZx+_b2W3MY=xgqR9=6w@mail.gmail.com.mbox.gz" }
On Mon, Feb 23, 2026 at 10:37:55AM +0530, Dev Jain wrote: this goes over 80 columns so much and so often, it's painful to read. so i didn't.
{ "author": "Matthew Wilcox <willy@infradead.org>", "date": "Mon, 23 Feb 2026 15:18:42 +0000", "is_openbsd": false, "thread_id": "CAC_TJvcvybHqVAV8nAHEvN-UXUQ5hMjZx+_b2W3MY=xgqR9=6w@mail.gmail.com.mbox.gz" }
On 2/23/26 16:18, Matthew Wilcox wrote: I just found out that Thunderbird was lying to me the whole time. If you're using "Toggle Line Wrap" plugin you might think that mails are properly wrapped, you know, like *they are displayed*. And even lore displays them properly. But in the back, Thunderbird set "format=flow...
{ "author": "\"David Hildenbrand (Arm)\" <david@kernel.org>", "date": "Mon, 23 Feb 2026 17:28:41 +0100", "is_openbsd": false, "thread_id": "CAC_TJvcvybHqVAV8nAHEvN-UXUQ5hMjZx+_b2W3MY=xgqR9=6w@mail.gmail.com.mbox.gz" }
On 23/02/26 9:58 pm, David Hildenbrand (Arm) wrote: Thanks for letting me know about Toggle Line Wrap. This works, along with mailnews.wraplength. If I set this to 0, which is what email-clients.rst suggests, it doesn't work. Thunderbird is confusing.
{ "author": "Dev Jain <dev.jain@arm.com>", "date": "Tue, 24 Feb 2026 10:02:25 +0530", "is_openbsd": false, "thread_id": "CAC_TJvcvybHqVAV8nAHEvN-UXUQ5hMjZx+_b2W3MY=xgqR9=6w@mail.gmail.com.mbox.gz" }
On Tue, Feb 17, 2026 at 6:50 AM Dev Jain <dev.jain@arm.com> wrote: Hi Dev, Ryan, I'd be very interested in joining this discussion at LSF/MM. On Android, we have a separate but very related use case: we emulate a larger userspace page size on x86, primarily to allow app developers to test their apps for 16KB compati...
{ "author": "Kalesh Singh <kaleshsingh@google.com>", "date": "Wed, 25 Feb 2026 23:40:35 -0800", "is_openbsd": false, "thread_id": "CAC_TJvcvybHqVAV8nAHEvN-UXUQ5hMjZx+_b2W3MY=xgqR9=6w@mail.gmail.com.mbox.gz" }
On 26/02/26 1:10 pm, Kalesh Singh wrote: Thanks Kalesh for your interest! You did mention in the other email the links below, and I went ahead to compare :) I was puzzled to see some sort of VMA padding approach in your patches. OTOH our approach pads anonymous pages. So for example, if a 64K process maps a 12K size...
{ "author": "Dev Jain <dev.jain@arm.com>", "date": "Thu, 26 Feb 2026 14:15:13 +0530", "is_openbsd": false, "thread_id": "CAC_TJvcvybHqVAV8nAHEvN-UXUQ5hMjZx+_b2W3MY=xgqR9=6w@mail.gmail.com.mbox.gz" }
On Thu, Feb 26, 2026 at 12:45 AM Dev Jain <dev.jain@arm.com> wrote: Ah, the VMA padding patches you saw are actually for a different feature. To handle the file mapping overhang, we currently insert a separate anonymous VMA to cover the remainder of the emulated page range. Though I think your approach of returning VM...
{ "author": "Kalesh Singh <kaleshsingh@google.com>", "date": "Thu, 26 Feb 2026 21:11:22 -0800", "is_openbsd": false, "thread_id": "CAC_TJvcvybHqVAV8nAHEvN-UXUQ5hMjZx+_b2W3MY=xgqR9=6w@mail.gmail.com.mbox.gz" }
When alloc_slab_obj_exts() is called later in time (instead of at slab allocation & initialization step), slab->stride and slab->obj_exts are set when the slab is already accessible by multiple CPUs. The current implementation does not enforce memory ordering between slab->stride and slab->obj_exts. However, for corre...
[PATCH] mm/slab: initialize slab->stride early to avoid memory ordering issues
On Mon, Feb 23, 2026 at 04:58:09PM +0900, Harry Yoo wrote: Vlastimil, could you please update the changelog when applying this to the tree? I think this also explains [3] (thanks for raising it off-list, Vlastimil!): When alloc_slab_obj_exts() is called later (instead of during slab allocation and initialization), sl...
{ "author": "Harry Yoo <harry.yoo@oracle.com>", "date": "Mon, 23 Feb 2026 20:44:48 +0900", "is_openbsd": false, "thread_id": "69961f16-3c2e-4734-9ddf-2d406a57c7d1@linux.ibm.com.mbox.gz" }
On Mon, Feb 23, 2026 at 04:58:09PM +0900, Harry Yoo wrote: Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
{ "author": "Shakeel Butt <shakeel.butt@linux.dev>", "date": "Mon, 23 Feb 2026 12:23:48 -0800", "is_openbsd": false, "thread_id": "69961f16-3c2e-4734-9ddf-2d406a57c7d1@linux.ibm.com.mbox.gz" }
On 23/02/26 1:28 pm, Harry Yoo wrote: Thanks for the patch. I ran the complete test suite, and unfortunately the issue is still reproducing. I applied this patch on the mainline repo for testing. Traces: [ 9316.514161] BUG: Kernel NULL pointer dereference on read at 0x00000000 [ 9316.514169] Faulting instruction address: 0...
{ "author": "Venkat Rao Bagalkote <venkat88@linux.ibm.com>", "date": "Tue, 24 Feb 2026 14:34:41 +0530", "is_openbsd": false, "thread_id": "69961f16-3c2e-4734-9ddf-2d406a57c7d1@linux.ibm.com.mbox.gz" }
On Tue, Feb 24, 2026 at 02:34:41PM +0530, Venkat Rao Bagalkote wrote: Oops, thanks for confirming that it's still reproduced! That's really helpful. Perhaps I should start considering cases where it's not a memory ordering issue, but let's check one more thing before moving on. could you please test if it still repro...
{ "author": "Harry Yoo <harry.yoo@oracle.com>", "date": "Tue, 24 Feb 2026 20:10:18 +0900", "is_openbsd": false, "thread_id": "69961f16-3c2e-4734-9ddf-2d406a57c7d1@linux.ibm.com.mbox.gz" }
On 24/02/26 4:40 pm, Harry Yoo wrote: With this patch, the issue is not reproduced, so it looks good. Regards, Venkat.
{ "author": "Venkat Rao Bagalkote <venkat88@linux.ibm.com>", "date": "Wed, 25 Feb 2026 14:44:24 +0530", "is_openbsd": false, "thread_id": "69961f16-3c2e-4734-9ddf-2d406a57c7d1@linux.ibm.com.mbox.gz" }
On Wed, Feb 25, 2026 at 02:44:24PM +0530, Venkat Rao Bagalkote wrote: [...] Thanks a lot, Venkat! That's really helpful. I think that's enough signal to assume that memory ordering is playing a role here, unless it happens to be masking another issue. Even so, it's important to enforce the ordering anyway. But hav...
{ "author": "Harry Yoo <harry.yoo@oracle.com>", "date": "Wed, 25 Feb 2026 19:15:19 +0900", "is_openbsd": false, "thread_id": "69961f16-3c2e-4734-9ddf-2d406a57c7d1@linux.ibm.com.mbox.gz" }
Hi Venkat, could you please help testing this patch and check if it hits any warning? It's based on v7.0-rc1 tag. This (hopefully) should give us more information that would help debugging the issue. 1. set stride early in alloc_slab_obj_exts_early() 2. move some obj_exts helpers to slab.h 3. in slab_obj_ext(), check...
{ "author": "Harry Yoo <harry.yoo@oracle.com>", "date": "Fri, 27 Feb 2026 12:07:33 +0900", "is_openbsd": false, "thread_id": "69961f16-3c2e-4734-9ddf-2d406a57c7d1@linux.ibm.com.mbox.gz" }
Hi Harry, kernel test robot noticed the following build errors: [auto build test ERROR on akpm-mm/mm-everything] url: https://github.com/intel-lab-lkp/linux/commits/Harry-Yoo/mm-slab-a-debug-patch-to-investigate-the-issue-further/20260227-111246 base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git ...
{ "author": "kernel test robot <lkp@intel.com>", "date": "Fri, 27 Feb 2026 13:52:18 +0800", "is_openbsd": false, "thread_id": "69961f16-3c2e-4734-9ddf-2d406a57c7d1@linux.ibm.com.mbox.gz" }
Hi Harry, kernel test robot noticed the following build errors: [auto build test ERROR on akpm-mm/mm-everything] url: https://github.com/intel-lab-lkp/linux/commits/Harry-Yoo/mm-slab-a-debug-patch-to-investigate-the-issue-further/20260227-111246 base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git ...
{ "author": "kernel test robot <lkp@intel.com>", "date": "Fri, 27 Feb 2026 14:02:59 +0800", "is_openbsd": false, "thread_id": "69961f16-3c2e-4734-9ddf-2d406a57c7d1@linux.ibm.com.mbox.gz" }
On 27/02/26 8:37 am, Harry Yoo wrote: Hello Harry, I’ve restarted the test, but there are continuous warning prints in the logs, and they appear to be slowing down the test run significantly. Warnings: [ 3215.419760] obj_ext in object [ 3215.419774] WARNING: mm/slab.h:710 at slab_obj_ext+0x2e0/0x338, CPU#26: gr...
{ "author": "Venkat Rao Bagalkote <venkat88@linux.ibm.com>", "date": "Fri, 27 Feb 2026 13:32:29 +0530", "is_openbsd": false, "thread_id": "69961f16-3c2e-4734-9ddf-2d406a57c7d1@linux.ibm.com.mbox.gz" }
On Fri, Feb 27, 2026 at 01:32:29PM +0530, Venkat Rao Bagalkote wrote: Hello Venkat! Thanks :) It's okay! the purpose of this patch is to see if there's any warning hitting, rather than triggering the kernel crash. The patch adds five different warnings: 1) "obj_exts array in leftover" 2) "obj_ext in object" 3) ...
{ "author": "Harry Yoo <harry.yoo@oracle.com>", "date": "Fri, 27 Feb 2026 17:11:53 +0900", "is_openbsd": false, "thread_id": "69961f16-3c2e-4734-9ddf-2d406a57c7d1@linux.ibm.com.mbox.gz" }
On 27/02/26 1:41 pm, Harry Yoo wrote: I’m continuing to see only warning (2) – “obj_ext in object”, but it is being triggered from multiple different callers. So far I have observed the warning originating from the following call paths: kfree → seq_release_private → proc_map_release → __fput kfree → seq_release...
{ "author": "Venkat Rao Bagalkote <venkat88@linux.ibm.com>", "date": "Fri, 27 Feb 2026 15:06:08 +0530", "is_openbsd": false, "thread_id": "69961f16-3c2e-4734-9ddf-2d406a57c7d1@linux.ibm.com.mbox.gz" }
Hello Vlastimil and MM guys, The SLUB "sheaves" series merged via 815c8e35511d ("Merge branch 'slab/for-7.0/sheaves' into slab/for-next") introduces a severe performance regression for workloads with persistent cross-CPU alloc/free patterns. ublk null target benchmark IOPS drops significantly compared to v6.19: from ~...
[Regression] mm:slab/sheaves: severe performance regression in cross-CPU slab allocation
On Tue, Feb 24, 2026 at 10:52:28AM +0800, Ming Lei wrote: Hi Ming, thanks for the report! Ouch. Why did it crash? Thanks for such detailed steps to reproduce :) That's pretty severe contention. Interestingly, the profile shows a severe contention on the alloc path, but I don't see the free path here. wondering why...
{ "author": "Harry Yoo <harry.yoo@oracle.com>", "date": "Tue, 24 Feb 2026 14:00:15 +0900", "is_openbsd": false, "thread_id": "aaFinIsCmitHSP_c@fedora.mbox.gz" }
On Tue, Feb 24, 2026 at 10:52:28AM +0800, Ming Lei wrote: Thanks for testing. As Harry said, this is odd. Could you post crash logs? Based on my earlier test results, this performance regression (more precisely, I suspect it is an expected return to the previous baseline - see below) should have been introduced by...
{ "author": "Hao Li <hao.li@linux.dev>", "date": "Tue, 24 Feb 2026 14:51:26 +0800", "is_openbsd": false, "thread_id": "aaFinIsCmitHSP_c@fedora.mbox.gz" }
On Tue, Feb 24, 2026 at 02:51:26PM +0800, Hao Li wrote: There's one difference here; you used will-it-scale mmap2 test case that involves maple tree node and vm_area_struct cache that already has sheaves enabled in v6.19. And Ming's benchmark stresses bio-<size> caches. Since other caches don't have sheaves in v6.19...
{ "author": "Harry Yoo <harry.yoo@oracle.com>", "date": "Tue, 24 Feb 2026 16:10:43 +0900", "is_openbsd": false, "thread_id": "aaFinIsCmitHSP_c@fedora.mbox.gz" }
On Tue, Feb 24, 2026 at 04:10:43PM +0900, Harry Yoo wrote: Oh, yes-you're right. That distinction is important! I think I've gotten a bit stuck in a fixed way of thinking... Thanks for pointing it out!
{ "author": "Hao Li <hao.li@linux.dev>", "date": "Tue, 24 Feb 2026 15:41:48 +0800", "is_openbsd": false, "thread_id": "aaFinIsCmitHSP_c@fedora.mbox.gz" }
Hi Harry, On Tue, Feb 24, 2026 at 02:00:15PM +0900, Harry Yoo wrote: [ 16.162422] Oops: general protection fault, probably for non-canonical address 0xdead000000000110: 0000 [#1] SMP NOPTI [ 16.162426] CPU: 44 UID: 0 PID: 908 Comm: (udev-worker) Not tainted 6.19.0-rc5_master+ #116 PREEMPT(lazy) [ 16.162429] Ha...
{ "author": "Ming Lei <ming.lei@redhat.com>", "date": "Tue, 24 Feb 2026 17:07:18 +0800", "is_openbsd": false, "thread_id": "aaFinIsCmitHSP_c@fedora.mbox.gz" }
On 2/24/26 3:52 AM, Ming Lei wrote: So this function is not used in the sheaf refill path, but in the fallback slowpath when alloc_from_pcs() fastpath fails. And I'd guess alloc_from_pcs() fails because in __pcs_replace_empty_main() we have gfpflags_allow_blocking() false, because mempool_alloc_noprof() tries the fi...
{ "author": "Vlastimil Babka <vbabka@suse.cz>", "date": "Tue, 24 Feb 2026 21:27:40 +0100", "is_openbsd": false, "thread_id": "aaFinIsCmitHSP_c@fedora.mbox.gz" }
On Tue, Feb 24, 2026 at 09:27:40PM +0100, Vlastimil Babka wrote: Good point. That's a very good point. I was missing that aspect. Me neither :) Probably, yes. Sounds fair. I think your point is valid. Let's give it a try. Yeah :) let's first see how it performs after addressing your point. -- Cheers, Harry...
{ "author": "Harry Yoo <harry.yoo@oracle.com>", "date": "Wed, 25 Feb 2026 14:24:51 +0900", "is_openbsd": false, "thread_id": "aaFinIsCmitHSP_c@fedora.mbox.gz" }
On Tue, Feb 24, 2026 at 05:07:18PM +0800, Ming Lei wrote: For this problem, I have a hypothesis which is inspired by a comment in the patch "slab: remove cpu (partial) slabs usage from allocation paths": /* * get a single object from the slab. This might race against __slab_free(), * which however has to take the l...
{ "author": "Hao Li <hao.li@linux.dev>", "date": "Wed, 25 Feb 2026 13:32:36 +0800", "is_openbsd": false, "thread_id": "aaFinIsCmitHSP_c@fedora.mbox.gz" }
On Wed, Feb 25, 2026 at 01:32:36PM +0800, Hao Li wrote: If CPU1 observes was_full == 1, it should spin on n->list_lock and wait for CPU0 to release the lock. And CPU0 will remove the slab from the partial list before releasing the lock. Or am I missing something? Not sure how the scenario you describe could happen: ...
{ "author": "Harry Yoo <harry.yoo@oracle.com>", "date": "Wed, 25 Feb 2026 15:54:06 +0900", "is_openbsd": false, "thread_id": "aaFinIsCmitHSP_c@fedora.mbox.gz" }
On Wed, Feb 25, 2026 at 03:54:06PM +0900, Harry Yoo wrote: In __slab_free, if was_full == 1, then the condition !(IS_ENABLED(CONFIG_SLUB_CPU_PARTIAL) && was_full) becomes false, so it won't enter the "if" block and therefore n->list_lock is not acquired. Does that sound right? -- Thanks, Hao
{ "author": "Hao Li <hao.li@linux.dev>", "date": "Wed, 25 Feb 2026 15:06:46 +0800", "is_openbsd": false, "thread_id": "aaFinIsCmitHSP_c@fedora.mbox.gz" }
On Wed, Feb 25, 2026 at 03:06:46PM +0800, Hao Li wrote: Nah, you're right. Just slipped my mind. No need to acquire the lock if it was full, because that means it's not on the partial list. Hmm... but the logic has been there for a very long time. Looks like we broke a premise for the percpu slab caching layer to work...
{ "author": "Harry Yoo <harry.yoo@oracle.com>", "date": "Wed, 25 Feb 2026 16:19:41 +0900", "is_openbsd": false, "thread_id": "aaFinIsCmitHSP_c@fedora.mbox.gz" }
On Wed, Feb 25, 2026 at 04:19:41PM +0900, Harry Yoo wrote: Exactly. Yes. I feel it's not a big issue. I think the root cause of this issue is as follows: Before this commit, get_partial_node would first remove the slab from the node list and then return the slab to the upper layer for freezing and object allocat...
{ "author": "Hao Li <hao.li@linux.dev>", "date": "Wed, 25 Feb 2026 16:19:49 +0800", "is_openbsd": false, "thread_id": "aaFinIsCmitHSP_c@fedora.mbox.gz" }
On Wed, Feb 25, 2026 at 04:19:41PM +0900, Harry Yoo wrote: "because it's not on the partial list, and SLUB is going to add it to the percpu partial slab list (to avoid acquiring the lock)" Just elaborating the analysis a bit: Hao Li (thankfully!) analyzed that there's a race condition between 1) alloc path removes...
{ "author": "Harry Yoo <harry.yoo@oracle.com>", "date": "Wed, 25 Feb 2026 17:21:01 +0900", "is_openbsd": false, "thread_id": "aaFinIsCmitHSP_c@fedora.mbox.gz" }
On Wed, Feb 25, 2026 at 04:19:49PM +0800, Hao Li wrote: Right, that's an important point. Just realized that while elaborating the analysis :), there was a race condition between you and me! Right. Right. Exactly. Hmm but if that affects the performance (by always acquiring n->list_lock), the result is probably...
{ "author": "Harry Yoo <harry.yoo@oracle.com>", "date": "Wed, 25 Feb 2026 17:41:15 +0900", "is_openbsd": false, "thread_id": "aaFinIsCmitHSP_c@fedora.mbox.gz" }
On 2/24/26 21:27, Vlastimil Babka wrote: Could you try this then, please? Thanks! ----8<---- From: "Vlastimil Babka (SUSE)" <vbabka@kernel.org> Date: Wed, 25 Feb 2026 09:40:22 +0100 Subject: [PATCH] mm/slab: allow sheaf refill if blocking is not allowed Signed-off-by: Vlastimil Babka (SUSE) <vbabka@kernel.org> --- m...
{ "author": "\"Vlastimil Babka (SUSE)\" <vbabka@kernel.org>", "date": "Wed, 25 Feb 2026 09:45:03 +0100", "is_openbsd": false, "thread_id": "aaFinIsCmitHSP_c@fedora.mbox.gz" }
On Wed, Feb 25, 2026 at 05:41:15PM +0900, Harry Yoo wrote: Haha, true race condition - we both sent emails within a minute :D Indeed. Let's look forward to the test results for Vlastimil's patch! -- Thanks, Hao
{ "author": "Hao Li <hao.li@linux.dev>", "date": "Wed, 25 Feb 2026 16:54:21 +0800", "is_openbsd": false, "thread_id": "aaFinIsCmitHSP_c@fedora.mbox.gz" }
Hi Vlastimil, On Wed, Feb 25, 2026 at 09:45:03AM +0100, Vlastimil Babka (SUSE) wrote: Thanks for working on this issue! Unfortunately the patch doesn't make a difference on IOPS in the perf test, follows the collected perf profile on linus tree(basically 7.0-rc1 with your patch): ``` 04cb971e2d28 (HEAD -> master) m...
{ "author": "Ming Lei <ming.lei@redhat.com>", "date": "Wed, 25 Feb 2026 17:31:26 +0800", "is_openbsd": false, "thread_id": "aaFinIsCmitHSP_c@fedora.mbox.gz" }
On 2/25/26 10:31, Ming Lei wrote: Hm that's weird, still the slowpath is prominent in your profile. I followed your reproducer instructions, although only with a small virtme-ng based setup. What's the output of "numactl -H" on yours, btw? Anyway what I saw is my patch raised the IOPS substantially, and with CONFIG_...
{ "author": "\"Vlastimil Babka (SUSE)\" <vbabka@kernel.org>", "date": "Wed, 25 Feb 2026 12:29:26 +0100", "is_openbsd": false, "thread_id": "aaFinIsCmitHSP_c@fedora.mbox.gz" }
On Wed, Feb 25, 2026 at 12:29:26PM +0100, Vlastimil Babka (SUSE) wrote: available: 8 nodes (0-7) node 0 cpus: 0 1 2 3 32 33 34 35 node 0 size: 0 MB node 0 free: 0 MB node 1 cpus: 4 5 6 7 36 37 38 39 node 1 size: 31906 MB node 1 free: 30572 MB node 2 cpus: 8 9 10 11 40 41 42 43 node 2 size: 0 MB node 2 free: 0 MB node ...
{ "author": "Ming Lei <ming.lei@redhat.com>", "date": "Wed, 25 Feb 2026 20:24:22 +0800", "is_openbsd": false, "thread_id": "aaFinIsCmitHSP_c@fedora.mbox.gz" }
On 2/25/26 13:24, Ming Lei wrote: Oh right, memory-less nodes, of course. Always so much fun. Yeah, no slowpath allocations from cpus that are *not* on a memoryless node. Thanks, that will help to focus what to look at.
{ "author": "\"Vlastimil Babka (SUSE)\" <vbabka@kernel.org>", "date": "Wed, 25 Feb 2026 14:22:48 +0100", "is_openbsd": false, "thread_id": "aaFinIsCmitHSP_c@fedora.mbox.gz" }
On 2/25/26 10:31, Ming Lei wrote: what about this patch in addition to the previous one? Thanks. ----8<---- From: "Vlastimil Babka (SUSE)" <vbabka@kernel.org> Date: Thu, 26 Feb 2026 18:59:56 +0100 Subject: [PATCH] mm/slab: put barn on every online node Including memoryless nodes. Signed-off-by: Vlastimil Babka (SUS...
{ "author": "\"Vlastimil Babka (SUSE)\" <vbabka@kernel.org>", "date": "Thu, 26 Feb 2026 19:02:11 +0100", "is_openbsd": false, "thread_id": "aaFinIsCmitHSP_c@fedora.mbox.gz" }
On Thu, Feb 26, 2026 at 07:02:11PM +0100, Vlastimil Babka (SUSE) wrote: With the two patches, IOPS increases to 22M from 13M, but still much less than 36M which is obtained in v6.19-rc5, and slab-sheave PR follows v6.19-rc5. Also alloc_slowpath can't be observed any more. Follows perf profile with the two patches: ...
{ "author": "Ming Lei <ming.lei@redhat.com>", "date": "Fri, 27 Feb 2026 17:23:40 +0800", "is_openbsd": false, "thread_id": "aaFinIsCmitHSP_c@fedora.mbox.gz" }
Hello, SLAB, LKMM, and KCSAN folks! I'd like to discuss slab's assumption on users regarding memory ordering. Recently, I've been investigating an interesting slab memory ordering issue [3] [4] in v7.0-rc1, which made me think about memory ordering for slab objects. But without answering "What does slab expect users...
[BUG] Memory ordering between kmalloc() and kfree()? it's confusing!
On Thu, Feb 26, 2026 at 03:35:08PM +0900, Harry Yoo wrote: It doesn't? Then how does the slab allocator guarantee that two different CPUs won't try to perform allocations or deallocations from the same slab at the same time, messing everything up? Can you explain how this is meant to work, for those of us who don'...
{ "author": "Alan Stern <stern@rowland.harvard.edu>", "date": "Thu, 26 Feb 2026 10:45:55 -0500", "is_openbsd": false, "thread_id": "463aaac5-d36d-46b6-91c3-b62994f0528d@gmail.com.mbox.gz" }
On Thu, Feb 26, 2026 at 10:45:55AM -0500, Alan Stern wrote: Ah, alloc/free slowpaths do use cmpxchg128 or spinlock and don't mess things up. But fastpath allocs/frees are served from percpu array that is protected by a local_lock. local_lock has a compiler barrier in it, but that's not enough. -- Cheers, Harry / H...
{ "author": "Harry Yoo <harry.yoo@oracle.com>", "date": "Fri, 27 Feb 2026 01:17:52 +0900", "is_openbsd": false, "thread_id": "463aaac5-d36d-46b6-91c3-b62994f0528d@gmail.com.mbox.gz" }
On Fri, Feb 27, 2026 at 01:17:52AM +0900, Harry Yoo wrote: If those things rely on a percpu array, how can one CPU possibly manipulate a resource (slab or something else) that was changed by a different CPU? The whole point of percpu data structures is that each CPU gets its own copy. Alan Stern
{ "author": "Alan Stern <stern@rowland.harvard.edu>", "date": "Thu, 26 Feb 2026 11:42:02 -0500", "is_openbsd": false, "thread_id": "463aaac5-d36d-46b6-91c3-b62994f0528d@gmail.com.mbox.gz" }
On Thu, Feb 26, 2026 at 11:42:02AM -0500, Alan Stern wrote: AFAICT that shouldn't happen within the slab allocator. Exactly. But I'm not talking about what happens within the allocator, but rather, about what slab expects to happen outside the allocator. Something like this: CPU X CPU Y ptr = kmalloc(); WRITE_...
{ "author": "Harry Yoo <harry.yoo@oracle.com>", "date": "Fri, 27 Feb 2026 02:11:49 +0900", "is_openbsd": false, "thread_id": "463aaac5-d36d-46b6-91c3-b62994f0528d@gmail.com.mbox.gz" }
On Fri, Feb 27, 2026 at 02:11:49AM +0900, Harry Yoo wrote: I understand. Yes, you have made that quite clear. But you're missing _my_ point. Which is: The same mechanism that the slab allocator uses to ensure that CPU X and CPU Y won't step on each other's toes if they both run kmalloc/kfree at the same time sho...
{ "author": "Alan Stern <stern@rowland.harvard.edu>", "date": "Thu, 26 Feb 2026 13:06:51 -0500", "is_openbsd": false, "thread_id": "463aaac5-d36d-46b6-91c3-b62994f0528d@gmail.com.mbox.gz" }
On Fri, 27 Feb 2026, Harry Yoo wrote: Well if objects are coming from different folios then that is an issue. The prior slub approach had no per cpu linked lists and restricted allocations to the objects of a single page that was only used by a specific cpu. locks were used when that page changed. There was no need ...
{ "author": "\"Christoph Lameter (Ampere)\" <cl@gentwo.org>", "date": "Thu, 26 Feb 2026 09:59:26 -0800 (PST)", "is_openbsd": false, "thread_id": "463aaac5-d36d-46b6-91c3-b62994f0528d@gmail.com.mbox.gz" }
On Fri, Feb 27, 2026 at 01:17:52AM +0900, Harry Yoo wrote: Hmm, this memory-ordering issue is indeed pretty mind-bending. I'd like to share a few thoughts as well. Happy to be corrected! For our current problem, I think the key lies in the relative ordering between the two variables, stride and obj_exts. To address i...
{ "author": "Hao Li <hao.li@linux.dev>", "date": "Fri, 27 Feb 2026 16:06:37 +0800", "is_openbsd": false, "thread_id": "463aaac5-d36d-46b6-91c3-b62994f0528d@gmail.com.mbox.gz" }
On Fri, Feb 27, 2026 at 04:06:37PM +0800, Hao Li wrote: Yeah, it's indeed confusing :) Yes, that's a somewhat expensive way to avoid the problem by enforcing ordering between these two variables. While obj_exts still can be set concurrently (via cmpxchg()), if we set stride very early during slab initialization, by...
{ "author": "Harry Yoo <harry.yoo@oracle.com>", "date": "Fri, 27 Feb 2026 18:03:23 +0900", "is_openbsd": false, "thread_id": "463aaac5-d36d-46b6-91c3-b62994f0528d@gmail.com.mbox.gz" }
Hi, A comment from a LKMM reviewer. On Thu, 26 Feb 2026 15:35:08 +0900, Harry Yoo wrote: So in [7], you moved slab_set_stride() before the store to slab->obj_exts. Both of them are done without any markers for racy memory accesses. Moving plain stores around in the C source code has no effect WRT the ordering...
{ "author": "Akira Yokosawa <akiyks@gmail.com>", "date": "Fri, 27 Feb 2026 18:14:24 +0900", "is_openbsd": false, "thread_id": "463aaac5-d36d-46b6-91c3-b62994f0528d@gmail.com.mbox.gz" }
On Thu, Feb 26, 2026 at 01:06:51PM -0500, Alan Stern wrote: Okay. Looks like I misread your point... Within the slab allocator, I believe there are sufficient mechanisms (either spinlock or cmpxchg) to prevent CPUs from interfering with each other. My earlier statement "Because the slab allocator itself doesn't gu...
{ "author": "Harry Yoo <harry.yoo@oracle.com>", "date": "Fri, 27 Feb 2026 21:36:37 +0900", "is_openbsd": false, "thread_id": "463aaac5-d36d-46b6-91c3-b62994f0528d@gmail.com.mbox.gz" }
From: Khalid Aziz <khalid@kernel.org> Users of mshare need to know the size and alignment requirement for shared regions. Pre-populate msharefs with a file, mshare_info, that provides this information. For now, pagetable sharing is hardcoded to be at the PUD level. Signed-off-by: Khalid Aziz <khalid@kernel.org> Signe...
[PATCH v3 02/22] mm/mshare: pre-populate msharefs with information file
From: Khalid Aziz <khalid@kernel.org> Add a pseudo filesystem that contains files and page table sharing information that enables processes to share page table entries. This patch adds the basic filesystem that can be mounted, a CONFIG_MSHARE option to enable the feature, and documentation. Signed-off-by: Khalid Aziz...
{ "author": "Anthony Yznaga <anthony.yznaga@oracle.com>", "date": "Tue, 19 Aug 2025 18:03:54 -0700", "is_openbsd": false, "thread_id": "CAC_TJvdC+CSqvx+BvOv4gO2mJbwiBhb6OZO0sx=GXQ0CmA853g@mail.gmail.com.mbox.gz" }
When a new file is created under msharefs, allocate a new mm_struct to be associated with it for the lifetime of the file. The mm_struct will hold the VMAs and pagetables for the mshare region the file represents. Signed-off-by: Khalid Aziz <khalid@kernel.org> Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com> ...
{ "author": "Anthony Yznaga <anthony.yznaga@oracle.com>", "date": "Tue, 19 Aug 2025 18:03:57 -0700", "is_openbsd": false, "thread_id": "CAC_TJvdC+CSqvx+BvOv4gO2mJbwiBhb6OZO0sx=GXQ0CmA853g@mail.gmail.com.mbox.gz" }
Add file and inode operations to allow the size of an mshare region to be set via fallocate() or ftruncate(). Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com> --- mm/mshare.c | 87 ++++++++++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 86 insertions(+), 1 deletion(-) diff --git a/mm/mshare.c b/mm...
{ "author": "Anthony Yznaga <anthony.yznaga@oracle.com>", "date": "Tue, 19 Aug 2025 18:03:58 -0700", "is_openbsd": false, "thread_id": "CAC_TJvdC+CSqvx+BvOv4gO2mJbwiBhb6OZO0sx=GXQ0CmA853g@mail.gmail.com.mbox.gz" }
From: Khalid Aziz <khalid@kernel.org> An mshare region contains zero or more actual vmas that map objects in the mshare range with shared page tables. Signed-off-by: Khalid Aziz <khalid@kernel.org> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com> -...
{ "author": "Anthony Yznaga <anthony.yznaga@oracle.com>", "date": "Tue, 19 Aug 2025 18:03:59 -0700", "is_openbsd": false, "thread_id": "CAC_TJvdC+CSqvx+BvOv4gO2mJbwiBhb6OZO0sx=GXQ0CmA853g@mail.gmail.com.mbox.gz" }
From: Khalid Aziz <khalid@kernel.org> Add support for mapping an mshare region into a process after the region has been established in msharefs. Disallow operations that could split the resulting msharefs vma such as partial unmaps and protection changes. Fault handling, mapping, unmapping, and protection changes for ...
{ "author": "Anthony Yznaga <anthony.yznaga@oracle.com>", "date": "Tue, 19 Aug 2025 18:04:00 -0700", "is_openbsd": false, "thread_id": "CAC_TJvdC+CSqvx+BvOv4gO2mJbwiBhb6OZO0sx=GXQ0CmA853g@mail.gmail.com.mbox.gz" }
Scanning an msharefs vma results in changes to the shared page table but with TLB flushes incorrectly only going to the process with the vma. Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com> --- kernel/sched/fair.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/kernel/sched/fair.c b/ke...
{ "author": "Anthony Yznaga <anthony.yznaga@oracle.com>", "date": "Tue, 19 Aug 2025 18:04:02 -0700", "is_openbsd": false, "thread_id": "CAC_TJvdC+CSqvx+BvOv4gO2mJbwiBhb6OZO0sx=GXQ0CmA853g@mail.gmail.com.mbox.gz" }
This will be used to support mshare functionality where the read lock on an mshare host mm is taken while holding the lock on a process mm. Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com> --- include/linux/mmap_lock.h | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/include/linux/mmap_lock.h b/inc...
{ "author": "Anthony Yznaga <anthony.yznaga@oracle.com>", "date": "Tue, 19 Aug 2025 18:04:03 -0700", "is_openbsd": false, "thread_id": "CAC_TJvdC+CSqvx+BvOv4gO2mJbwiBhb6OZO0sx=GXQ0CmA853g@mail.gmail.com.mbox.gz" }
Special handling is needed when unmapping a hugetlb vma and will be needed when unmapping an msharefs vma once support is added for handling faults in an mshare region. Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com> --- include/linux/mm.h | 10 ++++++++++ ipc/shm.c | 17 +++++++++++++++++ mm/huget...
{ "author": "Anthony Yznaga <anthony.yznaga@oracle.com>", "date": "Tue, 19 Aug 2025 18:04:04 -0700", "is_openbsd": false, "thread_id": "CAC_TJvdC+CSqvx+BvOv4gO2mJbwiBhb6OZO0sx=GXQ0CmA853g@mail.gmail.com.mbox.gz" }
Once an mshare shared page table has been linked with one or more process page tables it becomes necessary to ensure that the shared page table is not completely freed when objects in it are unmapped in order to avoid a potential UAF bug. To do this, introduce and use a reference count for PUD pages. Signed-off-by: An...
{ "author": "Anthony Yznaga <anthony.yznaga@oracle.com>", "date": "Tue, 19 Aug 2025 18:04:05 -0700", "is_openbsd": false, "thread_id": "CAC_TJvdC+CSqvx+BvOv4gO2mJbwiBhb6OZO0sx=GXQ0CmA853g@mail.gmail.com.mbox.gz" }
Unlike the mm of a task, an mshare host mm is not updated on context switch. In particular this means that mm_cpumask is never updated which results in TLB flushes for updates to mshare PTEs only being done on the local CPU. To ensure entries are flushed for non-local TLBs, set up an mmu notifier on the mshare mm and u...
{ "author": "Anthony Yznaga <anthony.yznaga@oracle.com>", "date": "Tue, 19 Aug 2025 18:04:01 -0700", "is_openbsd": false, "thread_id": "CAC_TJvdC+CSqvx+BvOv4gO2mJbwiBhb6OZO0sx=GXQ0CmA853g@mail.gmail.com.mbox.gz" }
Enable x86 support for handling page faults in an mshare region by redirecting page faults to operate on the mshare mm_struct and vmas contained in it. Some permission checks are done using vma flags in architecture-specific fault handling code so the actual vma needed to complete the handling is acquired before callin...
{ "author": "Anthony Yznaga <anthony.yznaga@oracle.com>", "date": "Tue, 19 Aug 2025 18:04:07 -0700", "is_openbsd": false, "thread_id": "CAC_TJvdC+CSqvx+BvOv4gO2mJbwiBhb6OZO0sx=GXQ0CmA853g@mail.gmail.com.mbox.gz" }
In preparation for mapping objects into an mshare region, create __do_mmap() to allow mapping into a specified mm. There are no functional changes otherwise. Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com> --- include/linux/mm.h | 16 ++++++++++++++++ mm/mmap.c | 10 +++++----- mm/vma.c |...
{ "author": "Anthony Yznaga <anthony.yznaga@oracle.com>", "date": "Tue, 19 Aug 2025 18:04:08 -0700", "is_openbsd": false, "thread_id": "CAC_TJvdC+CSqvx+BvOv4gO2mJbwiBhb6OZO0sx=GXQ0CmA853g@mail.gmail.com.mbox.gz" }
Allow unmap to work with an mshare host mm. Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com> --- mm/vma.c | 10 ++++++---- mm/vma.h | 1 + 2 files changed, 7 insertions(+), 4 deletions(-) diff --git a/mm/vma.c b/mm/vma.c index a7fbd339d259..c09b2e1a08e6 100644 --- a/mm/vma.c +++ b/mm/vma.c @@ -1265,7 +1265...
{ "author": "Anthony Yznaga <anthony.yznaga@oracle.com>", "date": "Tue, 19 Aug 2025 18:04:09 -0700", "is_openbsd": false, "thread_id": "CAC_TJvdC+CSqvx+BvOv4gO2mJbwiBhb6OZO0sx=GXQ0CmA853g@mail.gmail.com.mbox.gz" }