data_type    large_string    3 distinct values
source       large_string    29 distinct values
code         large_string    lengths 98 to 49.4M
filepath     large_string    lengths 5 to 161
message      large_string    234 distinct values
commit       large_string    234 distinct values
subject      large_string    418 distinct values
critique     large_string    lengths 101 to 1.26M
metadata     dict
lkml_critique
linux-mm
Hi, The recent introduction of heaps in the optee driver [1] made possible the creation of heaps as modules. It's generally a good idea if possible, including for the already existing system and CMA heaps. The system one is pretty trivial, the CMA one is a bit more involved, especially since we have a call from kern...
null
null
null
[PATCH 0/7] dma-buf: heaps: Turn heaps into modules
Hi John, On Thu, Feb 26, 2026 at 10:03:21AM -0800, John Stultz wrote: Understood, thanks :) It looks like there are some people interested in doing what you described though, so we might need your patch still. Maxime
{ "author": "Maxime Ripard <mripard@kernel.org>", "date": "Fri, 27 Feb 2026 14:30:29 +0100", "is_openbsd": false, "thread_id": "20260227-psychedelic-tireless-herring-0adfa9@houat.mbox.gz" }
lkml_critique
linux-mm
Hi All, Apologies I forgot to add the proper tag in the previous email so resending this. MGLRU has been introduced in the mainline for years, but we still have two LRUs today. There are many reasons MGLRU is still not the only LRU implementation in the kernel. And I've been looking at a few major issues here: 1. P...
null
null
null
[LSF/MM/BPF TOPIC] Improving MGLRU
On Fri, Feb 20, 2026 at 01:25:33AM +0800, Kairui Song wrote: I would be very interested in discussing this topic as well. Are you referring to refaults on the page cache side, or swapins? Last time we evaluated MGLRU on Meta workloads, we noticed that it tends to do better with zswap, but worse with disk swap. It s...
{ "author": "Johannes Weiner <hannes@cmpxchg.org>", "date": "Fri, 20 Feb 2026 13:24:26 -0500", "is_openbsd": false, "thread_id": "f4c9b715-be7a-4587-90fa-97f6b72938eb@gmail.com.mbox.gz" }
lkml_critique
linux-mm
Hi All, Apologies I forgot to add the proper tag in the previous email so resending this. MGLRU has been introduced in the mainline for years, but we still have two LRUs today. There are many reasons MGLRU is still not the only LRU implementation in the kernel. And I've been looking at a few major issues here: 1. P...
null
null
null
[LSF/MM/BPF TOPIC] Improving MGLRU
On Sat, Feb 21, 2026 at 2:24 AM Johannes Weiner <hannes@cmpxchg.org> wrote: Thanks, glad to hear that! A bit more than that. When there is no swap, MGLRU still performs worse in some workloads like MongoDB. From what I've noticed that's because the PID protection is a bit too passive, and there is a force protection...
{ "author": "Kairui Song <ryncsn@gmail.com>", "date": "Sat, 21 Feb 2026 14:03:43 +0800", "is_openbsd": false, "thread_id": "f4c9b715-be7a-4587-90fa-97f6b72938eb@gmail.com.mbox.gz" }
lkml_critique
linux-mm
Hi All, Apologies I forgot to add the proper tag in the previous email so resending this. MGLRU has been introduced in the mainline for years, but we still have two LRUs today. There are many reasons MGLRU is still not the only LRU implementation in the kernel. And I've been looking at a few major issues here: 1. P...
null
null
null
[LSF/MM/BPF TOPIC] Improving MGLRU
On Thu, Feb 19, 2026 at 9:26 AM Kairui Song <ryncsn@gmail.com> wrote: Hi Kairui, I would be very interested in joining this discussion at LSF/MM. We use MGLRU on Android. While the reduced CPU usage leads to power improvements for mobile devices, we've run into a few notable issues as well. Off the top of my head: ...
{ "author": "Kalesh Singh <kaleshsingh@google.com>", "date": "Wed, 25 Feb 2026 17:55:01 -0800", "is_openbsd": false, "thread_id": "f4c9b715-be7a-4587-90fa-97f6b72938eb@gmail.com.mbox.gz" }
lkml_critique
linux-mm
Hi All, Apologies I forgot to add the proper tag in the previous email so resending this. MGLRU has been introduced in the mainline for years, but we still have two LRUs today. There are many reasons MGLRU is still not the only LRU implementation in the kernel. And I've been looking at a few major issues here: 1. P...
null
null
null
[LSF/MM/BPF TOPIC] Improving MGLRU
On Thu, Feb 26, 2026 at 9:55 AM Kalesh Singh <kaleshsingh@google.com> wrote: Hi Kalesh, Glad to discuss this with you. Yes, this is one of the main issues for us too. Per our observation, one cause for that is MGLRU's usage of flags like PG_workingset is different from active / inactive LRU, and flags like the PG_wo...
{ "author": "Kairui Song <ryncsn@gmail.com>", "date": "Thu, 26 Feb 2026 11:06:46 +0800", "is_openbsd": false, "thread_id": "f4c9b715-be7a-4587-90fa-97f6b72938eb@gmail.com.mbox.gz" }
lkml_critique
linux-mm
Hi All, Apologies I forgot to add the proper tag in the previous email so resending this. MGLRU has been introduced in the mainline for years, but we still have two LRUs today. There are many reasons MGLRU is still not the only LRU implementation in the kernel. And I've been looking at a few major issues here: 1. P...
null
null
null
[LSF/MM/BPF TOPIC] Improving MGLRU
Hi Kairui, hi Kalesh, Yes, we’re interested in this work. We see file pages being under-protected in smartphone workloads, and an LFU-like approach sounds promising to better promote and protect hot file pages. Kairui has shared the patches; we’ll backport them to our tree and report back once we have results f...
{ "author": "wangzicheng <wangzicheng@honor.com>", "date": "Thu, 26 Feb 2026 10:10:59 +0000", "is_openbsd": false, "thread_id": "f4c9b715-be7a-4587-90fa-97f6b72938eb@gmail.com.mbox.gz" }
lkml_critique
linux-mm
Hi All, Apologies I forgot to add the proper tag in the previous email so resending this. MGLRU has been introduced in the mainline for years, but we still have two LRUs today. There are many reasons MGLRU is still not the only LRU implementation in the kernel. And I've been looking at a few major issues here: 1. P...
null
null
null
[LSF/MM/BPF TOPIC] Improving MGLRU
On Fri, Feb 20, 2026 at 01:25:33AM +0800, Kairui Song wrote: To my mind, the biggest problem with MGLRU is that Google dumped it on us and ran away. Commit 44958000bada claimed that it was now maintained and added three people as maintainers. In the six months since that commit, none of those three people have any c...
{ "author": "Matthew Wilcox <willy@infradead.org>", "date": "Thu, 26 Feb 2026 15:54:22 +0000", "is_openbsd": false, "thread_id": "f4c9b715-be7a-4587-90fa-97f6b72938eb@gmail.com.mbox.gz" }
lkml_critique
linux-mm
Hi All, Apologies I forgot to add the proper tag in the previous email so resending this. MGLRU has been introduced in the mainline for years, but we still have two LRUs today. There are many reasons MGLRU is still not the only LRU implementation in the kernel. And I've been looking at a few major issues here: 1. P...
null
null
null
[LSF/MM/BPF TOPIC] Improving MGLRU
I guess not: MGLRU needs at least two generations to function, similar to active and inactive lists, meaning it requires two lists. Yu Zhao mentioned this in commit ec1c86b25f4b: "This protocol, AKA second chance, requires a minimum of two generations, hence MIN_NR_GENS." But I do feel the issue is that anon and file...
{ "author": "Barry Song <21cnbao@gmail.com>", "date": "Fri, 27 Feb 2026 11:30:13 +0800", "is_openbsd": false, "thread_id": "f4c9b715-be7a-4587-90fa-97f6b72938eb@gmail.com.mbox.gz" }
lkml_critique
linux-mm
Hi All, Apologies I forgot to add the proper tag in the previous email so resending this. MGLRU has been introduced in the mainline for years, but we still have two LRUs today. There are many reasons MGLRU is still not the only LRU implementation in the kernel. And I've been looking at a few major issues here: 1. P...
null
null
null
[LSF/MM/BPF TOPIC] Improving MGLRU
Hi Matthew, Can we keep it for now? Kairui, Zicheng, and I are working on it. approach after applying a few vendor hooks on Android, such as forced aging and avoiding direct activation of read-ahead folios during page faults, among others. To be honest, performance was worse than active/inactive without those ...
{ "author": "Barry Song <21cnbao@gmail.com>", "date": "Fri, 27 Feb 2026 12:31:39 +0800", "is_openbsd": false, "thread_id": "f4c9b715-be7a-4587-90fa-97f6b72938eb@gmail.com.mbox.gz" }
lkml_critique
linux-mm
Hi All, Apologies I forgot to add the proper tag in the previous email so resending this. MGLRU has been introduced in the mainline for years, but we still have two LRUs today. There are many reasons MGLRU is still not the only LRU implementation in the kernel. And I've been looking at a few major issues here: 1. P...
null
null
null
[LSF/MM/BPF TOPIC] Improving MGLRU
On Fri, 20 Feb 2026, Kairui Song wrote: I think this would be a very useful topic to discuss and I really like how this was framed in the context of what needs to be addressed so that MGLRU can be on a path to becoming the default implementation and we can eliminate two separate implementations. Yes, MGLRU can fo...
{ "author": "David Rientjes <rientjes@google.com>", "date": "Thu, 26 Feb 2026 23:11:11 -0800 (PST)", "is_openbsd": false, "thread_id": "f4c9b715-be7a-4587-90fa-97f6b72938eb@gmail.com.mbox.gz" }
lkml_critique
linux-mm
Hi All, Apologies I forgot to add the proper tag in the previous email so resending this. MGLRU has been introduced in the mainline for years, but we still have two LRUs today. There are many reasons MGLRU is still not the only LRU implementation in the kernel. And I've been looking at a few major issues here: 1. P...
null
null
null
[LSF/MM/BPF TOPIC] Improving MGLRU
On Fri, Feb 20, 2026 at 01:25:33AM +0800, Kairui Song wrote: Hi Kairui, I would be very interested in discussing this topic as well. In Linux desktop distributions, when the system rapidly enters a low-memory state, it is almost impossible to enter S4; the success rate is only 10%. When analyzing this issue, it was id...
{ "author": "Vernon Yang <vernon2gm@gmail.com>", "date": "Fri, 27 Feb 2026 18:29:38 +0800", "is_openbsd": false, "thread_id": "f4c9b715-be7a-4587-90fa-97f6b72938eb@gmail.com.mbox.gz" }
lkml_critique
linux-mm
INTRODUCTION ============ The current design for zswap and zsmalloc leaves a clean divide between layers of the memory stack. At the higher level, we have zswap, which interacts directly with memory consumers, compression algorithms, and handles memory usage accounting via memcg limits. At the lower level, we have zsma...
null
null
null
[PATCH 0/8] mm/zswap, zsmalloc: Per-memcg-lruvec zswap accounting
All the zsmalloc functions that operate on a zsmalloc object (encoded location values) are named "zs_obj_xxx", except for zs_object_copy. Rename zs_object_copy to zs_obj_copy to conform to the pattern. No functional changes intended. Signed-off-by: Joshua Hahn <joshua.hahnjy@gmail.com> --- mm/zsmalloc.c | 4 ++-- 1 ...
{ "author": "Joshua Hahn <joshua.hahnjy@gmail.com>", "date": "Thu, 26 Feb 2026 11:29:24 -0800", "is_openbsd": false, "thread_id": "202602270738.SxqPEs3Q-lkp@intel.com.mbox.gz" }
lkml_critique
linux-mm
INTRODUCTION ============ The current design for zswap and zsmalloc leaves a clean divide between layers of the memory stack. At the higher level, we have zswap, which interacts directly with memory consumers, compression algorithms, and handles memory usage accounting via memcg limits. At the lower level, we have zsma...
null
null
null
[PATCH 0/8] mm/zswap, zsmalloc: Per-memcg-lruvec zswap accounting
Object indices, which describe the location of an object in a zspage, cannot be negative. To reflect this, most helpers calculate and return these values as unsigned ints. Convert find_alloced_obj, the only function that calculates obj_idx as a signed int, to use an unsigned int as well. No functional change intended....
{ "author": "Joshua Hahn <joshua.hahnjy@gmail.com>", "date": "Thu, 26 Feb 2026 11:29:25 -0800", "is_openbsd": false, "thread_id": "202602270738.SxqPEs3Q-lkp@intel.com.mbox.gz" }
lkml_critique
linux-mm
INTRODUCTION ============ The current design for zswap and zsmalloc leaves a clean divide between layers of the memory stack. At the higher level, we have zswap, which interacts directly with memory consumers, compression algorithms, and handles memory usage accounting via memcg limits. At the lower level, we have zsma...
null
null
null
[PATCH 0/8] mm/zswap, zsmalloc: Per-memcg-lruvec zswap accounting
Introduce an array of struct obj_cgroup pointers to zpdesc to keep track of compressed objects' memcg ownership. The 8 bytes required to add the array in struct zpdesc bring its size up from 56 bytes to 64 bytes. However, in the current implementation, struct zpdesc lies on top of struct page[1]. This allows the incr...
{ "author": "Joshua Hahn <joshua.hahnjy@gmail.com>", "date": "Thu, 26 Feb 2026 11:29:26 -0800", "is_openbsd": false, "thread_id": "202602270738.SxqPEs3Q-lkp@intel.com.mbox.gz" }
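The 56-to-64-byte arithmetic in this message can be checked with a toy layout. The struct names and the seven-word body below are illustrative assumptions, not the kernel's actual struct zpdesc:

```c
#include <assert.h>

/* Illustrative only: on LP64, a 56-byte body (7 x 8-byte words) grows to
 * 64 bytes when one 8-byte pointer member is appended, matching the size
 * change described for struct zpdesc gaining its objcgs array. */
struct zpdesc_before { unsigned long w[7]; };                 /* 56 bytes */
struct zpdesc_after  { unsigned long w[7]; void **objcgs; };  /* 64 bytes */
```

Since 64 bytes is still the size of struct page on 64-bit, the overlay described in the message keeps working after the addition.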
lkml_critique
linux-mm
INTRODUCTION ============ The current design for zswap and zsmalloc leaves a clean divide between layers of the memory stack. At the higher level, we have zswap, which interacts directly with memory consumers, compression algorithms, and handles memory usage accounting via memcg limits. At the lower level, we have zsma...
null
null
null
[PATCH 0/8] mm/zswap, zsmalloc: Per-memcg-lruvec zswap accounting
With each zswap-backing zpdesc now having an array of obj_cgroup pointers, plumb the obj_cgroup pointer from the zswap / zram layer down to zsmalloc. Introduce two helper functions zpdesc_obj_cgroup and zpdesc_set_obj_cgroup, which abstract the conversion of an object's zspage idx to its zpdesc idx and the retrieval o...
{ "author": "Joshua Hahn <joshua.hahnjy@gmail.com>", "date": "Thu, 26 Feb 2026 11:29:27 -0800", "is_openbsd": false, "thread_id": "202602270738.SxqPEs3Q-lkp@intel.com.mbox.gz" }
lkml_critique
linux-mm
INTRODUCTION ============ The current design for zswap and zsmalloc leaves a clean divide between layers of the memory stack. At the higher level, we have zswap, which interacts directly with memory consumers, compression algorithms, and handles memory usage accounting via memcg limits. At the lower level, we have zsma...
null
null
null
[PATCH 0/8] mm/zswap, zsmalloc: Per-memcg-lruvec zswap accounting
Now that obj_cgroups are tracked in zpdesc, redirect the zswap layer to use the pointer stored in the zpdesc and remove the pointer in struct zswap_entry. This offsets the temporary memory increase caused by the duplicate storage of the obj_cgroup pointer and results in a net zero memory footprint change. The lifetime...
{ "author": "Joshua Hahn <joshua.hahnjy@gmail.com>", "date": "Thu, 26 Feb 2026 11:29:28 -0800", "is_openbsd": false, "thread_id": "202602270738.SxqPEs3Q-lkp@intel.com.mbox.gz" }
lkml_critique
linux-mm
INTRODUCTION ============ The current design for zswap and zsmalloc leaves a clean divide between layers of the memory stack. At the higher level, we have zswap, which interacts directly with memory consumers, compression algorithms, and handles memory usage accounting via memcg limits. At the lower level, we have zsma...
null
null
null
[PATCH 0/8] mm/zswap, zsmalloc: Per-memcg-lruvec zswap accounting
Now that zswap_entries do not directly track obj_cgroups of the entries, handle the lifetime management and charging of these entries into the zsmalloc layer. One functional change is that zswap entries are now no longer accounted by the size of the compressed object, but by the size of the size_class slot they occupy...
{ "author": "Joshua Hahn <joshua.hahnjy@gmail.com>", "date": "Thu, 26 Feb 2026 11:29:29 -0800", "is_openbsd": false, "thread_id": "202602270738.SxqPEs3Q-lkp@intel.com.mbox.gz" }
lkml_critique
linux-mm
INTRODUCTION ============ The current design for zswap and zsmalloc leaves a clean divide between layers of the memory stack. At the higher level, we have zswap, which interacts directly with memory consumers, compression algorithms, and handles memory usage accounting via memcg limits. At the lower level, we have zsma...
null
null
null
[PATCH 0/8] mm/zswap, zsmalloc: Per-memcg-lruvec zswap accounting
Zswap compresses and uncompresses in PAGE_SIZE units, which simplifies the accounting for how much memory it has compressed. However, when a compressed object is stored at the boundary of two zspages, accounting at PAGE_SIZE units makes it difficult to fractionally charge each backing zspage with the ratio of memory it...
{ "author": "Joshua Hahn <joshua.hahnjy@gmail.com>", "date": "Thu, 26 Feb 2026 11:29:30 -0800", "is_openbsd": false, "thread_id": "202602270738.SxqPEs3Q-lkp@intel.com.mbox.gz" }
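The fractional-charging idea in this message can be sketched with plain arithmetic. The helper name, signature, and the fixed 4 KiB page size below are assumptions for illustration, not zswap's actual interface:

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL  /* assumed page size for this sketch */

/* Hypothetical sketch: when a compressed object straddles a page
 * boundary inside a zspage, split its charge proportionally between
 * the two backing pages, as the message describes. */
static void split_charge(size_t obj_off, size_t obj_size,
                         size_t *first, size_t *second)
{
    size_t in_first = PAGE_SIZE - (obj_off % PAGE_SIZE);

    if (in_first >= obj_size) {
        *first = obj_size;  /* object fits entirely in one page */
        *second = 0;
    } else {
        *first = in_first;             /* bytes charged to the first page */
        *second = obj_size - in_first; /* remainder charged to the next */
    }
}
```

For an object of 300 bytes starting at offset 4000, this charges 96 bytes to the first page and 204 to the second; accounting only in PAGE_SIZE units cannot express that split.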
lkml_critique
linux-mm
INTRODUCTION ============ The current design for zswap and zsmalloc leaves a clean divide between layers of the memory stack. At the higher level, we have zswap, which interacts directly with memory consumers, compression algorithms, and handles memory usage accounting via memcg limits. At the lower level, we have zsma...
null
null
null
[PATCH 0/8] mm/zswap, zsmalloc: Per-memcg-lruvec zswap accounting
Now that memcg charging happens in the zsmalloc layer where we have both objcg and page information, we can specify which node's memcg lruvec zswapped memory should be accounted to. Move MEMCG_ZSWAP_B and MEMCG_ZSWAPPED_B from enum_node_stat_item to int memcg_node_stat_items. Rename their prefix from MEMCG to NR to re...
{ "author": "Joshua Hahn <joshua.hahnjy@gmail.com>", "date": "Thu, 26 Feb 2026 11:29:31 -0800", "is_openbsd": false, "thread_id": "202602270738.SxqPEs3Q-lkp@intel.com.mbox.gz" }
lkml_critique
linux-mm
INTRODUCTION ============ The current design for zswap and zsmalloc leaves a clean divide between layers of the memory stack. At the higher level, we have zswap, which interacts directly with memory consumers, compression algorithms, and handles memory usage accounting via memcg limits. At the lower level, we have zsma...
null
null
null
[PATCH 0/8] mm/zswap, zsmalloc: Per-memcg-lruvec zswap accounting
On Thu, Feb 26, 2026 at 11:29:26AM -0800, Joshua Hahn wrote: Why not just store struct obj_cgroup ** instead of unsigned long? You will not need to do conversions when storing or accessing.
{ "author": "Shakeel Butt <shakeel.butt@linux.dev>", "date": "Thu, 26 Feb 2026 13:37:42 -0800", "is_openbsd": false, "thread_id": "202602270738.SxqPEs3Q-lkp@intel.com.mbox.gz" }
lkml_critique
linux-mm
INTRODUCTION ============ The current design for zswap and zsmalloc leaves a clean divide between layers of the memory stack. At the higher level, we have zswap, which interacts directly with memory consumers, compression algorithms, and handles memory usage accounting via memcg limits. At the lower level, we have zsma...
null
null
null
[PATCH 0/8] mm/zswap, zsmalloc: Per-memcg-lruvec zswap accounting
On Thu, 26 Feb 2026 13:37:42 -0800 Shakeel Butt <shakeel.butt@linux.dev> wrote: Hello Shakeel, I hope you're doing well! Yeah, that makes sense to me :-) I guess if we're going to be accessing it with the zpdesc_objcgs and zpdesc_set_objcgs helpers anyways, we can abstract away the casting to unsigned long and re-...
{ "author": "Joshua Hahn <joshua.hahnjy@gmail.com>", "date": "Thu, 26 Feb 2026 13:43:37 -0800", "is_openbsd": false, "thread_id": "202602270738.SxqPEs3Q-lkp@intel.com.mbox.gz" }
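The review point in this exchange amounts to keeping the field typed. This standalone sketch (struct and helper names hypothetical, not the kernel's zpdesc code) shows how a typed member removes the casts an unsigned long field would force on every access:

```c
#include <assert.h>
#include <stddef.h>

struct obj_cgroup { int id; };  /* stand-in for the kernel type */

/* With the array stored as its real type, the accessors are plain
 * assignments and reads; no unsigned-long round trips needed. */
struct zpdesc {
    struct obj_cgroup **objcgs;
};

static void zpdesc_set_objcgs(struct zpdesc *z, struct obj_cgroup **v)
{
    z->objcgs = v;       /* direct store, no cast */
}

static struct obj_cgroup **zpdesc_objcgs(const struct zpdesc *z)
{
    return z->objcgs;    /* direct load, no cast */
}
```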
lkml_critique
linux-mm
INTRODUCTION ============ The current design for zswap and zsmalloc leaves a clean divide between layers of the memory stack. At the higher level, we have zswap, which interacts directly with memory consumers, compression algorithms, and handles memory usage accounting via memcg limits. At the lower level, we have zsma...
null
null
null
[PATCH 0/8] mm/zswap, zsmalloc: Per-memcg-lruvec zswap accounting
Hi Joshua, kernel test robot noticed the following build errors: [auto build test ERROR on axboe/for-next] [also build test ERROR on linus/master v7.0-rc1] [cannot apply to akpm-mm/mm-everything next-20260226] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we sugges...
{ "author": "kernel test robot <lkp@intel.com>", "date": "Fri, 27 Feb 2026 06:40:18 +0800", "is_openbsd": false, "thread_id": "202602270738.SxqPEs3Q-lkp@intel.com.mbox.gz" }
lkml_critique
linux-mm
INTRODUCTION ============ The current design for zswap and zsmalloc leaves a clean divide between layers of the memory stack. At the higher level, we have zswap, which interacts directly with memory consumers, compression algorithms, and handles memory usage accounting via memcg limits. At the lower level, we have zsma...
null
null
null
[PATCH 0/8] mm/zswap, zsmalloc: Per-memcg-lruvec zswap accounting
Hi Joshua, kernel test robot noticed the following build errors: [auto build test ERROR on axboe/for-next] [also build test ERROR on linus/master v7.0-rc1] [cannot apply to akpm-mm/mm-everything next-20260226] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we sugges...
{ "author": "kernel test robot <lkp@intel.com>", "date": "Fri, 27 Feb 2026 07:02:31 +0800", "is_openbsd": false, "thread_id": "202602270738.SxqPEs3Q-lkp@intel.com.mbox.gz" }
lkml_critique
linux-mm
INTRODUCTION ============ The current design for zswap and zsmalloc leaves a clean divide between layers of the memory stack. At the higher level, we have zswap, which interacts directly with memory consumers, compression algorithms, and handles memory usage accounting via memcg limits. At the lower level, we have zsma...
null
null
null
[PATCH 0/8] mm/zswap, zsmalloc: Per-memcg-lruvec zswap accounting
Hi Joshua, kernel test robot noticed the following build errors: [auto build test ERROR on axboe/for-next] [also build test ERROR on linus/master v7.0-rc1 next-20260226] [cannot apply to akpm-mm/mm-everything] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we sugges...
{ "author": "kernel test robot <lkp@intel.com>", "date": "Fri, 27 Feb 2026 07:13:12 +0800", "is_openbsd": false, "thread_id": "202602270738.SxqPEs3Q-lkp@intel.com.mbox.gz" }
lkml_critique
linux-mm
From: Barry Song <baohua@kernel.org> MGLRU activates folios when a new folio is added and lru_gen_in_fault() returns true. The problem is that when a page fault occurs at address N, readahead may bring in many folios around N, and those folios are also activated even though many of them may never be accessed. A previ...
null
null
null
[PATCH RFC] mm/mglru: lazily activate folios while folios are really mapped
From: Barry Song <baohua@kernel.org> MGLRU activates folios when a new folio is added and lru_gen_in_fault() returns true. The problem is that when a page fault occurs at address N, readahead may bring in many folios around N, and those folios are also activated even though many of them may never be accessed. A previ...
{ "author": "Barry Song <21cnbao@gmail.com>", "date": "Thu, 26 Feb 2026 11:37:12 +1300", "is_openbsd": false, "thread_id": "CAGsJ_4wV=OpV-ntZQGKQawrO0kemwdT8byySBCCZiOOgugcQtw@mail.gmail.com.mbox.gz" }
lkml_critique
linux-mm
From: Barry Song <baohua@kernel.org> MGLRU activates folios when a new folio is added and lru_gen_in_fault() returns true. The problem is that when a page fault occurs at address N, readahead may bring in many folios around N, and those folios are also activated even though many of them may never be accessed. A previ...
null
null
null
[PATCH RFC] mm/mglru: lazily activate folios while folios are really mapped
Hi Barry, Setting only non-filelru-folio in folio_add_lru looks reasonable and should help with over-protecting readahead pages that are never actually accessed. For our workloads that already suffer from file under-protection, we see two sides here: on the positive side, keeping only actually-used readahead pages in...
{ "author": "wangzicheng <wangzicheng@honor.com>", "date": "Thu, 26 Feb 2026 12:57:42 +0000", "is_openbsd": false, "thread_id": "CAGsJ_4wV=OpV-ntZQGKQawrO0kemwdT8byySBCCZiOOgugcQtw@mail.gmail.com.mbox.gz" }
lkml_critique
linux-mm
From: Barry Song <baohua@kernel.org> MGLRU activates folios when a new folio is added and lru_gen_in_fault() returns true. The problem is that when a page fault occurs at address N, readahead may bring in many folios around N, and those folios are also activated even though many of them may never be accessed. A previ...
null
null
null
[PATCH RFC] mm/mglru: lazily activate folios while folios are really mapped
Hi Zicheng, On Thu, Feb 26, 2026 at 8:57 PM wangzicheng <wangzicheng@honor.com> wrote: [...] Right, the fundamental principle of LRU is to place cold pages at the tail, not at the head, making cold pages easier to reclaim and hot pages harder to reclaim. I find your concern a bit surprising. If I understand correct...
{ "author": "Barry Song <21cnbao@gmail.com>", "date": "Fri, 27 Feb 2026 08:15:50 +0800", "is_openbsd": false, "thread_id": "CAGsJ_4wV=OpV-ntZQGKQawrO0kemwdT8byySBCCZiOOgugcQtw@mail.gmail.com.mbox.gz" }
lkml_critique
linux-mm
Do not silently autocorrect bad recompression priority parameter value and just error out. Suggested-by: Minchan Kim <minchan@kernel.org> Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org> --- drivers/block/zram/zram_drv.c | 17 ++++++++--------- 1 file changed, 8 insertions(+), 9 deletions(-) diff --git a...
null
null
null
[PATCH 1/5] zram: do not autocorrect bad recompression parameters
It's not entirely correct to use ->num_active_comps for max-prio limit, as ->num_active_comps just tells the number of configured algorithms, not the max configured priority. For instance, in the following theoretical example: [lz4] [nil] [nil] [deflate] ->num_active_comps is 2, while the actual max-prio is 3. ...
{ "author": "Sergey Senozhatsky <senozhatsky@chromium.org>", "date": "Fri, 27 Feb 2026 17:21:08 +0900", "is_openbsd": false, "thread_id": "eb7cd3ca578320be9aff13e71298fc36e110af41.1772180459.git.senozhatsky@chromium.org.mbox.gz" }
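The [lz4] [nil] [nil] [deflate] example above can be made concrete with a small scan. The table size and helper names are assumptions for illustration, not zram's actual code:

```c
#include <assert.h>
#include <stddef.h>

#define NUM_SLOTS 4  /* assumed size of the per-device algorithm table */

/* Count configured algorithms: what a counter like ->num_active_comps
 * reflects, regardless of where the entries sit in the table. */
static int num_active(const char *comps[NUM_SLOTS])
{
    int i, n = 0;
    for (i = 0; i < NUM_SLOTS; i++)
        if (comps[i])
            n++;
    return n;
}

/* Highest configured priority: the index of the last non-empty slot,
 * which can exceed the active count when the table is sparse. */
static int max_prio(const char *comps[NUM_SLOTS])
{
    int i, max = -1;
    for (i = 0; i < NUM_SLOTS; i++)
        if (comps[i])
            max = i;
    return max;
}
```

For { "lz4", NULL, NULL, "deflate" }, the count is 2 but the max priority is 3, which is why a prio limit derived from the count rejects a valid configuration.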
lkml_critique
linux-mm
Do not silently autocorrect bad recompression priority parameter value and just error out. Suggested-by: Minchan Kim <minchan@kernel.org> Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org> --- drivers/block/zram/zram_drv.c | 17 ++++++++--------- 1 file changed, 8 insertions(+), 9 deletions(-) diff --git a...
null
null
null
[PATCH 1/5] zram: do not autocorrect bad recompression parameters
Recompression algorithm lookup by name is ambiguous and can lead to unexpected results. The problem is that the system can configure the same algorithm but with different parameters (compression level, C/D-dicts, etc.) multiple times: [zstd clevel=3] [zstd clevel=8 dict=/etc/dict] making it impossible to disting...
{ "author": "Sergey Senozhatsky <senozhatsky@chromium.org>", "date": "Fri, 27 Feb 2026 17:21:09 +0900", "is_openbsd": false, "thread_id": "eb7cd3ca578320be9aff13e71298fc36e110af41.1772180459.git.senozhatsky@chromium.org.mbox.gz" }
lkml_critique
linux-mm
Do not silently autocorrect bad recompression priority parameter value and just error out. Suggested-by: Minchan Kim <minchan@kernel.org> Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org> --- drivers/block/zram/zram_drv.c | 17 ++++++++--------- 1 file changed, 8 insertions(+), 9 deletions(-) diff --git a...
null
null
null
[PATCH 1/5] zram: do not autocorrect bad recompression parameters
Emphasize usage of the `priority` parameter for recompression and explain why `algo` parameter can lead to unexpected behavior and thus is not recommended. Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org> --- Documentation/admin-guide/blockdev/zram.rst | 40 ++++++++++----------- 1 file changed, 18 insert...
{ "author": "Sergey Senozhatsky <senozhatsky@chromium.org>", "date": "Fri, 27 Feb 2026 17:21:10 +0900", "is_openbsd": false, "thread_id": "eb7cd3ca578320be9aff13e71298fc36e110af41.1772180459.git.senozhatsky@chromium.org.mbox.gz" }
lkml_critique
linux-mm
Do not silently autocorrect bad recompression priority parameter value and just error out. Suggested-by: Minchan Kim <minchan@kernel.org> Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org> --- drivers/block/zram/zram_drv.c | 17 ++++++++--------- 1 file changed, 8 insertions(+), 9 deletions(-) diff --git a...
null
null
null
[PATCH 1/5] zram: do not autocorrect bad recompression parameters
Chained recompression has unpredictable behavior and is not useful in practice. First, systems usually configure just one alternative recompression algorithm, which has slower compression/decompression but better compression ratio. A single alternative algorithm doesn't need chaining. Second, even with multiple reco...
{ "author": "Sergey Senozhatsky <senozhatsky@chromium.org>", "date": "Fri, 27 Feb 2026 17:21:11 +0900", "is_openbsd": false, "thread_id": "eb7cd3ca578320be9aff13e71298fc36e110af41.1772180459.git.senozhatsky@chromium.org.mbox.gz" }
lkml_critique
linux-mm
struct pagevec was removed in commit 1e0877d58b1e ("mm: remove struct pagevec"). Remove any stray references to it and rename relevant files and macros accordingly. While at it, remove unnecessary #includes of pagevec.h (now folio_batch.h) in .c files. There are probably more of these that could be removed in .h files...
null
null
null
[PATCH v2 0/4] mm: Remove stray references to pagevec
struct pagevec was removed in commit 1e0877d58b1e ("mm: remove struct pagevec"). Remove remaining forward declarations and change __folio_batch_release()'s declaration to match its definition. Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org> Acked-by: David Hildenbrand (Arm) <david@kernel.org> Acked-by: Chri...
{ "author": "Tal Zussman <tz2294@columbia.edu>", "date": "Wed, 25 Feb 2026 18:44:25 -0500", "is_openbsd": false, "thread_id": "CACePvbX5Qm+kQLtCWynvO-2YtoW0mdR+V6rfq=buR6tfR1A9FQ@mail.gmail.com.mbox.gz" }
lkml_critique
linux-mm
struct pagevec was removed in commit 1e0877d58b1e ("mm: remove struct pagevec"). Remove any stray references to it and rename relevant files and macros accordingly. While at it, remove unnecessary #includes of pagevec.h (now folio_batch.h) in .c files. There are probably more of these that could be removed in .h files...
null
null
null
[PATCH v2 0/4] mm: Remove stray references to pagevec
Remove unused pagevec.h includes from .c files. These were found with the following command:

grep -rl '#include.*pagevec\.h' --include='*.c' | while read f; do
    grep -qE 'PAGEVEC_SIZE|folio_batch' "$f" || echo "$f"
done

There are probably more removal candidates in .h files, but those are more complex to analyz...
{ "author": "Tal Zussman <tz2294@columbia.edu>", "date": "Wed, 25 Feb 2026 18:44:26 -0500", "is_openbsd": false, "thread_id": "CACePvbX5Qm+kQLtCWynvO-2YtoW0mdR+V6rfq=buR6tfR1A9FQ@mail.gmail.com.mbox.gz" }
lkml_critique
linux-mm
struct pagevec was removed in commit 1e0877d58b1e ("mm: remove struct pagevec"). Remove any stray references to it and rename relevant files and macros accordingly. While at it, remove unnecessary #includes of pagevec.h (now folio_batch.h) in .c files. There are probably more of these that could be removed in .h files...
null
null
null
[PATCH v2 0/4] mm: Remove stray references to pagevec
struct pagevec no longer exists. Rename the macro appropriately. Signed-off-by: Tal Zussman <tz2294@columbia.edu> --- fs/btrfs/extent_io.c | 4 ++-- include/linux/folio_batch.h | 6 +++--- include/linux/folio_queue.h | 6 +++--- mm/shmem.c | 4 ++-- mm/swap.c | 2 +- mm/swap_...
{ "author": "Tal Zussman <tz2294@columbia.edu>", "date": "Wed, 25 Feb 2026 18:44:28 -0500", "is_openbsd": false, "thread_id": "CACePvbX5Qm+kQLtCWynvO-2YtoW0mdR+V6rfq=buR6tfR1A9FQ@mail.gmail.com.mbox.gz" }
lkml_critique
linux-mm
struct pagevec was removed in commit 1e0877d58b1e ("mm: remove struct pagevec"). Remove any stray references to it and rename relevant files and macros accordingly. While at it, remove unnecessary #includes of pagevec.h (now folio_batch.h) in .c files. There are probably more of these that could be removed in .h files...
null
null
null
[PATCH v2 0/4] mm: Remove stray references to pagevec
struct pagevec was removed in commit 1e0877d58b1e ("mm: remove struct pagevec"). Rename include/linux/pagevec.h to reflect reality and update includes tree-wide. Add the new filename to MAINTAINERS explicitly, as it no longer matches the "include/linux/page[-_]*" pattern in MEMORY MANAGEMENT - CORE. Signed-off-by: Tal...
{ "author": "Tal Zussman <tz2294@columbia.edu>", "date": "Wed, 25 Feb 2026 18:44:27 -0500", "is_openbsd": false, "thread_id": "CACePvbX5Qm+kQLtCWynvO-2YtoW0mdR+V6rfq=buR6tfR1A9FQ@mail.gmail.com.mbox.gz" }
lkml_critique
linux-mm
struct pagevec was removed in commit 1e0877d58b1e ("mm: remove struct pagevec"). Remove any stray references to it and rename relevant files and macros accordingly. While at it, remove unnecessary #includes of pagevec.h (now folio_batch.h) in .c files. There are probably more of these that could be removed in .h files...
null
null
null
[PATCH v2 0/4] mm: Remove stray references to pagevec
On Wed, 25 Feb 2026 18:44:24 -0500 Tal Zussman <tz2294@columbia.edu> wrote: Dang that's a lot of cc's ;) Thanks, I'll add this series to mm.git's mm-new branch.
{ "author": "Andrew Morton <akpm@linux-foundation.org>", "date": "Wed, 25 Feb 2026 16:41:44 -0800", "is_openbsd": false, "thread_id": "CACePvbX5Qm+kQLtCWynvO-2YtoW0mdR+V6rfq=buR6tfR1A9FQ@mail.gmail.com.mbox.gz" }
lkml_critique
linux-mm
struct pagevec was removed in commit 1e0877d58b1e ("mm: remove struct pagevec"). Remove any stray references to it and rename relevant files and macros accordingly. While at it, remove unnecessary #includes of pagevec.h (now folio_batch.h) in .c files. There are probably more of these that could be removed in .h files...
null
null
null
[PATCH v2 0/4] mm: Remove stray references to pagevec
On Wed 25-02-26 18:44:26, Tal Zussman wrote: If it compiles then it's nice to get rid of. Feel free to add: Reviewed-by: Jan Kara <jack@suse.cz> Honza -- Jan Kara <jack@suse.com> SUSE Labs, CR
{ "author": "Jan Kara <jack@suse.cz>", "date": "Thu, 26 Feb 2026 14:12:51 +0100", "is_openbsd": false, "thread_id": "CACePvbX5Qm+kQLtCWynvO-2YtoW0mdR+V6rfq=buR6tfR1A9FQ@mail.gmail.com.mbox.gz" }
lkml_critique
linux-mm
struct pagevec was removed in commit 1e0877d58b1e ("mm: remove struct pagevec"). Remove any stray references to it and rename relevant files and macros accordingly. While at it, remove unnecessary #includes of pagevec.h (now folio_batch.h) in .c files. There are probably more of these that could be removed in .h files...
null
null
null
[PATCH v2 0/4] mm: Remove stray references to pagevec
On Wed 25-02-26 18:44:28, Tal Zussman wrote: Looks good. Feel free to add: Reviewed-by: Jan Kara <jack@suse.cz> Honza -- Jan Kara <jack@suse.com> SUSE Labs, CR
{ "author": "Jan Kara <jack@suse.cz>", "date": "Thu, 26 Feb 2026 14:14:12 +0100", "is_openbsd": false, "thread_id": "CACePvbX5Qm+kQLtCWynvO-2YtoW0mdR+V6rfq=buR6tfR1A9FQ@mail.gmail.com.mbox.gz" }
lkml_critique
linux-mm
struct pagevec was removed in commit 1e0877d58b1e ("mm: remove struct pagevec"). Remove any stray references to it and rename relevant files and macros accordingly. While at it, remove unnecessary #includes of pagevec.h (now folio_batch.h) in .c files. There are probably more of these that could be removed in .h files...
null
null
null
[PATCH v2 0/4] mm: Remove stray references to pagevec
On Wed 25-02-26 18:44:27, Tal Zussman wrote: Looks good. Feel free to add: Reviewed-by: Jan Kara <jack@suse.cz> Honza -- Jan Kara <jack@suse.com> SUSE Labs, CR
{ "author": "Jan Kara <jack@suse.cz>", "date": "Thu, 26 Feb 2026 14:14:24 +0100", "is_openbsd": false, "thread_id": "CACePvbX5Qm+kQLtCWynvO-2YtoW0mdR+V6rfq=buR6tfR1A9FQ@mail.gmail.com.mbox.gz" }
lkml_critique
linux-mm
struct pagevec was removed in commit 1e0877d58b1e ("mm: remove struct pagevec"). Remove any stray references to it and rename relevant files and macros accordingly. While at it, remove unnecessary #includes of pagevec.h (now folio_batch.h) in .c files. There are probably more of these that could be removed in .h files...
null
null
null
[PATCH v2 0/4] mm: Remove stray references to pagevec
On 2/26/26 00:44, Tal Zussman wrote: Acked-by: David Hildenbrand (Arm) <david@kernel.org> -- Cheers, David
{ "author": "\"David Hildenbrand (Arm)\" <david@kernel.org>", "date": "Thu, 26 Feb 2026 14:18:58 +0100", "is_openbsd": false, "thread_id": "CACePvbX5Qm+kQLtCWynvO-2YtoW0mdR+V6rfq=buR6tfR1A9FQ@mail.gmail.com.mbox.gz" }
lkml_critique
linux-mm
struct pagevec was removed in commit 1e0877d58b1e ("mm: remove struct pagevec"). Remove any stray references to it and rename relevant files and macros accordingly. While at it, remove unnecessary #includes of pagevec.h (now folio_batch.h) in .c files. There are probably more of these that could be removed in .h files...
null
null
null
[PATCH v2 0/4] mm: Remove stray references to pagevec
On 25 Feb 2026, at 18:44, Tal Zussman wrote: LGTM. Acked-by: Zi Yan <ziy@nvidia.com> Best Regards, Yan, Zi
{ "author": "Zi Yan <ziy@nvidia.com>", "date": "Thu, 26 Feb 2026 16:14:39 -0500", "is_openbsd": false, "thread_id": "CACePvbX5Qm+kQLtCWynvO-2YtoW0mdR+V6rfq=buR6tfR1A9FQ@mail.gmail.com.mbox.gz" }
lkml_critique
linux-mm
struct pagevec was removed in commit 1e0877d58b1e ("mm: remove struct pagevec"). Remove any stray references to it and rename relevant files and macros accordingly. While at it, remove unnecessary #includes of pagevec.h (now folio_batch.h) in .c files. There are probably more of these that could be removed in .h files...
null
null
null
[PATCH v2 0/4] mm: Remove stray references to pagevec
On 25 Feb 2026, at 18:44, Tal Zussman wrote: Acked-by: Zi Yan <ziy@nvidia.com> Best Regards, Yan, Zi
{ "author": "Zi Yan <ziy@nvidia.com>", "date": "Thu, 26 Feb 2026 16:23:03 -0500", "is_openbsd": false, "thread_id": "CACePvbX5Qm+kQLtCWynvO-2YtoW0mdR+V6rfq=buR6tfR1A9FQ@mail.gmail.com.mbox.gz" }
lkml_critique
linux-mm
struct pagevec was removed in commit 1e0877d58b1e ("mm: remove struct pagevec"). Remove any stray references to it and rename relevant files and macros accordingly. While at it, remove unnecessary #includes of pagevec.h (now folio_batch.h) in .c files. There are probably more of these that could be removed in .h files...
null
null
null
[PATCH v2 0/4] mm: Remove stray references to pagevec
On Wed, Feb 25, 2026 at 3:44 PM Tal Zussman <tz2294@columbia.edu> wrote: Acked-by: Chris Li <chrisl@kernel.org> Chris
{ "author": "Chris Li <chrisl@kernel.org>", "date": "Thu, 26 Feb 2026 13:48:47 -0800", "is_openbsd": false, "thread_id": "CACePvbX5Qm+kQLtCWynvO-2YtoW0mdR+V6rfq=buR6tfR1A9FQ@mail.gmail.com.mbox.gz" }
lkml_critique
linux-mm
As work on Address Space Isolation [0] trudges slowly along (next series coming soon™... I promise... some details of the plan are in [0]) I've been running into a common issue whenever I try to do new stuff with the kernel address space: We have too many sets of pagetable manipulation routines, and yet we don't have o...
null
null
null
[LSF/MM/BPF TOPIC] A pagetable library for the kernel?
On Thu, Feb 19, 2026 at 05:51:09PM +0000, Brendan Jackman wrote: By and large, lots of functionality that deals with kernel page tables was added ad-hoc, like e.g. adopting set_memory() designed for DEBUG_PAGE_ALLOC for protecting kernel and modules code. I think it's a good idea to have a generic abstraction that ca...
{ "author": "Mike Rapoport <rppt@kernel.org>", "date": "Mon, 23 Feb 2026 13:28:15 +0200", "is_openbsd": false, "thread_id": "aaDd3Rth4RLndjvn@google.com.mbox.gz" }
lkml_critique
linux-mm
As work on Address Space Isolation [0] trudges slowly along (next series coming soon™... I promise... some details of the plan are in [0]) I've been running into a common issue whenever I try to do new stuff with the kernel address space: We have too many sets of pagetable manipulation routines, and yet we don't have o...
null
null
null
[LSF/MM/BPF TOPIC] A pagetable library for the kernel?
On Mon Feb 23, 2026 at 11:28 AM UTC, Mike Rapoport wrote: That makes sense. I've also just posted an RFC that does more awkward ad-hoc manipulation: https://lore.kernel.org/all/20260225-page_alloc-unmapped-v1-4-e8808a03cd66@google.com/ This might help illustrate the kinda thing that we could benefit from with a m...
{ "author": "Brendan Jackman <jackmanb@google.com>", "date": "Wed, 25 Feb 2026 17:06:02 +0000", "is_openbsd": false, "thread_id": "aaDd3Rth4RLndjvn@google.com.mbox.gz" }
lkml_critique
linux-mm
As work on Address Space Isolation [0] trudges slowly along (next series coming soon™... I promise... some details of the plan are in [0]) I've been running into a common issue whenever I try to do new stuff with the kernel address space: We have too many sets of pagetable manipulation routines, and yet we don't have o...
null
null
null
[LSF/MM/BPF TOPIC] A pagetable library for the kernel?
On Thu, Feb 19, 2026 at 05:51:09PM +0000, Brendan Jackman wrote: Hello Brendan, Thanks for sharing this! I think it's a great idea to introduce a library like this for the kernel page tables. I'm interested in participating in this discussion as well. Thanks, Isaac
{ "author": "Isaac Manjarres <isaacmanjarres@google.com>", "date": "Thu, 26 Feb 2026 15:57:17 -0800", "is_openbsd": false, "thread_id": "aaDd3Rth4RLndjvn@google.com.mbox.gz" }
lkml_critique
linux-mm
+linux-mm list, since the topic is interesting and worth having a record in the list. On 21 Feb 2026, at 20:07, Wei Yang wrote: > On Sun, Feb 22, 2026 at 01:04:25AM +0000, Wei Yang wrote: >> Hi, David & Zi Yan >> >> With some tests, I may find one refcount issue during __folio_split(), when >> >> * folio is isolated...
null
null
null
Re: A potential refcount issue during __folio_split
On Sat, Feb 21, 2026 at 10:00:44PM -0500, Zi Yan wrote: You mean I should put page_folio(lockat)? This is the error path. After isolation, if the folio mapping changes, it releases the folio. So the folio is not split yet. Other cases are handled differently. See below. Yes, generally it is. But we seem not forbid ...
{ "author": "Wei Yang <richard.weiyang@gmail.com>", "date": "Sun, 22 Feb 2026 10:28:08 +0000", "is_openbsd": false, "thread_id": "20260227005129.ujpgdvjjyqpemzxy@master.mbox.gz" }
lkml_critique
linux-mm
+linux-mm list, since the topic is interesting and worth having a record in the list. On 21 Feb 2026, at 20:07, Wei Yang wrote: > On Sun, Feb 22, 2026 at 01:04:25AM +0000, Wei Yang wrote: >> Hi, David & Zi Yan >> >> With some tests, I may find one refcount issue during __folio_split(), when >> >> * folio is isolated...
null
null
null
Re: A potential refcount issue during __folio_split
Agreed. Maybe it would be even nicer if the split function could return the new folio directly. folio_get(folio); folio_lock(folio); split_folio = folio_split_XXX(folio, ..., tail_page, ...); if (IS_ERR_VALUE(split_folio)) { ... } folio_unlock(split_folio); folio_put(split_folio); -- Cheers, David
{ "author": "\"David Hildenbrand (Arm)\" <david@kernel.org>", "date": "Mon, 23 Feb 2026 10:23:11 +0100", "is_openbsd": false, "thread_id": "20260227005129.ujpgdvjjyqpemzxy@master.mbox.gz" }
lkml_critique
linux-mm
+linux-mm list, since the topic is interesting and worth having a record in the list. On 21 Feb 2026, at 20:07, Wei Yang wrote: > On Sun, Feb 22, 2026 at 01:04:25AM +0000, Wei Yang wrote: >> Hi, David & Zi Yan >> >> With some tests, I may find one refcount issue during __folio_split(), when >> >> * folio is isolated...
null
null
null
Re: A potential refcount issue during __folio_split
On Mon, Feb 23, 2026 at 10:23:11AM +0100, David Hildenbrand (Arm) wrote: Missed this. Agree. I am afraid it would be complicated? Well, we don't have this use case now; we can decide when we do need it. -- Wei Yang Help you, Help me
{ "author": "Wei Yang <richard.weiyang@gmail.com>", "date": "Mon, 23 Feb 2026 11:59:48 +0000", "is_openbsd": false, "thread_id": "20260227005129.ujpgdvjjyqpemzxy@master.mbox.gz" }
lkml_critique
linux-mm
+linux-mm list, since the topic is interesting and worth having a record in the list. On 21 Feb 2026, at 20:07, Wei Yang wrote: > On Sun, Feb 22, 2026 at 01:04:25AM +0000, Wei Yang wrote: >> Hi, David & Zi Yan >> >> With some tests, I may find one refcount issue during __folio_split(), when >> >> * folio is isolated...
null
null
null
Re: A potential refcount issue during __folio_split
On 23 Feb 2026, at 6:59, Wei Yang wrote: The patch below should work, but for now, since we do not have any user, it is better to update the comment and add a check to make sure @lock_at always points to the head page if @list is not NULL. From 66e24e6cc4397caa134f5600d22d77fdb9b58049 Mon Sep 17 00:00:00 2001 From: ...
{ "author": "Zi Yan <ziy@nvidia.com>", "date": "Mon, 23 Feb 2026 23:00:01 -0500", "is_openbsd": false, "thread_id": "20260227005129.ujpgdvjjyqpemzxy@master.mbox.gz" }
lkml_critique
linux-mm
+linux-mm list, since the topic is interesting and worth having a record in the list. On 21 Feb 2026, at 20:07, Wei Yang wrote: > On Sun, Feb 22, 2026 at 01:04:25AM +0000, Wei Yang wrote: >> Hi, David & Zi Yan >> >> With some tests, I may find one refcount issue during __folio_split(), when >> >> * folio is isolated...
null
null
null
Re: A potential refcount issue during __folio_split
On Mon, Feb 23, 2026 at 11:00:01PM -0500, Zi Yan wrote: Agree. This makes me think about whether we need to always grab the lru lock. If the folio has already been removed from the lru, it looks unnecessary? Well, this is another thing. -- Wei Yang Help you, Help me
{ "author": "Wei Yang <richard.weiyang@gmail.com>", "date": "Tue, 24 Feb 2026 04:25:35 +0000", "is_openbsd": false, "thread_id": "20260227005129.ujpgdvjjyqpemzxy@master.mbox.gz" }
lkml_critique
linux-mm
+linux-mm list, since the topic is interesting and worth having a record in the list. On 21 Feb 2026, at 20:07, Wei Yang wrote: > On Sun, Feb 22, 2026 at 01:04:25AM +0000, Wei Yang wrote: >> Hi, David & Zi Yan >> >> With some tests, I may find one refcount issue during __folio_split(), when >> >> * folio is isolated...
null
null
null
Re: A potential refcount issue during __folio_split
On Mon, Feb 23, 2026 at 11:00:01PM -0500, Zi Yan wrote: Hi, Zi Yan Below is my draft change for the comment and check. If it looks good, I would like to send a formal patch. Look forward your opinion. diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 2dbb35accf4b..8047f00bfc2a 100644 --- a/mm/huge_memory.c +++...
{ "author": "Wei Yang <richard.weiyang@gmail.com>", "date": "Fri, 27 Feb 2026 00:51:29 +0000", "is_openbsd": false, "thread_id": "20260227005129.ujpgdvjjyqpemzxy@master.mbox.gz" }
lkml_critique
linux-mm
Zone lock contention can significantly impact allocation and reclaim latency, as it is a central synchronization point in the page allocator and reclaim paths. Improved visibility into its behavior is therefore important for diagnosing performance issues in memory-intensive workloads. On some production workloads at M...
null
null
null
[PATCH v4 0/5] mm: zone lock tracepoint instrumentation
Add thin wrappers around zone lock acquire/release operations. This prepares the code for future tracepoint instrumentation without modifying individual call sites. Centralizing zone lock operations behind wrappers allows future instrumentation or debugging hooks to be added without touching all users. No functional ...
{ "author": "Dmitry Ilvokhin <d@ilvokhin.com>", "date": "Fri, 27 Feb 2026 16:00:23 +0000", "is_openbsd": false, "thread_id": "cover.1772206930.git.d@ilvokhin.com.mbox.gz" }
lkml_critique
linux-mm
Zone lock contention can significantly impact allocation and reclaim latency, as it is a central synchronization point in the page allocator and reclaim paths. Improved visibility into its behavior is therefore important for diagnosing performance issues in memory-intensive workloads. On some production workloads at M...
null
null
null
[PATCH v4 0/5] mm: zone lock tracepoint instrumentation
Compaction uses compact_lock_irqsave(), which currently operates on a raw spinlock_t pointer so it can be used for both zone->lock and lruvec->lru_lock. Since zone lock operations are now wrapped, compact_lock_irqsave() can no longer directly operate on a spinlock_t when the lock belongs to a zone. Split the helper in...
{ "author": "Dmitry Ilvokhin <d@ilvokhin.com>", "date": "Fri, 27 Feb 2026 16:00:25 +0000", "is_openbsd": false, "thread_id": "cover.1772206930.git.d@ilvokhin.com.mbox.gz" }
lkml_critique
linux-mm
Zone lock contention can significantly impact allocation and reclaim latency, as it is a central synchronization point in the page allocator and reclaim paths. Improved visibility into its behavior is therefore important for diagnosing performance issues in memory-intensive workloads. On some production workloads at M...
null
null
null
[PATCH v4 0/5] mm: zone lock tracepoint instrumentation
This intentionally breaks direct users of zone->lock at compile time so all call sites are converted to the zone lock wrappers. Without the rename, present and future out-of-tree code could continue using spin_lock(&zone->lock) and bypass the wrappers and tracing infrastructure. No functional change intended. Suggest...
{ "author": "Dmitry Ilvokhin <d@ilvokhin.com>", "date": "Fri, 27 Feb 2026 16:00:26 +0000", "is_openbsd": false, "thread_id": "cover.1772206930.git.d@ilvokhin.com.mbox.gz" }
lkml_critique
linux-mm
Zone lock contention can significantly impact allocation and reclaim latency, as it is a central synchronization point in the page allocator and reclaim paths. Improved visibility into its behavior is therefore important for diagnosing performance issues in memory-intensive workloads. On some production workloads at M...
null
null
null
[PATCH v4 0/5] mm: zone lock tracepoint instrumentation
Add tracepoint instrumentation to zone lock acquire/release operations via the previously introduced wrappers. The implementation follows the mmap_lock tracepoint pattern: a lightweight inline helper checks whether the tracepoint is enabled and calls into an out-of-line helper when tracing is active. When CONFIG_TRACI...
{ "author": "Dmitry Ilvokhin <d@ilvokhin.com>", "date": "Fri, 27 Feb 2026 16:00:27 +0000", "is_openbsd": false, "thread_id": "cover.1772206930.git.d@ilvokhin.com.mbox.gz" }
lkml_critique
linux-mm
Zone lock contention can significantly impact allocation and reclaim latency, as it is a central synchronization point in the page allocator and reclaim paths. Improved visibility into its behavior is therefore important for diagnosing performance issues in memory-intensive workloads. On some production workloads at M...
null
null
null
[PATCH v4 0/5] mm: zone lock tracepoint instrumentation
Replace direct zone lock acquire/release operations with the newly introduced wrappers. The changes are purely mechanical substitutions. No functional change intended. Locking semantics and ordering remain unchanged. The compaction path is left unchanged for now and will be handled separately in the following patch d...
{ "author": "Dmitry Ilvokhin <d@ilvokhin.com>", "date": "Fri, 27 Feb 2026 16:00:24 +0000", "is_openbsd": false, "thread_id": "cover.1772206930.git.d@ilvokhin.com.mbox.gz" }
lkml_critique
linux-mm
This is a follow-up to the previous work [1], to support batched checking of the young flag for MGLRU. Similarly, batched checking of young flag for large folios can improve performance during large-folio reclamation when MGLRU is enabled. I observed noticeable performance improvements (see patch 5) on an Arm64 machin...
null
null
null
[PATCH v2 0/6] support batched checking of the young flag for MGLRU
People have already complained that these *_clear_young_notify() related macros are very ugly, so let's use inline helpers to make them more readable. In addition, we cannot implement these inline helper functions in the mmu_notifier.h file, because some arch-specific files will include the mmu_notifier.h, which intro...
{ "author": "Baolin Wang <baolin.wang@linux.alibaba.com>", "date": "Fri, 27 Feb 2026 17:44:35 +0800", "is_openbsd": false, "thread_id": "589d743f4e048dc749002a7e1a1aec5d511c406b.1772185080.git.baolin.wang@linux.alibaba.com.mbox.gz" }
lkml_critique
linux-mm
This is a follow-up to the previous work [1], to support batched checking of the young flag for MGLRU. Similarly, batched checking of young flag for large folios can improve performance during large-folio reclamation when MGLRU is enabled. I observed noticeable performance improvements (see patch 5) on an Arm64 machin...
null
null
null
[PATCH v2 0/6] support batched checking of the young flag for MGLRU
Currently, MGLRU will call ptep_test_and_clear_young_notify() to check and clear the young flag for each PTE sequentially, which is inefficient for large folio reclamation. Moreover, on the Arm64 architecture, which supports contiguous PTEs, the Arm64-specific ptep_test_and_clear_young() already implements an optimizati...
{ "author": "Baolin Wang <baolin.wang@linux.alibaba.com>", "date": "Fri, 27 Feb 2026 17:44:38 +0800", "is_openbsd": false, "thread_id": "589d743f4e048dc749002a7e1a1aec5d511c406b.1772185080.git.baolin.wang@linux.alibaba.com.mbox.gz" }
lkml_critique
linux-mm
This is a follow-up to the previous work [1], to support batched checking of the young flag for MGLRU. Similarly, batched checking of young flag for large folios can improve performance during large-folio reclamation when MGLRU is enabled. I observed noticeable performance improvements (see patch 5) on an Arm64 machin...
null
null
null
[PATCH v2 0/6] support batched checking of the young flag for MGLRU
Rename ptep/pmdp_clear_young_notify() to ptep/pmdp_test_and_clear_young_notify() to make the function names consistent. Suggested-by: David Hildenbrand (Arm) <david@kernel.org> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com> --- mm/internal.h | 8 ++++---- mm/vmscan.c | 8 ++++---- 2 files changed, 8 ins...
{ "author": "Baolin Wang <baolin.wang@linux.alibaba.com>", "date": "Fri, 27 Feb 2026 17:44:36 +0800", "is_openbsd": false, "thread_id": "589d743f4e048dc749002a7e1a1aec5d511c406b.1772185080.git.baolin.wang@linux.alibaba.com.mbox.gz" }
lkml_critique
linux-mm
This is a follow-up to the previous work [1], to support batched checking of the young flag for MGLRU. Similarly, batched checking of young flag for large folios can improve performance during large-folio reclamation when MGLRU is enabled. I observed noticeable performance improvements (see patch 5) on an Arm64 machin...
null
null
null
[PATCH v2 0/6] support batched checking of the young flag for MGLRU
Use the batched helper test_and_clear_young_ptes_notify() to check and clear the young flag to improve the performance during large folio reclamation when MGLRU is enabled. Meanwhile, we can also support batched checking the young and dirty flag when MGLRU walks the mm's pagetable to update the folios' generation coun...
{ "author": "Baolin Wang <baolin.wang@linux.alibaba.com>", "date": "Fri, 27 Feb 2026 17:44:39 +0800", "is_openbsd": false, "thread_id": "589d743f4e048dc749002a7e1a1aec5d511c406b.1772185080.git.baolin.wang@linux.alibaba.com.mbox.gz" }
lkml_critique
linux-mm
This is a follow-up to the previous work [1], to support batched checking of the young flag for MGLRU. Similarly, batched checking of young flag for large folios can improve performance during large-folio reclamation when MGLRU is enabled. I observed noticeable performance improvements (see patch 5) on an Arm64 machin...
null
null
null
[PATCH v2 0/6] support batched checking of the young flag for MGLRU
Implement the Arm64 architecture-specific test_and_clear_young_ptes() to enable batched checking of young flags, improving performance during large folio reclamation when MGLRU is enabled. While we're at it, simplify ptep_test_and_clear_young() by calling test_and_clear_young_ptes(). Since callers guarantee that PTEs ...
{ "author": "Baolin Wang <baolin.wang@linux.alibaba.com>", "date": "Fri, 27 Feb 2026 17:44:40 +0800", "is_openbsd": false, "thread_id": "589d743f4e048dc749002a7e1a1aec5d511c406b.1772185080.git.baolin.wang@linux.alibaba.com.mbox.gz" }
lkml_critique
linux-mm
This is a follow-up to the previous work [1], to support batched checking of the young flag for MGLRU. Similarly, batched checking of young flag for large folios can improve performance during large-folio reclamation when MGLRU is enabled. I observed noticeable performance improvements (see patch 5) on an Arm64 machin...
null
null
null
[PATCH v2 0/6] support batched checking of the young flag for MGLRU
folio_referenced() is used to test whether a folio was referenced during reclaim. Moreover, ZONE_DEVICE folios are controlled by their device driver, have a lifetime tied to that driver, and are never placed on the LRU list. That means we should never try to reclaim ZONE_DEVICE folios, so add a warning to catch thi...
{ "author": "Baolin Wang <baolin.wang@linux.alibaba.com>", "date": "Fri, 27 Feb 2026 17:44:37 +0800", "is_openbsd": false, "thread_id": "589d743f4e048dc749002a7e1a1aec5d511c406b.1772185080.git.baolin.wang@linux.alibaba.com.mbox.gz" }
lkml_critique
linux-mm
On Tue, Feb 24, 2026 at 02:32:21PM +0200, Leon Romanovsky wrote: > On Tue, Feb 24, 2026 at 10:42:57AM +0000, Ashish Mhetre wrote: > > When mapping scatter-gather entries that reference reserved > > memory regions without struct page backing (e.g., bootloader created > > carveouts), is_pci_p2pdma_page() dereferences the...
null
null
null
Re: [PATCH RFC] iommu/dma: Validate page before accessing P2PDMA state
On 2/25/2026 2:27 AM, Pranjal Shrivastava wrote: Thanks Leon for the review. This crash started after commit 30280eee2db1 ("iommu/dma: support PCI P2PDMA pages in dma-iommu map_sg"). Yes, this will also fix the crash. Thanks for the feedback, Pranjal. To clarify: are you suggesting we handle non-page-backed mappi...
{ "author": "Ashish Mhetre <amhetre@nvidia.com>", "date": "Wed, 25 Feb 2026 10:19:41 +0530", "is_openbsd": false, "thread_id": "20260227141330.GK5933@nvidia.com.mbox.gz" }
lkml_critique
linux-mm
On Tue, Feb 24, 2026 at 02:32:21PM +0200, Leon Romanovsky wrote: > On Tue, Feb 24, 2026 at 10:42:57AM +0000, Ashish Mhetre wrote: > > When mapping scatter-gather entries that reference reserved > > memory regions without struct page backing (e.g., bootloader created > > carveouts), is_pci_p2pdma_page() dereferences the...
null
null
null
Re: [PATCH RFC] iommu/dma: Validate page before accessing P2PDMA state
On Tue, Feb 24, 2026 at 08:57:56PM +0000, Pranjal Shrivastava wrote: Yes, I came to the same conclusion, just explained why it worked before. pfn_valid() is a relatively expensive function [1] to invoke in the data path, and is_pci_p2pdma_page() ends up being called in these execution flows. [1] https://elixir.boot...
{ "author": "Leon Romanovsky <leon@kernel.org>", "date": "Wed, 25 Feb 2026 09:50:00 +0200", "is_openbsd": false, "thread_id": "20260227141330.GK5933@nvidia.com.mbox.gz" }
lkml_critique
linux-mm
On Tue, Feb 24, 2026 at 02:32:21PM +0200, Leon Romanovsky wrote: > On Tue, Feb 24, 2026 at 10:42:57AM +0000, Ashish Mhetre wrote: > > When mapping scatter-gather entries that reference reserved > > memory regions without struct page backing (e.g., bootloader created > > carveouts), is_pci_p2pdma_page() dereferences the...
null
null
null
Re: [PATCH RFC] iommu/dma: Validate page before accessing P2PDMA state
On Wed, Feb 25, 2026 at 10:19:41AM +0530, Ashish Mhetre wrote: The latter one. The bug is in callers which used wrong API, they need to be adapted. Thanks
{ "author": "Leon Romanovsky <leon@kernel.org>", "date": "Wed, 25 Feb 2026 09:56:09 +0200", "is_openbsd": false, "thread_id": "20260227141330.GK5933@nvidia.com.mbox.gz" }
lkml_critique
linux-mm
On Tue, Feb 24, 2026 at 02:32:21PM +0200, Leon Romanovsky wrote: > On Tue, Feb 24, 2026 at 10:42:57AM +0000, Ashish Mhetre wrote: > > When mapping scatter-gather entries that reference reserved > > memory regions without struct page backing (e.g., bootloader created > > carveouts), is_pci_p2pdma_page() dereferences the...
null
null
null
Re: [PATCH RFC] iommu/dma: Validate page before accessing P2PDMA state
On Wed, Feb 25, 2026 at 09:56:09AM +0200, Leon Romanovsky wrote: Yup, I meant the latter. Yes, the thing is, if the caller already knows that the region to be mapped is NOT struct page-backed, then why does it use dma_map_sg variants? Thanks Praan
{ "author": "Pranjal Shrivastava <praan@google.com>", "date": "Wed, 25 Feb 2026 20:11:29 +0000", "is_openbsd": false, "thread_id": "20260227141330.GK5933@nvidia.com.mbox.gz" }
lkml_critique
linux-mm
On Tue, Feb 24, 2026 at 02:32:21PM +0200, Leon Romanovsky wrote: > On Tue, Feb 24, 2026 at 10:42:57AM +0000, Ashish Mhetre wrote: > > When mapping scatter-gather entries that reference reserved > > memory regions without struct page backing (e.g., bootloader created > > carveouts), is_pci_p2pdma_page() dereferences the...
null
null
null
Re: [PATCH RFC] iommu/dma: Validate page before accessing P2PDMA state
On Wed, Feb 25, 2026 at 09:50:00AM +0200, Leon Romanovsky wrote: Ack. Right, that makes sense. Ideally, it shouldn't be there at either of the places (iommu_dma_map_sg or is_pci_p2pdma_page()). [--->8---] Thanks, Praan
{ "author": "Pranjal Shrivastava <praan@google.com>", "date": "Wed, 25 Feb 2026 20:15:24 +0000", "is_openbsd": false, "thread_id": "20260227141330.GK5933@nvidia.com.mbox.gz" }
lkml_critique
linux-mm
On Tue, Feb 24, 2026 at 02:32:21PM +0200, Leon Romanovsky wrote: > On Tue, Feb 24, 2026 at 10:42:57AM +0000, Ashish Mhetre wrote: > > When mapping scatter-gather entries that reference reserved > > memory regions without struct page backing (e.g., bootloader created > > carveouts), is_pci_p2pdma_page() dereferences the...
null
null
null
Re: [PATCH RFC] iommu/dma: Validate page before accessing P2PDMA state
On Wed, Feb 25, 2026 at 08:11:29PM +0000, Pranjal Shrivastava wrote: Before dma_map_phys() was added, there was no reliable way to DMA-map such memory, and using dma_map_sg() was a workaround that happened to work. I'm not sure whether it worked by design or by accident, but the correct approach now is to use dma_map_...
{ "author": "Leon Romanovsky <leon@kernel.org>", "date": "Thu, 26 Feb 2026 09:58:06 +0200", "is_openbsd": false, "thread_id": "20260227141330.GK5933@nvidia.com.mbox.gz" }
lkml_critique
linux-mm
On Tue, Feb 24, 2026 at 02:32:21PM +0200, Leon Romanovsky wrote: > On Tue, Feb 24, 2026 at 10:42:57AM +0000, Ashish Mhetre wrote: > > When mapping scatter-gather entries that reference reserved > > memory regions without struct page backing (e.g., bootloader created > > carveouts), is_pci_p2pdma_page() dereferences the...
null
null
null
Re: [PATCH RFC] iommu/dma: Validate page before accessing P2PDMA state
On 2/26/2026 1:28 PM, Leon Romanovsky wrote: Thanks Leon and Pranjal for the detailed feedback. I'll update our callers to use dma_map_phys() for non-page-backed buffers. One question: would it make sense to add a check in iommu_dma_map_sg to fail gracefully when non-page-backed buffers are passed, instead of crashi...
{ "author": "Ashish Mhetre <amhetre@nvidia.com>", "date": "Fri, 27 Feb 2026 11:16:02 +0530", "is_openbsd": false, "thread_id": "20260227141330.GK5933@nvidia.com.mbox.gz" }
lkml_critique
linux-mm
On Tue, Feb 24, 2026 at 02:32:21PM +0200, Leon Romanovsky wrote: > On Tue, Feb 24, 2026 at 10:42:57AM +0000, Ashish Mhetre wrote: > > When mapping scatter-gather entries that reference reserved > > memory regions without struct page backing (e.g., bootloader created > > carveouts), is_pci_p2pdma_page() dereferences the...
null
null
null
Re: [PATCH RFC] iommu/dma: Validate page before accessing P2PDMA state
On 2026-02-27 5:46 am, Ashish Mhetre wrote: No, it is the responsibility of drivers not to abuse kernel APIs inappropriately. Checking for misuse adds overhead that penalises correct users. dma_map_page/sg on non-page-backed memory has never been valid, and it would only have been system-configuration-dependent luc...
{ "author": "Robin Murphy <robin.murphy@arm.com>", "date": "Fri, 27 Feb 2026 14:05:01 +0000", "is_openbsd": false, "thread_id": "20260227141330.GK5933@nvidia.com.mbox.gz" }
lkml_critique
linux-mm
On Tue, Feb 24, 2026 at 02:32:21PM +0200, Leon Romanovsky wrote: > On Tue, Feb 24, 2026 at 10:42:57AM +0000, Ashish Mhetre wrote: > > When mapping scatter-gather entries that reference reserved > > memory regions without struct page backing (e.g., bootloader created > > carveouts), is_pci_p2pdma_page() dereferences the...
null
null
null
Re: [PATCH RFC] iommu/dma: Validate page before accessing P2PDMA state
On Fri, Feb 27, 2026 at 11:16:02AM +0530, Ashish Mhetre wrote: Ack. In my opinion, the answer is no, since this is almost like the "should the kernel protect developers from themselves" debate.. we should be a little dramatic to make sure the developer doesn't call the wrong API. Sure, we could return a DMA_MAPPING_...
{ "author": "Pranjal Shrivastava <praan@google.com>", "date": "Fri, 27 Feb 2026 14:08:42 +0000", "is_openbsd": false, "thread_id": "20260227141330.GK5933@nvidia.com.mbox.gz" }
lkml_critique
linux-mm
On Tue, Feb 24, 2026 at 02:32:21PM +0200, Leon Romanovsky wrote: > On Tue, Feb 24, 2026 at 10:42:57AM +0000, Ashish Mhetre wrote: > > When mapping scatter-gather entries that reference reserved > > memory regions without struct page backing (e.g., bootloader created > > carveouts), is_pci_p2pdma_page() dereferences the...
null
null
null
Re: [PATCH RFC] iommu/dma: Validate page before accessing P2PDMA state
On Fri, Feb 27, 2026 at 02:08:42PM +0000, Pranjal Shrivastava wrote: It is absolutely illegal, and a driver bug, to put non-struct-page memory into a scatter list. It was never an acceptable "work around". What driver is doing this?? If you want to improve robustness add some pfn_valid/etc checks under the CONFIG D...
{ "author": "Jason Gunthorpe <jgg@nvidia.com>", "date": "Fri, 27 Feb 2026 10:13:30 -0400", "is_openbsd": false, "thread_id": "20260227141330.GK5933@nvidia.com.mbox.gz" }
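Jason's suggestion above (debug-only pfn sanity checks before any struct-page state is dereferenced) can be sketched as a self-contained toy model. Everything here is hypothetical: the model_* names, the fake RAM pfn range, and the DEBUG_DMA_SANITY switch merely stand in for the real pfn_valid()/memory-map machinery and a CONFIG_ debug option.

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the real memory map; a pfn outside this
 * range models a bootloader carveout with no struct page backing. */
#define DEBUG_DMA_SANITY 1
#define RAM_START_PFN 0x1000UL
#define RAM_END_PFN   0x8000UL

static bool model_pfn_valid(unsigned long pfn)
{
	return pfn >= RAM_START_PFN && pfn < RAM_END_PFN;
}

/* Model of mapping one SG entry: under the debug switch, fail loudly
 * and early for a non-page-backed pfn instead of crashing later when
 * something like is_pci_p2pdma_page() dereferences a bogus page. */
static bool model_map_sg_entry(unsigned long pfn)
{
#if DEBUG_DMA_SANITY
	if (!model_pfn_valid(pfn))
		return false;	/* caller misused the page-based API */
#endif
	/* ... proceed with the real page-backed mapping path ... */
	return true;
}
```

The point of keeping the check behind a debug config, per the thread, is that correct users pay nothing in production builds while API misuse still gets caught during development.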
lkml_critique
linux-mm
Today, page reporting sets page_reporting_order in two ways: (1) page_reporting.page_reporting_order cmdline parameter (2) Driver can pass order while registering itself. In both cases, order zero is ignored by free page reporting because it is used to set page_reporting_order to a default value, like MAX_PAGE_ORDER....
null
null
null
[PATCH v1 0/4] Allow order zero pages in page reporting
Drivers can pass order of pages to be reported while registering itself. Today, this is a magic number, 0. Label this with PAGE_REPORTING_DEFAULT_ORDER and check for it when the driver is being registered. Signed-off-by: Yuvraj Sakshith <yuvraj.sakshith@oss.qualcomm.com> --- include/linux/page_reporting.h | 1 + mm/...
{ "author": "Yuvraj Sakshith <yuvraj.sakshith@oss.qualcomm.com>", "date": "Fri, 27 Feb 2026 06:06:52 -0800", "is_openbsd": false, "thread_id": "20260227140655.360696-1-yuvraj.sakshith@oss.qualcomm.com.mbox.gz" }
lkml_critique
linux-mm
Today, page reporting sets page_reporting_order in two ways: (1) page_reporting.page_reporting_order cmdline parameter (2) Driver can pass order while registering itself. In both cases, order zero is ignored by free page reporting because it is used to set page_reporting_order to a default value, like MAX_PAGE_ORDER....
null
null
null
[PATCH v1 0/4] Allow order zero pages in page reporting
virtio_balloon page reporting order is set to MAX_PAGE_ORDER implicitly as vb->prdev.order is never initialised and is auto-set to zero. Explicitly mention usage of the default page order by making use of the PAGE_REPORTING_DEFAULT_ORDER fallback value. Signed-off-by: Yuvraj Sakshith <yuvraj.sakshith@oss.qualcomm.com> --- d...
{ "author": "Yuvraj Sakshith <yuvraj.sakshith@oss.qualcomm.com>", "date": "Fri, 27 Feb 2026 06:06:53 -0800", "is_openbsd": false, "thread_id": "20260227140655.360696-1-yuvraj.sakshith@oss.qualcomm.com.mbox.gz" }
lkml_critique
linux-mm
Today, page reporting sets page_reporting_order in two ways: (1) page_reporting.page_reporting_order cmdline parameter (2) Driver can pass order while registering itself. In both cases, order zero is ignored by free page reporting because it is used to set page_reporting_order to a default value, like MAX_PAGE_ORDER....
null
null
null
[PATCH v1 0/4] Allow order zero pages in page reporting
Explicitly mention page reporting order to be set to default value using PAGE_REPORTING_DEFAULT_ORDER fallback value. Reviewed-by: David Hildenbrand (Arm) <david@kernel.org> Signed-off-by: Yuvraj Sakshith <yuvraj.sakshith@oss.qualcomm.com> --- drivers/hv/hv_balloon.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion...
{ "author": "Yuvraj Sakshith <yuvraj.sakshith@oss.qualcomm.com>", "date": "Fri, 27 Feb 2026 06:06:54 -0800", "is_openbsd": false, "thread_id": "20260227140655.360696-1-yuvraj.sakshith@oss.qualcomm.com.mbox.gz" }
lkml_critique
linux-mm
Today, page reporting sets page_reporting_order in two ways: (1) page_reporting.page_reporting_order cmdline parameter (2) Driver can pass order while registering itself. In both cases, order zero is ignored by free page reporting because it is used to set page_reporting_order to a default value, like MAX_PAGE_ORDER....
null
null
null
[PATCH v1 0/4] Allow order zero pages in page reporting
PAGE_REPORTING_DEFAULT_ORDER is now set to zero. This means, pages of order zero cannot be reported to a client/driver -- as zero is used to signal a fallback to MAX_PAGE_ORDER. Change PAGE_REPORTING_DEFAULT_ORDER to (-1), so that zero can be used as a valid order with which pages can be reported. Signed-off-by: Yuvr...
{ "author": "Yuvraj Sakshith <yuvraj.sakshith@oss.qualcomm.com>", "date": "Fri, 27 Feb 2026 06:06:55 -0800", "is_openbsd": false, "thread_id": "20260227140655.360696-1-yuvraj.sakshith@oss.qualcomm.com.mbox.gz" }
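The sentinel change in the last patch of the series (moving the "use the default" marker from 0 to -1 so that order 0 becomes a valid, explicitly requested reporting order) can be sketched standalone. Only the names PAGE_REPORTING_DEFAULT_ORDER and MAX_PAGE_ORDER come from the series; resolve_reporting_order() and the constant values are illustrative.

```c
/* With the old sentinel of 0, a driver could never genuinely request
 * order-0 reporting: 0 always meant "fall back to MAX_PAGE_ORDER".
 * Moving the sentinel to -1 frees up 0 as a real order. */
#define MAX_PAGE_ORDER 10
#define PAGE_REPORTING_DEFAULT_ORDER (-1)

/* Model of what registration does with a driver-supplied order. */
static int resolve_reporting_order(int requested)
{
	if (requested == PAGE_REPORTING_DEFAULT_ORDER)
		return MAX_PAGE_ORDER;	/* driver asked for the default */
	return requested;		/* including a genuine order 0 */
}
```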
lkml_critique
linux-mm
In the trylock path of refill_obj_stock(), mod_objcg_mlstate() should use the real alloc/free bytes (i.e., nr_acct) for accounting, rather than nr_bytes. Fixes: 200577f69f29 ("memcg: objcg stock trylock without irq disabling") Cc: stable@vger.kernel.org Signed-off-by: Hao Li <hao.li@linux.dev> --- mm/memcontrol.c | 2...
null
null
null
[PATCH] memcg: fix slab accounting in refill_obj_stock() trylock path
On Thu, Feb 26, 2026 at 07:51:37PM +0800, Hao Li wrote: Thanks for the fix. Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
{ "author": "Shakeel Butt <shakeel.butt@linux.dev>", "date": "Thu, 26 Feb 2026 05:39:00 -0800", "is_openbsd": false, "thread_id": "aaGTVWumz4jYEx9L@cmpxchg.org.mbox.gz" }
lkml_critique
linux-mm
In the trylock path of refill_obj_stock(), mod_objcg_mlstate() should use the real alloc/free bytes (i.e., nr_acct) for accounting, rather than nr_bytes. Fixes: 200577f69f29 ("memcg: objcg stock trylock without irq disabling") Cc: stable@vger.kernel.org Signed-off-by: Hao Li <hao.li@linux.dev> --- mm/memcontrol.c | 2...
null
null
null
[PATCH] memcg: fix slab accounting in refill_obj_stock() trylock path
On Thu, Feb 26, 2026 at 02:44:02PM +0100, Vlastimil Babka wrote: The user-visible impact is that the NR_SLAB_RECLAIMABLE_B and NR_SLAB_UNRECLAIMABLE_B stats can end up being incorrect. For example, if a user allocates a 6144-byte object, then before this fix refill_obj_stock() calls mod_objcg_mlstate(..., nr_bytes=20...
{ "author": "Hao Li <hao.li@linux.dev>", "date": "Fri, 27 Feb 2026 09:01:27 +0800", "is_openbsd": false, "thread_id": "aaGTVWumz4jYEx9L@cmpxchg.org.mbox.gz" }
lkml_critique
linux-mm
In the trylock path of refill_obj_stock(), mod_objcg_mlstate() should use the real alloc/free bytes (i.e., nr_acct) for accounting, rather than nr_bytes. Fixes: 200577f69f29 ("memcg: objcg stock trylock without irq disabling") Cc: stable@vger.kernel.org Signed-off-by: Hao Li <hao.li@linux.dev> --- mm/memcontrol.c | 2...
null
null
null
[PATCH] memcg: fix slab accounting in refill_obj_stock() trylock path
On 2/27/26 02:01, Hao Li wrote: Thanks, I'm sure Andrew will amend the changelog with those useful details. Weird that we went since 6.16 with nobody noticing the stats were off - it sounds like they could get really far off?
{ "author": "Vlastimil Babka <vbabka@suse.com>", "date": "Fri, 27 Feb 2026 08:46:18 +0100", "is_openbsd": false, "thread_id": "aaGTVWumz4jYEx9L@cmpxchg.org.mbox.gz" }
lkml_critique
linux-mm
In the trylock path of refill_obj_stock(), mod_objcg_mlstate() should use the real alloc/free bytes (i.e., nr_acct) for accounting, rather than nr_bytes. Fixes: 200577f69f29 ("memcg: objcg stock trylock without irq disabling") Cc: stable@vger.kernel.org Signed-off-by: Hao Li <hao.li@linux.dev> --- mm/memcontrol.c | 2...
null
null
null
[PATCH] memcg: fix slab accounting in refill_obj_stock() trylock path
On Fri, Feb 27, 2026 at 08:46:18AM +0100, Vlastimil Babka wrote: Got it. Thanks. Indeed, it does seem a bit unbelievable. I suspect the conditions required for this issue to occur are quite strict: a process context must first hold the obj_stock.lock, then get interrupted by an IRQ, and the IRQ path must also reach refill_obj...
{ "author": "Hao Li <hao.li@linux.dev>", "date": "Fri, 27 Feb 2026 16:37:16 +0800", "is_openbsd": false, "thread_id": "aaGTVWumz4jYEx9L@cmpxchg.org.mbox.gz" }
lkml_critique
linux-mm
In the trylock path of refill_obj_stock(), mod_objcg_mlstate() should use the real alloc/free bytes (i.e., nr_acct) for accounting, rather than nr_bytes. Fixes: 200577f69f29 ("memcg: objcg stock trylock without irq disabling") Cc: stable@vger.kernel.org Signed-off-by: Hao Li <hao.li@linux.dev> --- mm/memcontrol.c | 2...
null
null
null
[PATCH] memcg: fix slab accounting in refill_obj_stock() trylock path
On Thu, Feb 26, 2026 at 07:51:37PM +0800, Hao Li wrote: Oops. Yes, I suppose the contended case is quite rare (this is CPU local), so I'm not surprised this went unnoticed for so long. Acked-by: Johannes Weiner <hannes@cmpxchg.org>
{ "author": "Johannes Weiner <hannes@cmpxchg.org>", "date": "Fri, 27 Feb 2026 07:51:33 -0500", "is_openbsd": false, "thread_id": "aaGTVWumz4jYEx9L@cmpxchg.org.mbox.gz" }
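The accounting fix acked in this thread boils down to feeding the vmstat update the bytes actually uncharged (nr_acct) rather than the caller's full nr_bytes. A toy model of that distinction follows; the helpers (account_free_model, the global counter, the split between stock and stats) are invented to mirror only the shape of refill_obj_stock(), not its real per-cpu logic, and the 6144/4096 figures echo the example Hao Li gives above.

```c
/* Hypothetical stand-in for NR_SLAB_*_B vmstat counters. */
static long vmstat_slab_bytes;

static void mod_objcg_mlstate_model(long delta)
{
	vmstat_slab_bytes += delta;
}

/* Model of freeing nr_bytes of an object when 'absorbed_by_stock'
 * bytes are soaked up by the per-cpu stock: only the remainder
 * (nr_acct) should hit the stats. The bug being fixed was passing
 * -nr_bytes here instead of -nr_acct in the trylock path. */
static void account_free_model(long nr_bytes, long absorbed_by_stock)
{
	long nr_acct = nr_bytes - absorbed_by_stock;

	mod_objcg_mlstate_model(-nr_acct);
}
```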
lkml_critique
linux-mm
.:: What? Why? This series adds support for efficiently allocating pages that are not present in the direct map. This is instrumental to two different immediate goals: 1. This supports the effort to remove guest_memfd memory from the direct map [0]. One of the challenges faced in that effort has been efficientl...
null
null
null
[PATCH RFC 00/19] mm: Add __GFP_UNMAPPED
This code will be needed elsewhere in a following patch. Split out the trivial code move for easy review. This changes the logging slightly: instead of panic() directly reporting the level of the failure, there is now a generic panic message which will be preceded by a separate warn that reports the level of the failu...
{ "author": "Brendan Jackman <jackmanb@google.com>", "date": "Wed, 25 Feb 2026 16:34:26 +0000", "is_openbsd": false, "thread_id": "DGPP0DOFCGBQ.PZZ9GMPEEPG6@google.com.mbox.gz" }
lkml_critique
linux-mm
.:: What? Why? This series adds support for efficiently allocating pages that are not present in the direct map. This is instrumental to two different immediate goals: 1. This supports the effort to remove guest_memfd memory from the direct map [0]. One of the challenges faced in that effort has been efficientl...
null
null
null
[PATCH RFC 00/19] mm: Add __GFP_UNMAPPED
Various security features benefit from having process-local address mappings. Examples include no-direct-map guest_memfd [2] significant optimizations for ASI [1]. As pointed out by Andy in [0], x86 already has a PGD entry that is local to the mm, which is used for the LDT. So, simply redefine that entry's region as ...
{ "author": "Brendan Jackman <jackmanb@google.com>", "date": "Wed, 25 Feb 2026 16:34:27 +0000", "is_openbsd": false, "thread_id": "DGPP0DOFCGBQ.PZZ9GMPEEPG6@google.com.mbox.gz" }
lkml_critique
linux-mm
.:: What? Why? This series adds support for efficiently allocating pages that are not present in the direct map. This is instrumental to two different immediate goals: 1. This supports the effort to remove guest_memfd memory from the direct map [0]. One of the challenges faced in that effort has been efficientl...
null
null
null
[PATCH RFC 00/19] mm: Add __GFP_UNMAPPED
In commit bfe3d8f6313d ("x86/tlb: Restrict access to tlbstate") some low-level logic (the important detail here is flush_tlb_info) was hidden from modules, along with functions associated with that data. Later, the set of functions defined here changed and there are now a bunch of flush_tlb_*() functions that do not d...
{ "author": "Brendan Jackman <jackmanb@google.com>", "date": "Wed, 25 Feb 2026 16:34:28 +0000", "is_openbsd": false, "thread_id": "DGPP0DOFCGBQ.PZZ9GMPEEPG6@google.com.mbox.gz" }
lkml_critique
linux-mm
.:: What? Why? This series adds support for efficiently allocating pages that are not present in the direct map. This is instrumental to two different immediate goals: 1. This supports the effort to remove guest_memfd memory from the direct map [0]. One of the challenges faced in that effort has been efficientl...
null
null
null
[PATCH RFC 00/19] mm: Add __GFP_UNMAPPED
The mermap provides a fast way to create ephemeral mm-local mappings of physical pages. The purpose of this is to access pages that have been removed from the direct map. Potential use cases are: 1. For zeroing __GFP_UNMAPPED pages (added in a later patch). 2. For populating guest_memfd pages that are protected by th...
{ "author": "Brendan Jackman <jackmanb@google.com>", "date": "Wed, 25 Feb 2026 16:34:29 +0000", "is_openbsd": false, "thread_id": "DGPP0DOFCGBQ.PZZ9GMPEEPG6@google.com.mbox.gz" }
lkml_critique
linux-mm
.:: What? Why? This series adds support for efficiently allocating pages that are not present in the direct map. This is instrumental to two different immediate goals: 1. This supports the effort to remove guest_memfd memory from the direct map [0]. One of the challenges faced in that effort has been efficientl...
null
null
null
[PATCH RFC 00/19] mm: Add __GFP_UNMAPPED
Some simple smoke-tests for the mermap. Mainly aiming to test: 1. That there aren't any silly off-by-ones. 2. That the pagetables are not completely broken. 3. That the TLB appears to get flushed basically when expected. This last point requires a bit of ifdeffery to detect when the flushing has been performed. Si...
{ "author": "Brendan Jackman <jackmanb@google.com>", "date": "Wed, 25 Feb 2026 16:34:30 +0000", "is_openbsd": false, "thread_id": "DGPP0DOFCGBQ.PZZ9GMPEEPG6@google.com.mbox.gz" }
lkml_critique
linux-mm
.:: What? Why? This series adds support for efficiently allocating pages that are not present in the direct map. This is instrumental to two different immediate goals: 1. This supports the effort to remove guest_memfd memory from the direct map [0]. One of the challenges faced in that effort has been efficientl...
null
null
null
[PATCH RFC 00/19] mm: Add __GFP_UNMAPPED
Later patches will rearrange the free areas, but there are a couple of places that iterate over them with the assumption that they have the current structure. Ideally, code outside of mm should not be directly aware of struct free_area in the first place, but that awareness seems relatively harmless so just m...
{ "author": "Brendan Jackman <jackmanb@google.com>", "date": "Wed, 25 Feb 2026 16:34:31 +0000", "is_openbsd": false, "thread_id": "DGPP0DOFCGBQ.PZZ9GMPEEPG6@google.com.mbox.gz" }
lkml_critique
linux-mm
.:: What? Why? This series adds support for efficiently allocating pages that are not present in the direct map. This is instrumental to two different immediate goals: 1. This supports the effort to remove guest_memfd memory from the direct map [0]. One of the challenges faced in that effort has been efficientl...
null
null
null
[PATCH RFC 00/19] mm: Add __GFP_UNMAPPED
This function currently returns a signed integer that encodes status in-band, as negative numbers, along with a migratetype. This function is about to be updated to a mode where this in-band signaling no longer makes sense. Therefore, switch to a more explicit/verbose style that encodes the status and migratetype sepa...
{ "author": "Brendan Jackman <jackmanb@google.com>", "date": "Wed, 25 Feb 2026 16:34:32 +0000", "is_openbsd": false, "thread_id": "DGPP0DOFCGBQ.PZZ9GMPEEPG6@google.com.mbox.gz" }
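The in-band-to-explicit refactor described above (a signed return mixing negative error codes with a migratetype becomes a separate status plus migratetype) can be sketched as follows. The enum, struct, and find_fallback_model() are invented for illustration and do not match the series' actual identifiers.

```c
/* Before: a single int return, where negative values were status
 * codes and non-negative values were a migratetype. After: the two
 * concerns are carried separately, so a new status cannot collide
 * with a valid migratetype value. */
enum steal_status { STEAL_OK, STEAL_FAIL };

struct steal_result {
	enum steal_status status;
	int migratetype;	/* valid only when status == STEAL_OK */
};

static struct steal_result find_fallback_model(int candidate)
{
	struct steal_result r = { STEAL_FAIL, -1 };

	if (candidate >= 0) {
		r.status = STEAL_OK;
		r.migratetype = candidate;
	}
	return r;
}
```

The benefit is exactly the one the changelog cites: once more status values are needed, nothing has to be squeezed into the negative range of an int that is also a migratetype.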
lkml_critique
linux-mm
.:: What? Why? This series adds support for efficiently allocating pages that are not present in the direct map. This is instrumental to two different immediate goals: 1. This supports the effort to remove guest_memfd memory from the direct map [0]. One of the challenges faced in that effort has been efficientl...
null
null
null
[PATCH RFC 00/19] mm: Add __GFP_UNMAPPED
Since migratetype are a sub-element of freetype, move the pure definitions into the new freetype.h. This will enable referring to these raw types from pageblock-flags.h. Signed-off-by: Brendan Jackman <jackmanb@google.com> --- include/linux/freetype.h | 84 ++++++++++++++++++++++++++++++++++++++++++++++++ include/li...
{ "author": "Brendan Jackman <jackmanb@google.com>", "date": "Wed, 25 Feb 2026 16:34:34 +0000", "is_openbsd": false, "thread_id": "DGPP0DOFCGBQ.PZZ9GMPEEPG6@google.com.mbox.gz" }
lkml_critique
linux-mm
.:: What? Why? This series adds support for efficiently allocating pages that are not present in the direct map. This is instrumental to two different immediate goals: 1. This supports the effort to remove guest_memfd memory from the direct map [0]. One of the challenges faced in that effort has been efficientl...
null
null
null
[PATCH RFC 00/19] mm: Add __GFP_UNMAPPED
This is preparation for teaching the page allocator to break up free pages according to properties that have nothing to do with mobility. For example it can be used to allocate pages that are non-present in the physmap, or pages that are sensitive in ASI. For these usecases, certain allocator behaviours are desirable:...
{ "author": "Brendan Jackman <jackmanb@google.com>", "date": "Wed, 25 Feb 2026 16:34:33 +0000", "is_openbsd": false, "thread_id": "DGPP0DOFCGBQ.PZZ9GMPEEPG6@google.com.mbox.gz" }
lkml_critique
linux-mm
.:: What? Why? This series adds support for efficiently allocating pages that are not present in the direct map. This is instrumental to two different immediate goals: 1. This supports the effort to remove guest_memfd memory from the direct map [0]. One of the challenges faced in that effort has been efficientl...
null
null
null
[PATCH RFC 00/19] mm: Add __GFP_UNMAPPED
Create __GFP_UNMAPPED, which requests pages that are not present in the direct map. Since this feature has a cost (e.g. more freelists), it's behind a kconfig. Unlike other conditionally-defined GFP flags, it doesn't fall back to being 0. This prevents building code that uses __GFP_UNMAPPED but doesn't depend on the ne...
{ "author": "Brendan Jackman <jackmanb@google.com>", "date": "Wed, 25 Feb 2026 16:34:35 +0000", "is_openbsd": false, "thread_id": "DGPP0DOFCGBQ.PZZ9GMPEEPG6@google.com.mbox.gz" }
lkml_critique
linux-mm
.:: What? Why? This series adds support for efficiently allocating pages that are not present in the direct map. This is instrumental to two different immediate goals: 1. This supports the effort to remove guest_memfd memory from the direct map [0]. One of the challenges faced in that effort has been efficientl...
null
null
null
[PATCH RFC 00/19] mm: Add __GFP_UNMAPPED
A later patch will complicate the definition of these masks, this is a preparatory patch to make that patch easier to review. - More masks will be needed, so add a PAGEBLOCK_ prefix to the names to avoid polluting the "global namespace" too much. - This makes MIGRATETYPE_AND_ISO_MASK start to look pretty long. Well...
{ "author": "Brendan Jackman <jackmanb@google.com>", "date": "Wed, 25 Feb 2026 16:34:36 +0000", "is_openbsd": false, "thread_id": "DGPP0DOFCGBQ.PZZ9GMPEEPG6@google.com.mbox.gz" }
lkml_critique
linux-mm
.:: What? Why? This series adds support for efficiently allocating pages that are not present in the direct map. This is instrumental to two different immediate goals: 1. This supports the effort to remove guest_memfd memory from the direct map [0]. One of the challenges faced in that effort has been efficientl...
null
null
null
[PATCH RFC 00/19] mm: Add __GFP_UNMAPPED
In preparation for implementing allocation from FREETYPE_UNMAPPED lists. Since it works nicely with the existing allocator logic, and also offers a simple way to amortize TLB flushing costs, __GFP_UNMAPPED will be implemented by changing mappings at pageblock granularity. Therefore, encode the mapping state in the pag...
{ "author": "Brendan Jackman <jackmanb@google.com>", "date": "Wed, 25 Feb 2026 16:34:37 +0000", "is_openbsd": false, "thread_id": "DGPP0DOFCGBQ.PZZ9GMPEEPG6@google.com.mbox.gz" }
lkml_critique
linux-mm
.:: What? Why? This series adds support for efficiently allocating pages that are not present in the direct map. This is instrumental to two different immediate goals: 1. This supports the effort to remove guest_memfd memory from the direct map [0]. One of the challenges faced in that effort has been efficientl...
null
null
null
[PATCH RFC 00/19] mm: Add __GFP_UNMAPPED
The ifdefs are not technically needed here, everything used here is always defined. They aren't doing much harm right now but a following patch will complicate these functions. Switching to IS_ENABLED() makes the code a bit less tiresome to read. Signed-off-by: Brendan Jackman <jackmanb@google.com> --- mm/page_alloc...
{ "author": "Brendan Jackman <jackmanb@google.com>", "date": "Wed, 25 Feb 2026 16:34:38 +0000", "is_openbsd": false, "thread_id": "DGPP0DOFCGBQ.PZZ9GMPEEPG6@google.com.mbox.gz" }
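The IS_ENABLED() idiom the patch above switches to can be shown standalone. The macro chain below follows the pattern of the kernel's include/linux/kconfig.h, simplified to the __is_defined() core (the real IS_ENABLED() also handles =m symbols); CONFIG_DEMO_FEATURE is a made-up symbol.

```c
/* Config symbols are #defined to 1 when enabled. The placeholder
 * trick turns "is this defined to 1?" into a compile-time 0 or 1
 * usable in ordinary C conditions, so dead branches compile away
 * without #ifdef blocks cluttering the function body. */
#define CONFIG_DEMO_FEATURE 1

#define __ARG_PLACEHOLDER_1 0,
#define __take_second_arg(__ignored, val, ...) val
#define __is_defined(x) ___is_defined(x)
#define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val)
#define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)
#define IS_ENABLED(option) __is_defined(option)

static int feature_cost(int base)
{
	/* Reads as normal code; the branch is constant-folded. */
	if (IS_ENABLED(CONFIG_DEMO_FEATURE))
		return base * 2;
	return base;
}
```

This is why the patch calls the result "less tiresome to read": the conditional logic stays inside the function as plain C rather than fragmenting it with preprocessor blocks.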
lkml_critique
linux-mm
.:: What? Why? This series adds support for efficiently allocating pages that are not present in the direct map. This is instrumental to two different immediate goals: 1. This supports the effort to remove guest_memfd memory from the direct map [0]. One of the challenges faced in that effort has been efficientl...
null
null
null
[PATCH RFC 00/19] mm: Add __GFP_UNMAPPED
The normal freelists are already separated by this flag, so now update the pcplists accordingly. This follows the most "obvious" design where __GFP_UNMAPPED is supported at arbitrary orders. If necessary, it would be possible to avoid the proliferation of pcplists by restricting orders that can be allocated from them ...
{ "author": "Brendan Jackman <jackmanb@google.com>", "date": "Wed, 25 Feb 2026 16:34:39 +0000", "is_openbsd": false, "thread_id": "DGPP0DOFCGBQ.PZZ9GMPEEPG6@google.com.mbox.gz" }
lkml_critique
linux-mm
.:: What? Why? This series adds support for efficiently allocating pages that are not present in the direct map. This is instrumental to two different immediate goals: 1. This supports the effort to remove guest_memfd memory from the direct map [0]. One of the challenges faced in that effort has been efficientl...
null
null
null
[PATCH RFC 00/19] mm: Add __GFP_UNMAPPED
Commit 1ebbb21811b7 ("mm/page_alloc: explicitly define how __GFP_HIGH non-blocking allocations accesses reserves") renamed ALLOC_HARDER to ALLOC_NON_BLOCK because the former is "a vague description". However, vagueness is accurate here: this is a vague flag. It is not set for __GFP_NOMEMALLOC. It doesn't really mean "...
{ "author": "Brendan Jackman <jackmanb@google.com>", "date": "Wed, 25 Feb 2026 16:34:40 +0000", "is_openbsd": false, "thread_id": "DGPP0DOFCGBQ.PZZ9GMPEEPG6@google.com.mbox.gz" }
lkml_critique
linux-mm
.:: What? Why? This series adds support for efficiently allocating pages that are not present in the direct map. This is instrumental to two different immediate goals: 1. This supports the effort to remove guest_memfd memory from the direct map [0]. One of the challenges faced in that effort has been efficientl...
null
null
null
[PATCH RFC 00/19] mm: Add __GFP_UNMAPPED
This flag is set unless we can be sure the caller isn't in an atomic context. The allocator will soon start needing to call set_direct_map_* APIs which cannot be called with IRQs off. It will need to do this even before direct reclaim is possible. Despite the fact that, in principle, ALLOC_NOBLOCK is distinct from __...
{ "author": "Brendan Jackman <jackmanb@google.com>", "date": "Wed, 25 Feb 2026 16:34:41 +0000", "is_openbsd": false, "thread_id": "DGPP0DOFCGBQ.PZZ9GMPEEPG6@google.com.mbox.gz" }
lkml_critique
linux-mm
.:: What? Why? This series adds support for efficiently allocating pages that are not present in the direct map. This is instrumental to two different immediate goals: 1. This supports the effort to remove guest_memfd memory from the direct map [0]. One of the challenges faced in that effort has been efficientl...
null
null
null
[PATCH RFC 00/19] mm: Add __GFP_UNMAPPED
Currently __GFP_UNMAPPED allocs will always fail because, although the lists exist to hold them, there is no way to actually create an unmapped page block. This commit adds one, and also the logic to map it back again when that's needed. Doing this at pageblock granularity ensures that the pageblock flags can be used ...
{ "author": "Brendan Jackman <jackmanb@google.com>", "date": "Wed, 25 Feb 2026 16:34:42 +0000", "is_openbsd": false, "thread_id": "DGPP0DOFCGBQ.PZZ9GMPEEPG6@google.com.mbox.gz" }
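The pageblock-granularity bookkeeping described above can be sketched with a toy model: unmapping flips one per-block flag covering every page in the block, so per-page state is avoided and a later remap is the same single flip (plus, in the real series, the actual direct-map change and TLB flush). PAGES_PER_BLOCK, the flag array, and the model_* helpers are all hypothetical; the kernel tracks this state in pageblock flags.

```c
/* Toy direct-map state, one flag per pageblock. */
#define PAGES_PER_BLOCK 512
#define NBLOCKS 8

static unsigned char block_unmapped[NBLOCKS]; /* 1 = absent from direct map */

static void model_unmap_pageblock(unsigned long pfn)
{
	block_unmapped[pfn / PAGES_PER_BLOCK] = 1;
}

static void model_map_pageblock(unsigned long pfn)
{
	block_unmapped[pfn / PAGES_PER_BLOCK] = 0;
}

/* Any pfn inside an unmapped block shares the block's state. */
static int model_pfn_is_unmapped(unsigned long pfn)
{
	return block_unmapped[pfn / PAGES_PER_BLOCK];
}
```

Working at this granularity is also how the series amortizes TLB flushing: one shootdown covers a whole block's worth of pages rather than one per allocation.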
lkml_critique
linux-mm
.:: What? Why? This series adds support for efficiently allocating pages that are not present in the direct map. This is instrumental to two different immediate goals: 1. This supports the effort to remove guest_memfd memory from the direct map [0]. One of the challenges faced in that effort has been efficientl...
null
null
null
[PATCH RFC 00/19] mm: Add __GFP_UNMAPPED
The pages being zeroed here are unmapped, so they can't be zeroed via the direct map. Temporarily mapping them in the direct map is not possible because: - In general this requires allocating pagetables, - Unmapping them would require a TLB shootdown, which can't be done in general from the allocator (x86 requires ...
{ "author": "Brendan Jackman <jackmanb@google.com>", "date": "Wed, 25 Feb 2026 16:34:43 +0000", "is_openbsd": false, "thread_id": "DGPP0DOFCGBQ.PZZ9GMPEEPG6@google.com.mbox.gz" }