source
large_stringclasses
2 values
subject
large_stringclasses
112 values
code
large_stringclasses
112 values
critique
large_stringlengths
61
3.04M
metadata
dict
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
SHA-1 is considered deprecated and insecure due to vulnerabilities that can lead to hash collisions. Most distributions have already been using SHA-2 for module signing because of this. The default was also changed last year from SHA-1 to SHA-512 in f3b93547b91a ("module: sign with sha512 instead of sha1 by default"). This was not reported to cause any issues. Therefore, it now seems to be a good time to remove SHA-1 support for module signing. Looking at the configs of several distributions [1], it seems only Android still uses SHA-1 for module signing. @Sami, it this correct and is there a specific reason for using SHA-1? Note: The second patch has a minor conflict with the sign-file update in the series "lib/crypto: Add ML-DSA signing" [2]. [1] https://oracle.github.io/kconfigs/?config=UTS_RELEASE&config=MODULE_SIG_SHA1&version=be8f5f6abf0b0979be20ee8d9afa2a49a13500b8 [2] https://lore.kernel.org/linux-crypto/61637.1762509938@warthog.procyon.org.uk/ Petr Pavlu (2): module: Remove SHA-1 support for module signing sign-file: Remove support for signing with PKCS#7 kernel/module/Kconfig | 5 ---- scripts/sign-file.c | 66 ++----------------------------------------- 2 files changed, 3 insertions(+), 68 deletions(-) base-commit: 4427259cc7f7571a157fbc9b5011e1ef6fe0a4a8 -- 2.51.1
The PKCS#7 code in sign-file allows for signing only with SHA-1. Since SHA-1 support for module signing has been removed, drop PKCS#7 support in favor of using only CMS. The use of the PKCS#7 code is selected by the following: #if defined(LIBRESSL_VERSION_NUMBER) || \ OPENSSL_VERSION_NUMBER < 0x10000000L || \ defined(OPENSSL_NO_CMS) #define USE_PKCS7 #endif Looking at the individual ifdefs: * LIBRESSL_VERSION_NUMBER: LibreSSL added the CMS implementation from OpenSSL in 3.1.0, making the ifdef no longer relevant. This version was released on April 8, 2020. * OPENSSL_VERSION_NUMBER < 0x10000000L: OpenSSL 1.0.0 was released on March 29, 2010. Supporting earlier versions should no longer be necessary. The file Documentation/process/changes.rst already states that at least version 1.0.0 is required to build the kernel. * OPENSSL_NO_CMS: OpenSSL can be configured with "no-cms" to disable the CMS support. In this case, sign-file will no longer be usable. The CMS support is now required. In practice, since distributions now typically sign modules with SHA-2, for which sign-file already required CMS support, removing PKCS#7 shouldn't cause any issues. Signed-off-by: Petr Pavlu <petr.pavlu@suse.com> --- scripts/sign-file.c | 66 +++------------------------------------------ 1 file changed, 3 insertions(+), 63 deletions(-) diff --git a/scripts/sign-file.c b/scripts/sign-file.c index 7070245edfc1..16f2bf2e1e3c 100644 --- a/scripts/sign-file.c +++ b/scripts/sign-file.c @@ -24,6 +24,7 @@ #include <arpa/inet.h> #include <openssl/opensslv.h> #include <openssl/bio.h> +#include <openssl/cms.h> #include <openssl/evp.h> #include <openssl/pem.h> #include <openssl/err.h> @@ -39,29 +40,6 @@ #endif #include "ssl-common.h" -/* - * Use CMS if we have openssl-1.0.0 or newer available - otherwise we have to - * assume that it's not available and its header file is missing and that we - * should use PKCS#7 instead. Switching to the older PKCS#7 format restricts - * the options we have on specifying the X.509 certificate we want. - * - * Further, older versions of OpenSSL don't support manually adding signers to - * the PKCS#7 message so have to accept that we get a certificate included in - * the signature message. Nor do such older versions of OpenSSL support - * signing with anything other than SHA1 - so we're stuck with that if such is - * the case. - */ -#if defined(LIBRESSL_VERSION_NUMBER) || \ - OPENSSL_VERSION_NUMBER < 0x10000000L || \ - defined(OPENSSL_NO_CMS) -#define USE_PKCS7 -#endif -#ifndef USE_PKCS7 -#include <openssl/cms.h> -#else -#include <openssl/pkcs7.h> -#endif - struct module_signature { uint8_t algo; /* Public-key crypto algorithm [0] */ uint8_t hash; /* Digest algorithm [0] */ @@ -228,15 +206,10 @@ int main(int argc, char **argv) bool raw_sig = false; unsigned char buf[4096]; unsigned long module_size, sig_size; - unsigned int use_signed_attrs; const EVP_MD *digest_algo; EVP_PKEY *private_key; -#ifndef USE_PKCS7 CMS_ContentInfo *cms = NULL; unsigned int use_keyid = 0; -#else - PKCS7 *pkcs7 = NULL; -#endif X509 *x509; BIO *bd, *bm; int opt, n; @@ -246,21 +219,13 @@ int main(int argc, char **argv) key_pass = getenv("KBUILD_SIGN_PIN"); -#ifndef USE_PKCS7 - use_signed_attrs = CMS_NOATTR; -#else - use_signed_attrs = PKCS7_NOATTR; -#endif - do { opt = getopt(argc, argv, "sdpk"); switch (opt) { case 's': raw_sig = true; break; case 'p': save_sig = true; break; case 'd': sign_only = true; save_sig = true; break; -#ifndef USE_PKCS7 case 'k': use_keyid = CMS_USE_KEYID; break; -#endif case -1: break; default: format(); } @@ -289,14 +254,6 @@ int main(int argc, char **argv) replace_orig = true; } -#ifdef USE_PKCS7 - if (strcmp(hash_algo, "sha1") != 0) { - fprintf(stderr, "sign-file: %s only supports SHA1 signing\n", - OPENSSL_VERSION_TEXT); - exit(3); - } -#endif - /* Open the module file */ bm = BIO_new_file(module_name, "rb"); ERR(!bm, "%s", module_name); @@ -314,7 +271,6 @@ int main(int argc, char **argv) digest_algo = EVP_get_digestbyname(hash_algo); ERR(!digest_algo, "EVP_get_digestbyname"); -#ifndef USE_PKCS7 /* Load the signature message from the digest buffer. */ cms = CMS_sign(NULL, NULL, NULL, NULL, CMS_NOCERTS | CMS_PARTIAL | CMS_BINARY | @@ -323,19 +279,12 @@ int main(int argc, char **argv) ERR(!CMS_add1_signer(cms, x509, private_key, digest_algo, CMS_NOCERTS | CMS_BINARY | - CMS_NOSMIMECAP | use_keyid | - use_signed_attrs), + CMS_NOSMIMECAP | CMS_NOATTR | + use_keyid), "CMS_add1_signer"); ERR(CMS_final(cms, bm, NULL, CMS_NOCERTS | CMS_BINARY) != 1, "CMS_final"); -#else - pkcs7 = PKCS7_sign(x509, private_key, NULL, bm, - PKCS7_NOCERTS | PKCS7_BINARY | - PKCS7_DETACHED | use_signed_attrs); - ERR(!pkcs7, "PKCS7_sign"); -#endif - if (save_sig) { char *sig_file_name; BIO *b; @@ -344,13 +293,8 @@ int main(int argc, char **argv) "asprintf"); b = BIO_new_file(sig_file_name, "wb"); ERR(!b, "%s", sig_file_name); -#ifndef USE_PKCS7 ERR(i2d_CMS_bio_stream(b, cms, NULL, 0) != 1, "%s", sig_file_name); -#else - ERR(i2d_PKCS7_bio(b, pkcs7) != 1, - "%s", sig_file_name); -#endif BIO_free(b); } @@ -377,11 +321,7 @@ int main(int argc, char **argv) module_size = BIO_number_written(bd); if (!raw_sig) { -#ifndef USE_PKCS7 ERR(i2d_CMS_bio_stream(bd, cms, NULL, 0) != 1, "%s", dest_name); -#else - ERR(i2d_PKCS7_bio(bd, pkcs7) != 1, "%s", dest_name); -#endif } else { BIO *b; -- 2.51.1
{ "author": "Petr Pavlu <petr.pavlu@suse.com>", "date": "Tue, 11 Nov 2025 16:48:32 +0100", "thread_id": "20251111154923.978181-1-petr.pavlu@suse.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
SHA-1 is considered deprecated and insecure due to vulnerabilities that can lead to hash collisions. Most distributions have already been using SHA-2 for module signing because of this. The default was also changed last year from SHA-1 to SHA-512 in f3b93547b91a ("module: sign with sha512 instead of sha1 by default"). This was not reported to cause any issues. Therefore, it now seems to be a good time to remove SHA-1 support for module signing. Looking at the configs of several distributions [1], it seems only Android still uses SHA-1 for module signing. @Sami, it this correct and is there a specific reason for using SHA-1? Note: The second patch has a minor conflict with the sign-file update in the series "lib/crypto: Add ML-DSA signing" [2]. [1] https://oracle.github.io/kconfigs/?config=UTS_RELEASE&config=MODULE_SIG_SHA1&version=be8f5f6abf0b0979be20ee8d9afa2a49a13500b8 [2] https://lore.kernel.org/linux-crypto/61637.1762509938@warthog.procyon.org.uk/ Petr Pavlu (2): module: Remove SHA-1 support for module signing sign-file: Remove support for signing with PKCS#7 kernel/module/Kconfig | 5 ---- scripts/sign-file.c | 66 ++----------------------------------------- 2 files changed, 3 insertions(+), 68 deletions(-) base-commit: 4427259cc7f7571a157fbc9b5011e1ef6fe0a4a8 -- 2.51.1
Hi Petr, On Tue, Nov 11, 2025 at 7:49 AM Petr Pavlu <petr.pavlu@suse.com> wrote: It looks like GKI just uses the defaults here. Overall, Android doesn't rely on module signing for security, it's only used to differentiate between module types. Dropping SHA-1 support sounds like a good idea to me. For the series: Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Sami
{ "author": "Sami Tolvanen <samitolvanen@google.com>", "date": "Tue, 11 Nov 2025 08:22:34 -0800", "thread_id": "20251111154923.978181-1-petr.pavlu@suse.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
SHA-1 is considered deprecated and insecure due to vulnerabilities that can lead to hash collisions. Most distributions have already been using SHA-2 for module signing because of this. The default was also changed last year from SHA-1 to SHA-512 in f3b93547b91a ("module: sign with sha512 instead of sha1 by default"). This was not reported to cause any issues. Therefore, it now seems to be a good time to remove SHA-1 support for module signing. Looking at the configs of several distributions [1], it seems only Android still uses SHA-1 for module signing. @Sami, it this correct and is there a specific reason for using SHA-1? Note: The second patch has a minor conflict with the sign-file update in the series "lib/crypto: Add ML-DSA signing" [2]. [1] https://oracle.github.io/kconfigs/?config=UTS_RELEASE&config=MODULE_SIG_SHA1&version=be8f5f6abf0b0979be20ee8d9afa2a49a13500b8 [2] https://lore.kernel.org/linux-crypto/61637.1762509938@warthog.procyon.org.uk/ Petr Pavlu (2): module: Remove SHA-1 support for module signing sign-file: Remove support for signing with PKCS#7 kernel/module/Kconfig | 5 ---- scripts/sign-file.c | 66 ++----------------------------------------- 2 files changed, 3 insertions(+), 68 deletions(-) base-commit: 4427259cc7f7571a157fbc9b5011e1ef6fe0a4a8 -- 2.51.1
On Tue, 2025-11-11 at 16:48 +0100, Petr Pavlu wrote: The change log is a bit alarmist. CMS really *is* PKCS7 and most literature will refer to CMS as PKCS7. What you're really deprecating is the use of the PKCS7_sign() API which can only produce SHA-1 Signatures ... openssl is fully capable of producing any hash PKCS7 signatures using a different PKCS7_... API set but the CMS_... API is newer. The point being the module signature type is still set to PKEY_ID_PKCS7 so it doesn't square with the commit log saying "drop PKCS#7 support". What you really mean is only use the openssl CMS_... API for producing PKCS7 signatures. Regards, James
{ "author": "James Bottomley <James.Bottomley@HansenPartnership.com>", "date": "Tue, 11 Nov 2025 11:53:34 -0500", "thread_id": "20251111154923.978181-1-petr.pavlu@suse.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
SHA-1 is considered deprecated and insecure due to vulnerabilities that can lead to hash collisions. Most distributions have already been using SHA-2 for module signing because of this. The default was also changed last year from SHA-1 to SHA-512 in f3b93547b91a ("module: sign with sha512 instead of sha1 by default"). This was not reported to cause any issues. Therefore, it now seems to be a good time to remove SHA-1 support for module signing. Looking at the configs of several distributions [1], it seems only Android still uses SHA-1 for module signing. @Sami, it this correct and is there a specific reason for using SHA-1? Note: The second patch has a minor conflict with the sign-file update in the series "lib/crypto: Add ML-DSA signing" [2]. [1] https://oracle.github.io/kconfigs/?config=UTS_RELEASE&config=MODULE_SIG_SHA1&version=be8f5f6abf0b0979be20ee8d9afa2a49a13500b8 [2] https://lore.kernel.org/linux-crypto/61637.1762509938@warthog.procyon.org.uk/ Petr Pavlu (2): module: Remove SHA-1 support for module signing sign-file: Remove support for signing with PKCS#7 kernel/module/Kconfig | 5 ---- scripts/sign-file.c | 66 ++----------------------------------------- 2 files changed, 3 insertions(+), 68 deletions(-) base-commit: 4427259cc7f7571a157fbc9b5011e1ef6fe0a4a8 -- 2.51.1
On Tue, Nov 11, 2025 at 04:48:31PM +0100, Petr Pavlu wrote: Agreed. Reviewed-by: Aaron Tomlin <atomlin@atomlin.com> -- Aaron Tomlin
{ "author": "Aaron Tomlin <atomlin@atomlin.com>", "date": "Tue, 11 Nov 2025 17:37:28 -0500", "thread_id": "20251111154923.978181-1-petr.pavlu@suse.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
SHA-1 is considered deprecated and insecure due to vulnerabilities that can lead to hash collisions. Most distributions have already been using SHA-2 for module signing because of this. The default was also changed last year from SHA-1 to SHA-512 in f3b93547b91a ("module: sign with sha512 instead of sha1 by default"). This was not reported to cause any issues. Therefore, it now seems to be a good time to remove SHA-1 support for module signing. Looking at the configs of several distributions [1], it seems only Android still uses SHA-1 for module signing. @Sami, it this correct and is there a specific reason for using SHA-1? Note: The second patch has a minor conflict with the sign-file update in the series "lib/crypto: Add ML-DSA signing" [2]. [1] https://oracle.github.io/kconfigs/?config=UTS_RELEASE&config=MODULE_SIG_SHA1&version=be8f5f6abf0b0979be20ee8d9afa2a49a13500b8 [2] https://lore.kernel.org/linux-crypto/61637.1762509938@warthog.procyon.org.uk/ Petr Pavlu (2): module: Remove SHA-1 support for module signing sign-file: Remove support for signing with PKCS#7 kernel/module/Kconfig | 5 ---- scripts/sign-file.c | 66 ++----------------------------------------- 2 files changed, 3 insertions(+), 68 deletions(-) base-commit: 4427259cc7f7571a157fbc9b5011e1ef6fe0a4a8 -- 2.51.1
On 11/11/25 5:53 PM, James Bottomley wrote: Ok, I plan to update the description to the following in v2: sign-file: Use only the OpenSSL CMS API for signing The USE_PKCS7 code in sign-file utilizes PKCS7_sign(), which allows signing only with SHA-1. Since SHA-1 support for module signing has been removed, drop the use of the OpenSSL PKCS7 API by the tool in favor of using only the newer CMS API. The use of the PKCS7 API is selected by the following: #if defined(LIBRESSL_VERSION_NUMBER) || \ OPENSSL_VERSION_NUMBER < 0x10000000L || \ defined(OPENSSL_NO_CMS) #define USE_PKCS7 #endif Looking at the individual ifdefs: * LIBRESSL_VERSION_NUMBER: LibreSSL added the CMS API implementation from OpenSSL in 3.1.0, making the ifdef no longer relevant. This version was released on April 8, 2020. * OPENSSL_VERSION_NUMBER < 0x10000000L: OpenSSL 1.0.0 was released on March 29, 2010. Supporting earlier versions should no longer be necessary. The file Documentation/process/changes.rst already states that at least version 1.0.0 is required to build the kernel. * OPENSSL_NO_CMS: OpenSSL can be configured with "no-cms" to disable CMS support. In this case, sign-file will no longer be usable. The CMS API support is now required. In practice, since distributions now typically sign modules with SHA-2, for which sign-file already required CMS API support, removing the USE_PKCS7 code shouldn't cause any issues. -- Thanks, Petr
{ "author": "Petr Pavlu <petr.pavlu@suse.com>", "date": "Wed, 12 Nov 2025 14:51:24 +0100", "thread_id": "20251111154923.978181-1-petr.pavlu@suse.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
SHA-1 is considered deprecated and insecure due to vulnerabilities that can lead to hash collisions. Most distributions have already been using SHA-2 for module signing because of this. The default was also changed last year from SHA-1 to SHA-512 in f3b93547b91a ("module: sign with sha512 instead of sha1 by default"). This was not reported to cause any issues. Therefore, it now seems to be a good time to remove SHA-1 support for module signing. Looking at the configs of several distributions [1], it seems only Android still uses SHA-1 for module signing. @Sami, it this correct and is there a specific reason for using SHA-1? Note: The second patch has a minor conflict with the sign-file update in the series "lib/crypto: Add ML-DSA signing" [2]. [1] https://oracle.github.io/kconfigs/?config=UTS_RELEASE&config=MODULE_SIG_SHA1&version=be8f5f6abf0b0979be20ee8d9afa2a49a13500b8 [2] https://lore.kernel.org/linux-crypto/61637.1762509938@warthog.procyon.org.uk/ Petr Pavlu (2): module: Remove SHA-1 support for module signing sign-file: Remove support for signing with PKCS#7 kernel/module/Kconfig | 5 ---- scripts/sign-file.c | 66 ++----------------------------------------- 2 files changed, 3 insertions(+), 68 deletions(-) base-commit: 4427259cc7f7571a157fbc9b5011e1ef6fe0a4a8 -- 2.51.1
On Wed, 2025-11-12 at 14:51 +0100, Petr Pavlu wrote: Much better, thanks! Regards, James
{ "author": "James Bottomley <James.Bottomley@HansenPartnership.com>", "date": "Wed, 12 Nov 2025 10:05:57 -0500", "thread_id": "20251111154923.978181-1-petr.pavlu@suse.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
SHA-1 is considered deprecated and insecure due to vulnerabilities that can lead to hash collisions. Most distributions have already been using SHA-2 for module signing because of this. The default was also changed last year from SHA-1 to SHA-512 in f3b93547b91a ("module: sign with sha512 instead of sha1 by default"). This was not reported to cause any issues. Therefore, it now seems to be a good time to remove SHA-1 support for module signing. Looking at the configs of several distributions [1], it seems only Android still uses SHA-1 for module signing. @Sami, it this correct and is there a specific reason for using SHA-1? Note: The second patch has a minor conflict with the sign-file update in the series "lib/crypto: Add ML-DSA signing" [2]. [1] https://oracle.github.io/kconfigs/?config=UTS_RELEASE&config=MODULE_SIG_SHA1&version=be8f5f6abf0b0979be20ee8d9afa2a49a13500b8 [2] https://lore.kernel.org/linux-crypto/61637.1762509938@warthog.procyon.org.uk/ Petr Pavlu (2): module: Remove SHA-1 support for module signing sign-file: Remove support for signing with PKCS#7 kernel/module/Kconfig | 5 ---- scripts/sign-file.c | 66 ++----------------------------------------- 2 files changed, 3 insertions(+), 68 deletions(-) base-commit: 4427259cc7f7571a157fbc9b5011e1ef6fe0a4a8 -- 2.51.1
Petr Pavlu <petr.pavlu@suse.com> wrote: We're looking at moving to ML-DSA, and the CMS support there is slightly dodgy at the moment, so we need to hold off a bit on this change. Patch 1, removing the option to sign with SHA-1 from the kernel is fine, but doesn't stop things that are signed with SHA-1 from being verified. David
{ "author": "David Howells <dhowells@redhat.com>", "date": "Wed, 12 Nov 2025 15:36:57 +0000", "thread_id": "20251111154923.978181-1-petr.pavlu@suse.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
SHA-1 is considered deprecated and insecure due to vulnerabilities that can lead to hash collisions. Most distributions have already been using SHA-2 for module signing because of this. The default was also changed last year from SHA-1 to SHA-512 in f3b93547b91a ("module: sign with sha512 instead of sha1 by default"). This was not reported to cause any issues. Therefore, it now seems to be a good time to remove SHA-1 support for module signing. Looking at the configs of several distributions [1], it seems only Android still uses SHA-1 for module signing. @Sami, it this correct and is there a specific reason for using SHA-1? Note: The second patch has a minor conflict with the sign-file update in the series "lib/crypto: Add ML-DSA signing" [2]. [1] https://oracle.github.io/kconfigs/?config=UTS_RELEASE&config=MODULE_SIG_SHA1&version=be8f5f6abf0b0979be20ee8d9afa2a49a13500b8 [2] https://lore.kernel.org/linux-crypto/61637.1762509938@warthog.procyon.org.uk/ Petr Pavlu (2): module: Remove SHA-1 support for module signing sign-file: Remove support for signing with PKCS#7 kernel/module/Kconfig | 5 ---- scripts/sign-file.c | 66 ++----------------------------------------- 2 files changed, 3 insertions(+), 68 deletions(-) base-commit: 4427259cc7f7571a157fbc9b5011e1ef6fe0a4a8 -- 2.51.1
On Wed, 2025-11-12 at 15:36 +0000, David Howells wrote: How will removing PKCS7_sign, which can only do sha1 signatures affect that? Is the dodginess that the PKCS7_... API is better than CMS_... for PQS at the moment? In which case we could pretty much do a rip and replace of the CMS_ API if necessary, but that would be a completely separate patch. Regards, James
{ "author": "James Bottomley <James.Bottomley@HansenPartnership.com>", "date": "Wed, 12 Nov 2025 10:47:23 -0500", "thread_id": "20251111154923.978181-1-petr.pavlu@suse.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
SHA-1 is considered deprecated and insecure due to vulnerabilities that can lead to hash collisions. Most distributions have already been using SHA-2 for module signing because of this. The default was also changed last year from SHA-1 to SHA-512 in f3b93547b91a ("module: sign with sha512 instead of sha1 by default"). This was not reported to cause any issues. Therefore, it now seems to be a good time to remove SHA-1 support for module signing. Looking at the configs of several distributions [1], it seems only Android still uses SHA-1 for module signing. @Sami, it this correct and is there a specific reason for using SHA-1? Note: The second patch has a minor conflict with the sign-file update in the series "lib/crypto: Add ML-DSA signing" [2]. [1] https://oracle.github.io/kconfigs/?config=UTS_RELEASE&config=MODULE_SIG_SHA1&version=be8f5f6abf0b0979be20ee8d9afa2a49a13500b8 [2] https://lore.kernel.org/linux-crypto/61637.1762509938@warthog.procyon.org.uk/ Petr Pavlu (2): module: Remove SHA-1 support for module signing sign-file: Remove support for signing with PKCS#7 kernel/module/Kconfig | 5 ---- scripts/sign-file.c | 66 ++----------------------------------------- 2 files changed, 3 insertions(+), 68 deletions(-) base-commit: 4427259cc7f7571a157fbc9b5011e1ef6fe0a4a8 -- 2.51.1
James Bottomley <James.Bottomley@HansenPartnership.com> wrote: OpenSSL-3.5.1's ML-DSA support isn't completely right - in particular CMS_NOATTR is not currently supported. I believe there is a fix in the works there, but I doubt it has made it to all the distributions yet. I'm only asking that we hold off a cycle; that will probably suffice. David
{ "author": "David Howells <dhowells@redhat.com>", "date": "Wed, 12 Nov 2025 15:52:40 +0000", "thread_id": "20251111154923.978181-1-petr.pavlu@suse.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
SHA-1 is considered deprecated and insecure due to vulnerabilities that can lead to hash collisions. Most distributions have already been using SHA-2 for module signing because of this. The default was also changed last year from SHA-1 to SHA-512 in f3b93547b91a ("module: sign with sha512 instead of sha1 by default"). This was not reported to cause any issues. Therefore, it now seems to be a good time to remove SHA-1 support for module signing. Looking at the configs of several distributions [1], it seems only Android still uses SHA-1 for module signing. @Sami, it this correct and is there a specific reason for using SHA-1? Note: The second patch has a minor conflict with the sign-file update in the series "lib/crypto: Add ML-DSA signing" [2]. [1] https://oracle.github.io/kconfigs/?config=UTS_RELEASE&config=MODULE_SIG_SHA1&version=be8f5f6abf0b0979be20ee8d9afa2a49a13500b8 [2] https://lore.kernel.org/linux-crypto/61637.1762509938@warthog.procyon.org.uk/ Petr Pavlu (2): module: Remove SHA-1 support for module signing sign-file: Remove support for signing with PKCS#7 kernel/module/Kconfig | 5 ---- scripts/sign-file.c | 66 ++----------------------------------------- 2 files changed, 3 insertions(+), 68 deletions(-) base-commit: 4427259cc7f7571a157fbc9b5011e1ef6fe0a4a8 -- 2.51.1
On Wed, 2025-11-12 at 15:52 +0000, David Howells wrote: I get that PQC in openssl-3.5 is highly experimental, but that merely means we tell people not to use it for a while. However, what I don't see is how this impacts PKCS7_sign removal. The CMS API can do a sha1 signature if that's what people want and keeping the PKCS7_sign API won't prevent anyone with openssl-3.5 installed from trying a PQ signature. Right but why? Is your thought that we'll have to change the CMS_ code slightly and this might conflict? Regards, James
{ "author": "James Bottomley <James.Bottomley@HansenPartnership.com>", "date": "Wed, 12 Nov 2025 10:58:31 -0500", "thread_id": "20251111154923.978181-1-petr.pavlu@suse.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
SHA-1 is considered deprecated and insecure due to vulnerabilities that can lead to hash collisions. Most distributions have already been using SHA-2 for module signing because of this. The default was also changed last year from SHA-1 to SHA-512 in f3b93547b91a ("module: sign with sha512 instead of sha1 by default"). This was not reported to cause any issues. Therefore, it now seems to be a good time to remove SHA-1 support for module signing. Looking at the configs of several distributions [1], it seems only Android still uses SHA-1 for module signing. @Sami, it this correct and is there a specific reason for using SHA-1? Note: The second patch has a minor conflict with the sign-file update in the series "lib/crypto: Add ML-DSA signing" [2]. [1] https://oracle.github.io/kconfigs/?config=UTS_RELEASE&config=MODULE_SIG_SHA1&version=be8f5f6abf0b0979be20ee8d9afa2a49a13500b8 [2] https://lore.kernel.org/linux-crypto/61637.1762509938@warthog.procyon.org.uk/ Petr Pavlu (2): module: Remove SHA-1 support for module signing sign-file: Remove support for signing with PKCS#7 kernel/module/Kconfig | 5 ---- scripts/sign-file.c | 66 ++----------------------------------------- 2 files changed, 3 insertions(+), 68 deletions(-) base-commit: 4427259cc7f7571a157fbc9b5011e1ef6fe0a4a8 -- 2.51.1
On Tue, 11 Nov 2025 16:48:30 +0100, Petr Pavlu wrote: Applied to modules-next, thanks! [1/2] module: Remove SHA-1 support for module signing commit: 148519a06304af4e6fbb82f20e1a4480e2c1b126 [2/2] sign-file: Use only the OpenSSL CMS API for signing commit: d7afd65b4acc775df872af30948dd7c196587169 Best regards, Sami
{ "author": "Sami Tolvanen <samitolvanen@google.com>", "date": "Mon, 22 Dec 2025 20:24:17 +0000", "thread_id": "20251111154923.978181-1-petr.pavlu@suse.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
SHA-1 is considered deprecated and insecure due to vulnerabilities that can lead to hash collisions. Most distributions have already been using SHA-2 for module signing because of this. The default was also changed last year from SHA-1 to SHA-512 in f3b93547b91a ("module: sign with sha512 instead of sha1 by default"). This was not reported to cause any issues. Therefore, it now seems to be a good time to remove SHA-1 support for module signing. Looking at the configs of several distributions [1], it seems only Android still uses SHA-1 for module signing. @Sami, it this correct and is there a specific reason for using SHA-1? Note: The second patch has a minor conflict with the sign-file update in the series "lib/crypto: Add ML-DSA signing" [2]. [1] https://oracle.github.io/kconfigs/?config=UTS_RELEASE&config=MODULE_SIG_SHA1&version=be8f5f6abf0b0979be20ee8d9afa2a49a13500b8 [2] https://lore.kernel.org/linux-crypto/61637.1762509938@warthog.procyon.org.uk/ Petr Pavlu (2): module: Remove SHA-1 support for module signing sign-file: Remove support for signing with PKCS#7 kernel/module/Kconfig | 5 ---- scripts/sign-file.c | 66 ++----------------------------------------- 2 files changed, 3 insertions(+), 68 deletions(-) base-commit: 4427259cc7f7571a157fbc9b5011e1ef6fe0a4a8 -- 2.51.1
Here's an alternative patch that will allow PKCS#7 with the hash specified on the command line, removing the SHA1 restriction. David --- sign-file, pkcs7: Honour the hash parameter to sign-file Currently, the sign-file program rejects anything other than "sha1" as the hash parameter if it is going to produce a PKCS#7 message-based signature rather than a CMS message-based signature (though it then ignores this argument and uses whatever is selected as the default which might not be SHA1 and may actually reflect whatever is used to sign the X.509 certificate). Fix sign-file to actually use the specified hash when producing a PKCS#7 message rather than just accepting the default. Fixes: 283e8ba2dfde ("MODSIGN: Change from CMS to PKCS#7 signing if the openssl is too old") Signed-off-by: David Howells <dhowells@redhat.com> cc: Lukas Wunner <lukas@wunner.de> cc: Ignat Korchagin <ignat@cloudflare.com> cc: Jarkko Sakkinen <jarkko@kernel.org> cc: Stephan Mueller <smueller@chronox.de> cc: Herbert Xu <herbert@gondor.apana.org.au> cc: Eric Biggers <ebiggers@kernel.org> cc: keyrings@vger.kernel.org cc: linux-crypto@vger.kernel.org diff --git a/scripts/sign-file.c b/scripts/sign-file.c index 547b97097230..f0b7e5616b9a 100644 --- a/scripts/sign-file.c +++ b/scripts/sign-file.c @@ -56,6 +56,7 @@ defined(OPENSSL_NO_CMS) #define USE_PKCS7 #endif +#define USE_PKCS7 #ifndef USE_PKCS7 #include <openssl/cms.h> #else @@ -289,14 +290,6 @@ int main(int argc, char **argv) replace_orig = true; } -#ifdef USE_PKCS7 - if (strcmp(hash_algo, "sha1") != 0) { - fprintf(stderr, "sign-file: %s only supports SHA1 signing\n", - OPENSSL_VERSION_TEXT); - exit(3); - } -#endif - /* Open the module file */ bm = BIO_new_file(module_name, "rb"); ERR(!bm, "%s", module_name); @@ -348,10 +341,17 @@ int main(int argc, char **argv) "CMS_final"); #else - pkcs7 = PKCS7_sign(x509, private_key, NULL, bm, - PKCS7_NOCERTS | PKCS7_BINARY | - PKCS7_DETACHED | use_signed_attrs); + unsigned int flags = + PKCS7_NOCERTS | + PKCS7_BINARY | + PKCS7_DETACHED | + use_signed_attrs; + pkcs7 = PKCS7_sign(NULL, NULL, NULL, bm, flags); ERR(!pkcs7, "PKCS7_sign"); + + ERR(!PKCS7_sign_add_signer(pkcs7, x509, private_key, digest_algo, flags), + "PKS7_sign_add_signer"); + ERR(PKCS7_final(pkcs7, bm, flags) != 1, "PKCS7_final"); #endif if (save_sig) {
{ "author": "David Howells <dhowells@redhat.com>", "date": "Mon, 02 Feb 2026 11:24:22 +0000", "thread_id": "20251111154923.978181-1-petr.pavlu@suse.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
SHA-1 is considered deprecated and insecure due to vulnerabilities that can lead to hash collisions. Most distributions have already been using SHA-2 for module signing because of this. The default was also changed last year from SHA-1 to SHA-512 in f3b93547b91a ("module: sign with sha512 instead of sha1 by default"). This was not reported to cause any issues. Therefore, it now seems to be a good time to remove SHA-1 support for module signing. Looking at the configs of several distributions [1], it seems only Android still uses SHA-1 for module signing. @Sami, it this correct and is there a specific reason for using SHA-1? Note: The second patch has a minor conflict with the sign-file update in the series "lib/crypto: Add ML-DSA signing" [2]. [1] https://oracle.github.io/kconfigs/?config=UTS_RELEASE&config=MODULE_SIG_SHA1&version=be8f5f6abf0b0979be20ee8d9afa2a49a13500b8 [2] https://lore.kernel.org/linux-crypto/61637.1762509938@warthog.procyon.org.uk/ Petr Pavlu (2): module: Remove SHA-1 support for module signing sign-file: Remove support for signing with PKCS#7 kernel/module/Kconfig | 5 ---- scripts/sign-file.c | 66 ++----------------------------------------- 2 files changed, 3 insertions(+), 68 deletions(-) base-commit: 4427259cc7f7571a157fbc9b5011e1ef6fe0a4a8 -- 2.51.1
David Howells <dhowells@redhat.com> wrote: Apologies, that line was so I could debug it and should've been removed. David
{ "author": "David Howells <dhowells@redhat.com>", "date": "Mon, 02 Feb 2026 11:27:39 +0000", "thread_id": "20251111154923.978181-1-petr.pavlu@suse.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
SHA-1 is considered deprecated and insecure due to vulnerabilities that can lead to hash collisions. Most distributions have already been using SHA-2 for module signing because of this. The default was also changed last year from SHA-1 to SHA-512 in f3b93547b91a ("module: sign with sha512 instead of sha1 by default"). This was not reported to cause any issues. Therefore, it now seems to be a good time to remove SHA-1 support for module signing. Looking at the configs of several distributions [1], it seems only Android still uses SHA-1 for module signing. @Sami, it this correct and is there a specific reason for using SHA-1? Note: The second patch has a minor conflict with the sign-file update in the series "lib/crypto: Add ML-DSA signing" [2]. [1] https://oracle.github.io/kconfigs/?config=UTS_RELEASE&config=MODULE_SIG_SHA1&version=be8f5f6abf0b0979be20ee8d9afa2a49a13500b8 [2] https://lore.kernel.org/linux-crypto/61637.1762509938@warthog.procyon.org.uk/ Petr Pavlu (2): module: Remove SHA-1 support for module signing sign-file: Remove support for signing with PKCS#7 kernel/module/Kconfig | 5 ---- scripts/sign-file.c | 66 ++----------------------------------------- 2 files changed, 3 insertions(+), 68 deletions(-) base-commit: 4427259cc7f7571a157fbc9b5011e1ef6fe0a4a8 -- 2.51.1
On 2/2/26 12:24 PM, David Howells wrote: Is it worth keeping this sign-file code that uses the OpenSSL PKCS7 API instead of having only one variant that uses the newer CMS API? -- Thanks, Petr
{ "author": "Petr Pavlu <petr.pavlu@suse.com>", "date": "Mon, 2 Feb 2026 13:25:06 +0100", "thread_id": "20251111154923.978181-1-petr.pavlu@suse.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
SHA-1 is considered deprecated and insecure due to vulnerabilities that can lead to hash collisions. Most distributions have already been using SHA-2 for module signing because of this. The default was also changed last year from SHA-1 to SHA-512 in f3b93547b91a ("module: sign with sha512 instead of sha1 by default"). This was not reported to cause any issues. Therefore, it now seems to be a good time to remove SHA-1 support for module signing. Looking at the configs of several distributions [1], it seems only Android still uses SHA-1 for module signing. @Sami, it this correct and is there a specific reason for using SHA-1? Note: The second patch has a minor conflict with the sign-file update in the series "lib/crypto: Add ML-DSA signing" [2]. [1] https://oracle.github.io/kconfigs/?config=UTS_RELEASE&config=MODULE_SIG_SHA1&version=be8f5f6abf0b0979be20ee8d9afa2a49a13500b8 [2] https://lore.kernel.org/linux-crypto/61637.1762509938@warthog.procyon.org.uk/ Petr Pavlu (2): module: Remove SHA-1 support for module signing sign-file: Remove support for signing with PKCS#7 kernel/module/Kconfig | 5 ---- scripts/sign-file.c | 66 ++----------------------------------------- 2 files changed, 3 insertions(+), 68 deletions(-) base-commit: 4427259cc7f7571a157fbc9b5011e1ef6fe0a4a8 -- 2.51.1
On Mon, Feb 2, 2026 at 4:25 AM Petr Pavlu <petr.pavlu@suse.com> wrote: I agree that keeping only the CMS variant makes more sense. However, David, please let me know if you'd prefer that I drop the patch removing PKCS7 support from sign-file for now. I assumed you had no further objections since the discussion in the other sub-thread tapered off, but perhaps I misread that. Sami
{ "author": "Sami Tolvanen <samitolvanen@google.com>", "date": "Mon, 2 Feb 2026 09:01:19 -0800", "thread_id": "20251111154923.978181-1-petr.pavlu@suse.com.mbox.gz" }
lkml
[PATCH v2 0/4] Improve Hyper-V memory deposit error handling
This series extends the MSHV driver to properly handle additional memory-related error codes from the Microsoft Hypervisor by depositing memory pages when needed. Currently, when the hypervisor returns HV_STATUS_INSUFFICIENT_MEMORY during partition creation, the driver calls hv_call_deposit_pages() to provide the necessary memory. However, there are other memory-related error codes that indicate the hypervisor needs additional memory resources, but the driver does not attempt to deposit pages for these cases. This series introduces a dedicated helper function macro to identify all memory-related error codes (HV_STATUS_INSUFFICIENT_MEMORY, HV_STATUS_INSUFFICIENT_BUFFERS, HV_STATUS_INSUFFICIENT_DEVICE_DOMAINS, and HV_STATUS_INSUFFICIENT_ROOT_MEMORY) and ensures the driver attempts to deposit pages for all of them via new hv_deposit_memory() helper. With these changes, partition creation becomes more robust by handling all scenarios where the hypervisor requires additional memory deposits. v2: - Rename hv_result_oom() into hv_result_needs_memory() --- Stanislav Kinsburskii (4): mshv: Introduce hv_result_needs_memory() helper function mshv: Introduce hv_deposit_memory helper functions mshv: Handle insufficient contiguous memory hypervisor status mshv: Handle insufficient root memory hypervisor statuses drivers/hv/hv_common.c | 3 ++ drivers/hv/hv_proc.c | 54 +++++++++++++++++++++++++++++++++++--- drivers/hv/mshv_root_hv_call.c | 45 +++++++++++++------------------- drivers/hv/mshv_root_main.c | 5 +--- include/asm-generic/mshyperv.h | 13 +++++++++ include/hyperv/hvgdk_mini.h | 57 +++++++++++++++++++++------------------- include/hyperv/hvhdk_mini.h | 2 + 7 files changed, 119 insertions(+), 60 deletions(-)
Replace direct comparisons of hv_result(status) against HV_STATUS_INSUFFICIENT_MEMORY with a new hv_result_needs_memory() helper function. This improves code readability and provides a consistent and extendable interface for checking out-of-memory conditions in hypercall results. No functional changes intended. Signed-off-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com> --- drivers/hv/hv_proc.c | 14 ++++++++++++-- drivers/hv/mshv_root_hv_call.c | 20 ++++++++++---------- drivers/hv/mshv_root_main.c | 2 +- include/asm-generic/mshyperv.h | 3 +++ 4 files changed, 26 insertions(+), 13 deletions(-) diff --git a/drivers/hv/hv_proc.c b/drivers/hv/hv_proc.c index fbb4eb3901bb..e53204b9e05d 100644 --- a/drivers/hv/hv_proc.c +++ b/drivers/hv/hv_proc.c @@ -110,6 +110,16 @@ int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages) } EXPORT_SYMBOL_GPL(hv_call_deposit_pages); +bool hv_result_needs_memory(u64 status) +{ + switch (hv_result(status)) { + case HV_STATUS_INSUFFICIENT_MEMORY: + return true; + } + return false; +} +EXPORT_SYMBOL_GPL(hv_result_needs_memory); + int hv_call_add_logical_proc(int node, u32 lp_index, u32 apic_id) { struct hv_input_add_logical_processor *input; @@ -137,7 +147,7 @@ int hv_call_add_logical_proc(int node, u32 lp_index, u32 apic_id) input, output); local_irq_restore(flags); - if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) { + if (!hv_result_needs_memory(status)) { if (!hv_result_success(status)) { hv_status_err(status, "cpu %u apic ID: %u\n", lp_index, apic_id); @@ -179,7 +189,7 @@ int hv_call_create_vp(int node, u64 partition_id, u32 vp_index, u32 flags) status = hv_do_hypercall(HVCALL_CREATE_VP, input, NULL); local_irq_restore(irq_flags); - if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) { + if (!hv_result_needs_memory(status)) { if (!hv_result_success(status)) { hv_status_err(status, "vcpu: %u, lp: %u\n", vp_index, flags); diff --git a/drivers/hv/mshv_root_hv_call.c b/drivers/hv/mshv_root_hv_call.c index 598eaff4ff29..89afeeda21dd 100644 --- a/drivers/hv/mshv_root_hv_call.c +++ b/drivers/hv/mshv_root_hv_call.c @@ -115,7 +115,7 @@ int hv_call_create_partition(u64 flags, status = hv_do_hypercall(HVCALL_CREATE_PARTITION, input, output); - if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) { + if (!hv_result_needs_memory(status)) { if (hv_result_success(status)) *partition_id = output->partition_id; local_irq_restore(irq_flags); @@ -147,7 +147,7 @@ int hv_call_initialize_partition(u64 partition_id) status = hv_do_fast_hypercall8(HVCALL_INITIALIZE_PARTITION, *(u64 *)&input); - if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) { + if (!hv_result_needs_memory(status)) { ret = hv_result_to_errno(status); break; } @@ -239,7 +239,7 @@ static int hv_do_map_gpa_hcall(u64 partition_id, u64 gfn, u64 page_struct_count, completed = hv_repcomp(status); - if (hv_result(status) == HV_STATUS_INSUFFICIENT_MEMORY) { + if (hv_result_needs_memory(status)) { ret = hv_call_deposit_pages(NUMA_NO_NODE, partition_id, HV_MAP_GPA_DEPOSIT_PAGES); if (ret) @@ -455,7 +455,7 @@ int hv_call_get_vp_state(u32 vp_index, u64 partition_id, status = hv_do_hypercall(control, input, output); - if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) { + if (!hv_result_needs_memory(status)) { if (hv_result_success(status) && ret_output) memcpy(ret_output, output, sizeof(*output)); @@ -518,7 +518,7 @@ int hv_call_set_vp_state(u32 vp_index, u64 partition_id, status = hv_do_hypercall(control, input, NULL); - if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) { + if (!hv_result_needs_memory(status)) { local_irq_restore(flags); ret = hv_result_to_errno(status); break; @@ -563,7 +563,7 @@ static int hv_call_map_vp_state_page(u64 partition_id, u32 vp_index, u32 type, status = hv_do_hypercall(HVCALL_MAP_VP_STATE_PAGE, input, output); - if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) { + if (!hv_result_needs_memory(status)) { if (hv_result_success(status)) *state_page = pfn_to_page(output->map_location); local_irq_restore(flags); @@ -718,7 +718,7 @@ hv_call_create_port(u64 port_partition_id, union hv_port_id port_id, if (hv_result_success(status)) break; - if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) { + if (!hv_result_needs_memory(status)) { ret = hv_result_to_errno(status); break; } @@ -772,7 +772,7 @@ hv_call_connect_port(u64 port_partition_id, union hv_port_id port_id, if (hv_result_success(status)) break; - if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) { + if (!hv_result_needs_memory(status)) { ret = hv_result_to_errno(status); break; } @@ -843,7 +843,7 @@ static int hv_call_map_stats_page2(enum hv_stats_object_type type, if (!ret) break; - if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) { + if (!hv_result_needs_memory(status)) { hv_status_debug(status, "\n"); break; } @@ -878,7 +878,7 @@ static int hv_call_map_stats_page(enum hv_stats_object_type type, pfn = output->map_location; local_irq_restore(flags); - if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) { + if (!hv_result_needs_memory(status)) { ret = hv_result_to_errno(status); if (hv_result_success(status)) break; diff --git a/drivers/hv/mshv_root_main.c b/drivers/hv/mshv_root_main.c index 6a6bf641b352..ee30bfa6bb2e 100644 --- a/drivers/hv/mshv_root_main.c +++ b/drivers/hv/mshv_root_main.c @@ -261,7 +261,7 @@ static int mshv_ioctl_passthru_hvcall(struct mshv_partition *partition, if (hv_result_success(status)) break; - if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) + if (!hv_result_needs_memory(status)) ret = hv_result_to_errno(status); else ret = hv_call_deposit_pages(NUMA_NO_NODE, diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h index ecedab554c80..452426d5b2ab 100644 --- a/include/asm-generic/mshyperv.h +++ b/include/asm-generic/mshyperv.h @@ -342,6 +342,8 @@ static inline bool hv_parent_partition(void) { return hv_root_partition() || hv_l1vh_partition(); } + +bool hv_result_needs_memory(u64 status); int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages); int hv_call_add_logical_proc(int node, u32 lp_index, u32 acpi_id); int hv_call_create_vp(int node, u64 partition_id, u32 vp_index, u32 flags); @@ -350,6 +352,7 @@ int hv_call_create_vp(int node, u64 partition_id, u32 vp_index, u32 flags); static inline bool hv_root_partition(void) { return false; } static inline bool hv_l1vh_partition(void) { return false; } static inline bool hv_parent_partition(void) { return false; } +static inline bool hv_result_needs_memory(u64 status) { return false; } static inline int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages) { return -EOPNOTSUPP;
{ "author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>", "date": "Mon, 02 Feb 2026 17:58:57 +0000", "thread_id": "177005514902.120041.13078117373390753930.stgit@skinsburskii-cloud-desktop.internal.cloudapp.net.mbox.gz" }
lkml
[PATCH v2 0/4] Improve Hyper-V memory deposit error handling
This series extends the MSHV driver to properly handle additional memory-related error codes from the Microsoft Hypervisor by depositing memory pages when needed. Currently, when the hypervisor returns HV_STATUS_INSUFFICIENT_MEMORY during partition creation, the driver calls hv_call_deposit_pages() to provide the necessary memory. However, there are other memory-related error codes that indicate the hypervisor needs additional memory resources, but the driver does not attempt to deposit pages for these cases. This series introduces a dedicated helper function macro to identify all memory-related error codes (HV_STATUS_INSUFFICIENT_MEMORY, HV_STATUS_INSUFFICIENT_BUFFERS, HV_STATUS_INSUFFICIENT_DEVICE_DOMAINS, and HV_STATUS_INSUFFICIENT_ROOT_MEMORY) and ensures the driver attempts to deposit pages for all of them via new hv_deposit_memory() helper. With these changes, partition creation becomes more robust by handling all scenarios where the hypervisor requires additional memory deposits. v2: - Rename hv_result_oom() into hv_result_needs_memory() --- Stanislav Kinsburskii (4): mshv: Introduce hv_result_needs_memory() helper function mshv: Introduce hv_deposit_memory helper functions mshv: Handle insufficient contiguous memory hypervisor status mshv: Handle insufficient root memory hypervisor statuses drivers/hv/hv_common.c | 3 ++ drivers/hv/hv_proc.c | 54 +++++++++++++++++++++++++++++++++++--- drivers/hv/mshv_root_hv_call.c | 45 +++++++++++++------------------- drivers/hv/mshv_root_main.c | 5 +--- include/asm-generic/mshyperv.h | 13 +++++++++ include/hyperv/hvgdk_mini.h | 57 +++++++++++++++++++++------------------- include/hyperv/hvhdk_mini.h | 2 + 7 files changed, 119 insertions(+), 60 deletions(-)
Introduce hv_deposit_memory_node() and hv_deposit_memory() helper functions to handle memory deposition with proper error handling. The new hv_deposit_memory_node() function takes the hypervisor status as a parameter and validates it before depositing pages. It checks for HV_STATUS_INSUFFICIENT_MEMORY specifically and returns an error for unexpected status codes. This is a precursor patch to new out-of-memory error codes support. No functional changes intended. Signed-off-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com> --- drivers/hv/hv_proc.c | 22 ++++++++++++++++++++-- drivers/hv/mshv_root_hv_call.c | 25 +++++++++---------------- drivers/hv/mshv_root_main.c | 3 +-- include/asm-generic/mshyperv.h | 10 ++++++++++ 4 files changed, 40 insertions(+), 20 deletions(-) diff --git a/drivers/hv/hv_proc.c b/drivers/hv/hv_proc.c index e53204b9e05d..ffa25cd6e4e9 100644 --- a/drivers/hv/hv_proc.c +++ b/drivers/hv/hv_proc.c @@ -110,6 +110,23 @@ int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages) } EXPORT_SYMBOL_GPL(hv_call_deposit_pages); +int hv_deposit_memory_node(int node, u64 partition_id, + u64 hv_status) +{ + u32 num_pages; + + switch (hv_result(hv_status)) { + case HV_STATUS_INSUFFICIENT_MEMORY: + num_pages = 1; + break; + default: + hv_status_err(hv_status, "Unexpected!\n"); + return -ENOMEM; + } + return hv_call_deposit_pages(node, partition_id, num_pages); +} +EXPORT_SYMBOL_GPL(hv_deposit_memory_node); + bool hv_result_needs_memory(u64 status) { switch (hv_result(status)) { @@ -155,7 +172,8 @@ int hv_call_add_logical_proc(int node, u32 lp_index, u32 apic_id) } break; } - ret = hv_call_deposit_pages(node, hv_current_partition_id, 1); + ret = hv_deposit_memory_node(node, hv_current_partition_id, + status); } while (!ret); return ret; @@ -197,7 +215,7 @@ int hv_call_create_vp(int node, u64 partition_id, u32 vp_index, u32 flags) } break; } - ret = hv_call_deposit_pages(node, partition_id, 1); + ret = hv_deposit_memory_node(node, partition_id, status); } while (!ret); diff --git a/drivers/hv/mshv_root_hv_call.c b/drivers/hv/mshv_root_hv_call.c index 89afeeda21dd..174431cb5e0e 100644 --- a/drivers/hv/mshv_root_hv_call.c +++ b/drivers/hv/mshv_root_hv_call.c @@ -123,8 +123,7 @@ int hv_call_create_partition(u64 flags, break; } local_irq_restore(irq_flags); - ret = hv_call_deposit_pages(NUMA_NO_NODE, - hv_current_partition_id, 1); + ret = hv_deposit_memory(hv_current_partition_id, status); } while (!ret); return ret; @@ -151,7 +150,7 @@ int hv_call_initialize_partition(u64 partition_id) ret = hv_result_to_errno(status); break; } - ret = hv_call_deposit_pages(NUMA_NO_NODE, partition_id, 1); + ret = hv_deposit_memory(partition_id, status); } while (!ret); return ret; @@ -465,8 +464,7 @@ int hv_call_get_vp_state(u32 vp_index, u64 partition_id, } local_irq_restore(flags); - ret = hv_call_deposit_pages(NUMA_NO_NODE, - partition_id, 1); + ret = hv_deposit_memory(partition_id, status); } while (!ret); return ret; @@ -525,8 +523,7 @@ int hv_call_set_vp_state(u32 vp_index, u64 partition_id, } local_irq_restore(flags); - ret = hv_call_deposit_pages(NUMA_NO_NODE, - partition_id, 1); + ret = hv_deposit_memory(partition_id, status); } while (!ret); return ret; @@ -573,7 +570,7 @@ static int hv_call_map_vp_state_page(u64 partition_id, u32 vp_index, u32 type, local_irq_restore(flags); - ret = hv_call_deposit_pages(NUMA_NO_NODE, partition_id, 1); + ret = hv_deposit_memory(partition_id, status); } while (!ret); return ret; @@ -722,8 +719,7 @@ hv_call_create_port(u64 port_partition_id, union hv_port_id port_id, ret = hv_result_to_errno(status); break; } - ret = hv_call_deposit_pages(NUMA_NO_NODE, port_partition_id, 1); - + ret = hv_deposit_memory(port_partition_id, status); } while (!ret); return ret; @@ -776,8 +772,7 @@ hv_call_connect_port(u64 port_partition_id, union hv_port_id port_id, ret = hv_result_to_errno(status); break; } - ret = hv_call_deposit_pages(NUMA_NO_NODE, - connection_partition_id, 1); + ret = hv_deposit_memory(connection_partition_id, status); } while (!ret); return ret; @@ -848,8 +843,7 @@ static int hv_call_map_stats_page2(enum hv_stats_object_type type, break; } - ret = hv_call_deposit_pages(NUMA_NO_NODE, - hv_current_partition_id, 1); + ret = hv_deposit_memory(hv_current_partition_id, status); } while (!ret); return ret; @@ -885,8 +879,7 @@ static int hv_call_map_stats_page(enum hv_stats_object_type type, return ret; } - ret = hv_call_deposit_pages(NUMA_NO_NODE, - hv_current_partition_id, 1); + ret = hv_deposit_memory(hv_current_partition_id, status); if (ret) return ret; } while (!ret); diff --git a/drivers/hv/mshv_root_main.c b/drivers/hv/mshv_root_main.c index ee30bfa6bb2e..dce255c94f9e 100644 --- a/drivers/hv/mshv_root_main.c +++ b/drivers/hv/mshv_root_main.c @@ -264,8 +264,7 @@ static int mshv_ioctl_passthru_hvcall(struct mshv_partition *partition, if (!hv_result_needs_memory(status)) ret = hv_result_to_errno(status); else - ret = hv_call_deposit_pages(NUMA_NO_NODE, - pt_id, 1); + ret = hv_deposit_memory(pt_id, status); } while (!ret); args.status = hv_result(status); diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h index 452426d5b2ab..d37b68238c97 100644 --- a/include/asm-generic/mshyperv.h +++ b/include/asm-generic/mshyperv.h @@ -344,6 +344,7 @@ static inline bool hv_parent_partition(void) } bool hv_result_needs_memory(u64 status); +int hv_deposit_memory_node(int node, u64 partition_id, u64 status); int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages); int hv_call_add_logical_proc(int node, u32 lp_index, u32 acpi_id); int hv_call_create_vp(int node, u64 partition_id, u32 vp_index, u32 flags); @@ -353,6 +354,10 @@ static inline bool hv_root_partition(void) { return false; } static inline bool hv_l1vh_partition(void) { return false; } static inline bool hv_parent_partition(void) { return false; } static inline bool hv_result_needs_memory(u64 status) { return false; } +static inline int hv_deposit_memory_node(int node, u64 partition_id, u64 status) +{ + return -EOPNOTSUPP; +} static inline int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages) { return -EOPNOTSUPP; @@ -367,6 +372,11 @@ static inline int hv_call_create_vp(int node, u64 partition_id, u32 vp_index, u3 } #endif /* CONFIG_MSHV_ROOT */ +static inline int hv_deposit_memory(u64 partition_id, u64 status) +{ + return hv_deposit_memory_node(NUMA_NO_NODE, partition_id, status); +} + #if IS_ENABLED(CONFIG_HYPERV_VTL_MODE) u8 __init get_vtl(void); #else
{ "author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>", "date": "Mon, 02 Feb 2026 17:59:03 +0000", "thread_id": "177005514902.120041.13078117373390753930.stgit@skinsburskii-cloud-desktop.internal.cloudapp.net.mbox.gz" }
lkml
[PATCH v2 0/4] Improve Hyper-V memory deposit error handling
This series extends the MSHV driver to properly handle additional memory-related error codes from the Microsoft Hypervisor by depositing memory pages when needed. Currently, when the hypervisor returns HV_STATUS_INSUFFICIENT_MEMORY during partition creation, the driver calls hv_call_deposit_pages() to provide the necessary memory. However, there are other memory-related error codes that indicate the hypervisor needs additional memory resources, but the driver does not attempt to deposit pages for these cases. This series introduces a dedicated helper function macro to identify all memory-related error codes (HV_STATUS_INSUFFICIENT_MEMORY, HV_STATUS_INSUFFICIENT_BUFFERS, HV_STATUS_INSUFFICIENT_DEVICE_DOMAINS, and HV_STATUS_INSUFFICIENT_ROOT_MEMORY) and ensures the driver attempts to deposit pages for all of them via new hv_deposit_memory() helper. With these changes, partition creation becomes more robust by handling all scenarios where the hypervisor requires additional memory deposits. v2: - Rename hv_result_oom() into hv_result_needs_memory() --- Stanislav Kinsburskii (4): mshv: Introduce hv_result_needs_memory() helper function mshv: Introduce hv_deposit_memory helper functions mshv: Handle insufficient contiguous memory hypervisor status mshv: Handle insufficient root memory hypervisor statuses drivers/hv/hv_common.c | 3 ++ drivers/hv/hv_proc.c | 54 +++++++++++++++++++++++++++++++++++--- drivers/hv/mshv_root_hv_call.c | 45 +++++++++++++------------------- drivers/hv/mshv_root_main.c | 5 +--- include/asm-generic/mshyperv.h | 13 +++++++++ include/hyperv/hvgdk_mini.h | 57 +++++++++++++++++++++------------------- include/hyperv/hvhdk_mini.h | 2 + 7 files changed, 119 insertions(+), 60 deletions(-)
The HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY status indicates that the hypervisor lacks sufficient contiguous memory for its internal allocations. When this status is encountered, allocate and deposit HV_MAX_CONTIGUOUS_ALLOCATION_PAGES contiguous pages to the hypervisor. HV_MAX_CONTIGUOUS_ALLOCATION_PAGES is defined in the hypervisor headers, a deposit of this size will always satisfy the hypervisor's requirements. Signed-off-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com> --- drivers/hv/hv_common.c | 1 + drivers/hv/hv_proc.c | 4 ++++ include/hyperv/hvgdk_mini.h | 1 + include/hyperv/hvhdk_mini.h | 2 ++ 4 files changed, 8 insertions(+) diff --git a/drivers/hv/hv_common.c b/drivers/hv/hv_common.c index 0a3ab7efed46..c7f63c9de503 100644 --- a/drivers/hv/hv_common.c +++ b/drivers/hv/hv_common.c @@ -791,6 +791,7 @@ static const struct hv_status_info hv_status_infos[] = { _STATUS_INFO(HV_STATUS_UNKNOWN_PROPERTY, -EIO), _STATUS_INFO(HV_STATUS_PROPERTY_VALUE_OUT_OF_RANGE, -EIO), _STATUS_INFO(HV_STATUS_INSUFFICIENT_MEMORY, -ENOMEM), + _STATUS_INFO(HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY, -ENOMEM), _STATUS_INFO(HV_STATUS_INVALID_PARTITION_ID, -EINVAL), _STATUS_INFO(HV_STATUS_INVALID_VP_INDEX, -EINVAL), _STATUS_INFO(HV_STATUS_NOT_FOUND, -EIO), diff --git a/drivers/hv/hv_proc.c b/drivers/hv/hv_proc.c index ffa25cd6e4e9..dfa27be66ff7 100644 --- a/drivers/hv/hv_proc.c +++ b/drivers/hv/hv_proc.c @@ -119,6 +119,9 @@ int hv_deposit_memory_node(int node, u64 partition_id, case HV_STATUS_INSUFFICIENT_MEMORY: num_pages = 1; break; + case HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY: + num_pages = HV_MAX_CONTIGUOUS_ALLOCATION_PAGES; + break; default: hv_status_err(hv_status, "Unexpected!\n"); return -ENOMEM; @@ -131,6 +134,7 @@ bool hv_result_needs_memory(u64 status) { switch (hv_result(status)) { case HV_STATUS_INSUFFICIENT_MEMORY: + case HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY: return true; } return false; diff --git a/include/hyperv/hvgdk_mini.h b/include/hyperv/hvgdk_mini.h index 04b18d0e37af..70f22ef44948 100644 --- a/include/hyperv/hvgdk_mini.h +++ b/include/hyperv/hvgdk_mini.h @@ -38,6 +38,7 @@ struct hv_u128 { #define HV_STATUS_INVALID_LP_INDEX 0x41 #define HV_STATUS_INVALID_REGISTER_VALUE 0x50 #define HV_STATUS_OPERATION_FAILED 0x71 +#define HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY 0x75 #define HV_STATUS_TIME_OUT 0x78 #define HV_STATUS_CALL_PENDING 0x79 #define HV_STATUS_VTL_ALREADY_ENABLED 0x86 diff --git a/include/hyperv/hvhdk_mini.h b/include/hyperv/hvhdk_mini.h index c0300910808b..091c03e26046 100644 --- a/include/hyperv/hvhdk_mini.h +++ b/include/hyperv/hvhdk_mini.h @@ -7,6 +7,8 @@ #include "hvgdk_mini.h" +#define HV_MAX_CONTIGUOUS_ALLOCATION_PAGES 8 + /* * Doorbell connection_info flags. */
{ "author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>", "date": "Mon, 02 Feb 2026 17:59:09 +0000", "thread_id": "177005514902.120041.13078117373390753930.stgit@skinsburskii-cloud-desktop.internal.cloudapp.net.mbox.gz" }
lkml
[PATCH v2 0/4] Improve Hyper-V memory deposit error handling
This series extends the MSHV driver to properly handle additional memory-related error codes from the Microsoft Hypervisor by depositing memory pages when needed. Currently, when the hypervisor returns HV_STATUS_INSUFFICIENT_MEMORY during partition creation, the driver calls hv_call_deposit_pages() to provide the necessary memory. However, there are other memory-related error codes that indicate the hypervisor needs additional memory resources, but the driver does not attempt to deposit pages for these cases. This series introduces a dedicated helper function macro to identify all memory-related error codes (HV_STATUS_INSUFFICIENT_MEMORY, HV_STATUS_INSUFFICIENT_BUFFERS, HV_STATUS_INSUFFICIENT_DEVICE_DOMAINS, and HV_STATUS_INSUFFICIENT_ROOT_MEMORY) and ensures the driver attempts to deposit pages for all of them via new hv_deposit_memory() helper. With these changes, partition creation becomes more robust by handling all scenarios where the hypervisor requires additional memory deposits. v2: - Rename hv_result_oom() into hv_result_needs_memory() --- Stanislav Kinsburskii (4): mshv: Introduce hv_result_needs_memory() helper function mshv: Introduce hv_deposit_memory helper functions mshv: Handle insufficient contiguous memory hypervisor status mshv: Handle insufficient root memory hypervisor statuses drivers/hv/hv_common.c | 3 ++ drivers/hv/hv_proc.c | 54 +++++++++++++++++++++++++++++++++++--- drivers/hv/mshv_root_hv_call.c | 45 +++++++++++++------------------- drivers/hv/mshv_root_main.c | 5 +--- include/asm-generic/mshyperv.h | 13 +++++++++ include/hyperv/hvgdk_mini.h | 57 +++++++++++++++++++++------------------- include/hyperv/hvhdk_mini.h | 2 + 7 files changed, 119 insertions(+), 60 deletions(-)
When creating guest partition objects, the hypervisor may fail to allocate root partition pages and return an insufficient memory status. In this case, deposit memory using the root partition ID instead. Note: This error should never occur in a guest of L1VH partition context. Signed-off-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com> --- drivers/hv/hv_common.c | 2 + drivers/hv/hv_proc.c | 14 ++++++++++ include/hyperv/hvgdk_mini.h | 58 ++++++++++++++++++++++--------------------- 3 files changed, 46 insertions(+), 28 deletions(-) diff --git a/drivers/hv/hv_common.c b/drivers/hv/hv_common.c index c7f63c9de503..cab0d1733607 100644 --- a/drivers/hv/hv_common.c +++ b/drivers/hv/hv_common.c @@ -792,6 +792,8 @@ static const struct hv_status_info hv_status_infos[] = { _STATUS_INFO(HV_STATUS_PROPERTY_VALUE_OUT_OF_RANGE, -EIO), _STATUS_INFO(HV_STATUS_INSUFFICIENT_MEMORY, -ENOMEM), _STATUS_INFO(HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY, -ENOMEM), + _STATUS_INFO(HV_STATUS_INSUFFICIENT_ROOT_MEMORY, -ENOMEM), + _STATUS_INFO(HV_STATUS_INSUFFICIENT_CONTIGUOUS_ROOT_MEMORY, -ENOMEM), _STATUS_INFO(HV_STATUS_INVALID_PARTITION_ID, -EINVAL), _STATUS_INFO(HV_STATUS_INVALID_VP_INDEX, -EINVAL), _STATUS_INFO(HV_STATUS_NOT_FOUND, -EIO), diff --git a/drivers/hv/hv_proc.c b/drivers/hv/hv_proc.c index dfa27be66ff7..935129e0b39d 100644 --- a/drivers/hv/hv_proc.c +++ b/drivers/hv/hv_proc.c @@ -122,6 +122,18 @@ int hv_deposit_memory_node(int node, u64 partition_id, case HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY: num_pages = HV_MAX_CONTIGUOUS_ALLOCATION_PAGES; break; + + case HV_STATUS_INSUFFICIENT_CONTIGUOUS_ROOT_MEMORY: + num_pages = HV_MAX_CONTIGUOUS_ALLOCATION_PAGES; + fallthrough; + case HV_STATUS_INSUFFICIENT_ROOT_MEMORY: + if (!hv_root_partition()) { + hv_status_err(hv_status, "Unexpected root memory deposit\n"); + return -ENOMEM; + } + partition_id = HV_PARTITION_ID_SELF; + break; + default: hv_status_err(hv_status, "Unexpected!\n"); return -ENOMEM; @@ -135,6 +147,8 @@ bool hv_result_needs_memory(u64 status) switch (hv_result(status)) { case HV_STATUS_INSUFFICIENT_MEMORY: case HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY: + case HV_STATUS_INSUFFICIENT_ROOT_MEMORY: + case HV_STATUS_INSUFFICIENT_CONTIGUOUS_ROOT_MEMORY: return true; } return false; diff --git a/include/hyperv/hvgdk_mini.h b/include/hyperv/hvgdk_mini.h index 70f22ef44948..5b74a857ef43 100644 --- a/include/hyperv/hvgdk_mini.h +++ b/include/hyperv/hvgdk_mini.h @@ -14,34 +14,36 @@ struct hv_u128 { } __packed; /* NOTE: when adding below, update hv_result_to_string() */ -#define HV_STATUS_SUCCESS 0x0 -#define HV_STATUS_INVALID_HYPERCALL_CODE 0x2 -#define HV_STATUS_INVALID_HYPERCALL_INPUT 0x3 -#define HV_STATUS_INVALID_ALIGNMENT 0x4 -#define HV_STATUS_INVALID_PARAMETER 0x5 -#define HV_STATUS_ACCESS_DENIED 0x6 -#define HV_STATUS_INVALID_PARTITION_STATE 0x7 -#define HV_STATUS_OPERATION_DENIED 0x8 -#define HV_STATUS_UNKNOWN_PROPERTY 0x9 -#define HV_STATUS_PROPERTY_VALUE_OUT_OF_RANGE 0xA -#define HV_STATUS_INSUFFICIENT_MEMORY 0xB -#define HV_STATUS_INVALID_PARTITION_ID 0xD -#define HV_STATUS_INVALID_VP_INDEX 0xE -#define HV_STATUS_NOT_FOUND 0x10 -#define HV_STATUS_INVALID_PORT_ID 0x11 -#define HV_STATUS_INVALID_CONNECTION_ID 0x12 -#define HV_STATUS_INSUFFICIENT_BUFFERS 0x13 -#define HV_STATUS_NOT_ACKNOWLEDGED 0x14 -#define HV_STATUS_INVALID_VP_STATE 0x15 -#define HV_STATUS_NO_RESOURCES 0x1D -#define HV_STATUS_PROCESSOR_FEATURE_NOT_SUPPORTED 0x20 -#define HV_STATUS_INVALID_LP_INDEX 0x41 -#define HV_STATUS_INVALID_REGISTER_VALUE 0x50 -#define HV_STATUS_OPERATION_FAILED 0x71 -#define HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY 0x75 -#define HV_STATUS_TIME_OUT 0x78 -#define HV_STATUS_CALL_PENDING 0x79 -#define HV_STATUS_VTL_ALREADY_ENABLED 0x86 +#define HV_STATUS_SUCCESS 0x0 +#define HV_STATUS_INVALID_HYPERCALL_CODE 0x2 +#define HV_STATUS_INVALID_HYPERCALL_INPUT 0x3 +#define HV_STATUS_INVALID_ALIGNMENT 0x4 +#define HV_STATUS_INVALID_PARAMETER 0x5 +#define HV_STATUS_ACCESS_DENIED 0x6 +#define HV_STATUS_INVALID_PARTITION_STATE 0x7 +#define HV_STATUS_OPERATION_DENIED 0x8 +#define HV_STATUS_UNKNOWN_PROPERTY 0x9 +#define HV_STATUS_PROPERTY_VALUE_OUT_OF_RANGE 0xA +#define HV_STATUS_INSUFFICIENT_MEMORY 0xB +#define HV_STATUS_INVALID_PARTITION_ID 0xD +#define HV_STATUS_INVALID_VP_INDEX 0xE +#define HV_STATUS_NOT_FOUND 0x10 +#define HV_STATUS_INVALID_PORT_ID 0x11 +#define HV_STATUS_INVALID_CONNECTION_ID 0x12 +#define HV_STATUS_INSUFFICIENT_BUFFERS 0x13 +#define HV_STATUS_NOT_ACKNOWLEDGED 0x14 +#define HV_STATUS_INVALID_VP_STATE 0x15 +#define HV_STATUS_NO_RESOURCES 0x1D +#define HV_STATUS_PROCESSOR_FEATURE_NOT_SUPPORTED 0x20 +#define HV_STATUS_INVALID_LP_INDEX 0x41 +#define HV_STATUS_INVALID_REGISTER_VALUE 0x50 +#define HV_STATUS_OPERATION_FAILED 0x71 +#define HV_STATUS_INSUFFICIENT_ROOT_MEMORY 0x73 +#define HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY 0x75 +#define HV_STATUS_TIME_OUT 0x78 +#define HV_STATUS_CALL_PENDING 0x79 +#define HV_STATUS_INSUFFICIENT_CONTIGUOUS_ROOT_MEMORY 0x83 +#define HV_STATUS_VTL_ALREADY_ENABLED 0x86 /* * The Hyper-V TimeRefCount register and the TSC
{ "author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>", "date": "Mon, 02 Feb 2026 17:59:14 +0000", "thread_id": "177005514902.120041.13078117373390753930.stgit@skinsburskii-cloud-desktop.internal.cloudapp.net.mbox.gz" }
lkml
[RFC PATCH 0/5] Separate compound page from folio
Hi all, Based on my discussion with Jason about device private folio reinitialization[1], I realize that the concepts of compound page and folio are mixed together and confusing, as people think a compound page is equal to a folio. This is not true, since a compound page means a group of pages is managed as a whole and it can be something other than a folio, for example, a slab page. To avoid further confusing people, this patchset separates compound page from folio by moving any folio related code out of compound page functions. The code is on top of mm-new (2026-01-28-20-27) and all mm selftests passed. The key change is that a compound page no longer sets: 1. folio->_nr_pages, 2. folio->_large_mapcount, 3. folio->_nr_pages_mapped, 4. folio->_mm_ids, 5. folio->_mm_id_mapcount, 6. folio->_pincount, 7. folio->_entire_mapcount, 8. folio->_deferred_list. Since these fields are only used by folios that are rmappable. The code setting these fields is moved to page_rmappable_folio(). To make the code move, this patchset also needs to changes several places, where folio and compound page are used interchangably or unusual folio use: 1. in io_mem_alloc_compound(), a compound page is allocated, but later it is mapped via vm_insert_pages() like a rmappable folio; 2. __split_folio_to_order() sets large_rmappable flag directly instead of using page_rmappable_folio() for after-split folios; 3. hugetlb unsets large_rmappable to escape deferred_list unqueue operation. At last, the page freeing path is also changed to have different checks for compound page and folio. One thing to note is that for compound page, I do not store compound order in folio->_nr_pages, which overlaps with page[1].memcg_data and use 1 << compound_order() instead, since I do not want to add a new union to struct page and compound_nr() is not as widely used as folio_nr_pages(). But let me know if there is a performance concern for this. Comments and suggestions are welcome. Link: https://lore.kernel.org/all/F7E3DF24-A37B-40A0-A507-CEF4AB76C44D@nvidia.com/ [1] Zi Yan (5): io_uring: allocate folio in io_mem_alloc_compound() and function rename mm/huge_memory: use page_rmappable_folio() to convert after-split folios mm/hugetlb: set large_rmappable on hugetlb and avoid deferred_list handling mm: only use struct page in compound_nr() and compound_order() mm: code separation for compound page and folio include/linux/mm.h | 12 ++++-------- io_uring/memmap.c | 12 ++++++------ mm/huge_memory.c | 5 ++--- mm/hugetlb.c | 8 ++++---- mm/hugetlb_cma.c | 2 +- mm/internal.h | 47 +++++++++++++++++++++++++++------------------- mm/mm_init.c | 2 +- mm/page_alloc.c | 23 ++++++++++++++++++----- 8 files changed, 64 insertions(+), 47 deletions(-) -- 2.51.0
The page allocated in io_mem_alloc_compound() is actually used as a folio later in io_region_mmap(). So allocate a folio instead of a compound page and rename io_mem_alloc_compound() to io_mem_alloc_folio(). This prepares for code separation of compound page and folio in a follow-up commit. Signed-off-by: Zi Yan <ziy@nvidia.com> --- io_uring/memmap.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/io_uring/memmap.c b/io_uring/memmap.c index 7d3c5eb58480..8ed8a78d71cc 100644 --- a/io_uring/memmap.c +++ b/io_uring/memmap.c @@ -15,10 +15,10 @@ #include "rsrc.h" #include "zcrx.h" -static bool io_mem_alloc_compound(struct page **pages, int nr_pages, +static bool io_mem_alloc_folio(struct page **pages, int nr_pages, size_t size, gfp_t gfp) { - struct page *page; + struct folio *folio; int i, order; order = get_order(size); @@ -27,12 +27,12 @@ static bool io_mem_alloc_compound(struct page **pages, int nr_pages, else if (order) gfp |= __GFP_COMP; - page = alloc_pages(gfp, order); - if (!page) + folio = folio_alloc(gfp, order); + if (!folio) return false; for (i = 0; i < nr_pages; i++) - pages[i] = page + i; + pages[i] = folio_page(folio, i); return true; } @@ -162,7 +162,7 @@ static int io_region_allocate_pages(struct io_mapped_region *mr, if (!pages) return -ENOMEM; - if (io_mem_alloc_compound(pages, mr->nr_pages, size, gfp)) { + if (io_mem_alloc_folio(pages, mr->nr_pages, size, gfp)) { mr->flags |= IO_REGION_F_SINGLE_REF; goto done; } -- 2.51.0
{ "author": "Zi Yan <ziy@nvidia.com>", "date": "Thu, 29 Jan 2026 22:48:14 -0500", "thread_id": "20260130034818.472804-1-ziy@nvidia.com.mbox.gz" }
lkml
[RFC PATCH 0/5] Separate compound page from folio
Hi all, Based on my discussion with Jason about device private folio reinitialization[1], I realize that the concepts of compound page and folio are mixed together and confusing, as people think a compound page is equal to a folio. This is not true, since a compound page means a group of pages is managed as a whole and it can be something other than a folio, for example, a slab page. To avoid further confusing people, this patchset separates compound page from folio by moving any folio related code out of compound page functions. The code is on top of mm-new (2026-01-28-20-27) and all mm selftests passed. The key change is that a compound page no longer sets: 1. folio->_nr_pages, 2. folio->_large_mapcount, 3. folio->_nr_pages_mapped, 4. folio->_mm_ids, 5. folio->_mm_id_mapcount, 6. folio->_pincount, 7. folio->_entire_mapcount, 8. folio->_deferred_list. Since these fields are only used by folios that are rmappable. The code setting these fields is moved to page_rmappable_folio(). To make the code move, this patchset also needs to changes several places, where folio and compound page are used interchangably or unusual folio use: 1. in io_mem_alloc_compound(), a compound page is allocated, but later it is mapped via vm_insert_pages() like a rmappable folio; 2. __split_folio_to_order() sets large_rmappable flag directly instead of using page_rmappable_folio() for after-split folios; 3. hugetlb unsets large_rmappable to escape deferred_list unqueue operation. At last, the page freeing path is also changed to have different checks for compound page and folio. One thing to note is that for compound page, I do not store compound order in folio->_nr_pages, which overlaps with page[1].memcg_data and use 1 << compound_order() instead, since I do not want to add a new union to struct page and compound_nr() is not as widely used as folio_nr_pages(). But let me know if there is a performance concern for this. Comments and suggestions are welcome. Link: https://lore.kernel.org/all/F7E3DF24-A37B-40A0-A507-CEF4AB76C44D@nvidia.com/ [1] Zi Yan (5): io_uring: allocate folio in io_mem_alloc_compound() and function rename mm/huge_memory: use page_rmappable_folio() to convert after-split folios mm/hugetlb: set large_rmappable on hugetlb and avoid deferred_list handling mm: only use struct page in compound_nr() and compound_order() mm: code separation for compound page and folio include/linux/mm.h | 12 ++++-------- io_uring/memmap.c | 12 ++++++------ mm/huge_memory.c | 5 ++--- mm/hugetlb.c | 8 ++++---- mm/hugetlb_cma.c | 2 +- mm/internal.h | 47 +++++++++++++++++++++++++++------------------- mm/mm_init.c | 2 +- mm/page_alloc.c | 23 ++++++++++++++++++----- 8 files changed, 64 insertions(+), 47 deletions(-) -- 2.51.0
Current code uses folio_set_large_rmappable() on after-split folios, but these folios should be treated as compound pages and converted to folios with page_rmappable_folio(). This prepares for code separation of compound page and folio in a follow-up commit. Signed-off-by: Zi Yan <ziy@nvidia.com> --- mm/huge_memory.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 44ff8a648afd..74ba076e3fc0 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -3558,10 +3558,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order, * which needs correct compound_head(). */ clear_compound_head(new_head); - if (new_order) { + if (new_order) prep_compound_page(new_head, new_order); - folio_set_large_rmappable(new_folio); - } + page_rmappable_folio(new_head); if (folio_test_young(folio)) folio_set_young(new_folio); -- 2.51.0
{ "author": "Zi Yan <ziy@nvidia.com>", "date": "Thu, 29 Jan 2026 22:48:15 -0500", "thread_id": "20260130034818.472804-1-ziy@nvidia.com.mbox.gz" }
lkml
[RFC PATCH 0/5] Separate compound page from folio
Hi all, Based on my discussion with Jason about device private folio reinitialization[1], I realize that the concepts of compound page and folio are mixed together and confusing, as people think a compound page is equal to a folio. This is not true, since a compound page means a group of pages is managed as a whole and it can be something other than a folio, for example, a slab page. To avoid further confusing people, this patchset separates compound page from folio by moving any folio related code out of compound page functions. The code is on top of mm-new (2026-01-28-20-27) and all mm selftests passed. The key change is that a compound page no longer sets: 1. folio->_nr_pages, 2. folio->_large_mapcount, 3. folio->_nr_pages_mapped, 4. folio->_mm_ids, 5. folio->_mm_id_mapcount, 6. folio->_pincount, 7. folio->_entire_mapcount, 8. folio->_deferred_list. Since these fields are only used by folios that are rmappable. The code setting these fields is moved to page_rmappable_folio(). To make the code move, this patchset also needs to changes several places, where folio and compound page are used interchangably or unusual folio use: 1. in io_mem_alloc_compound(), a compound page is allocated, but later it is mapped via vm_insert_pages() like a rmappable folio; 2. __split_folio_to_order() sets large_rmappable flag directly instead of using page_rmappable_folio() for after-split folios; 3. hugetlb unsets large_rmappable to escape deferred_list unqueue operation. At last, the page freeing path is also changed to have different checks for compound page and folio. One thing to note is that for compound page, I do not store compound order in folio->_nr_pages, which overlaps with page[1].memcg_data and use 1 << compound_order() instead, since I do not want to add a new union to struct page and compound_nr() is not as widely used as folio_nr_pages(). But let me know if there is a performance concern for this. Comments and suggestions are welcome. Link: https://lore.kernel.org/all/F7E3DF24-A37B-40A0-A507-CEF4AB76C44D@nvidia.com/ [1] Zi Yan (5): io_uring: allocate folio in io_mem_alloc_compound() and function rename mm/huge_memory: use page_rmappable_folio() to convert after-split folios mm/hugetlb: set large_rmappable on hugetlb and avoid deferred_list handling mm: only use struct page in compound_nr() and compound_order() mm: code separation for compound page and folio include/linux/mm.h | 12 ++++-------- io_uring/memmap.c | 12 ++++++------ mm/huge_memory.c | 5 ++--- mm/hugetlb.c | 8 ++++---- mm/hugetlb_cma.c | 2 +- mm/internal.h | 47 +++++++++++++++++++++++++++------------------- mm/mm_init.c | 2 +- mm/page_alloc.c | 23 ++++++++++++++++++----- 8 files changed, 64 insertions(+), 47 deletions(-) -- 2.51.0
Commit f708f6970cc9 ("mm/hugetlb: fix kernel NULL pointer dereference when migrating hugetlb folio") fixed a NULL pointer dereference when folio_undo_large_rmappable(), now folio_unqueue_deferred_list(), is used on hugetlb to clear deferred_list. It cleared large_rmappable flag on hugetlb. hugetlb is rmappable, thus clearing large_rmappable flag looks misleading. Instead, reject hugetlb in folio_unqueue_deferred_list() to avoid the issue. This prepares for code separation of compound page and folio in a follow-up commit. Signed-off-by: Zi Yan <ziy@nvidia.com> --- mm/hugetlb.c | 6 +++--- mm/hugetlb_cma.c | 2 +- mm/internal.h | 3 ++- 3 files changed, 6 insertions(+), 5 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 6e855a32de3d..7466c7bf41a1 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1422,8 +1422,8 @@ static struct folio *alloc_gigantic_frozen_folio(int order, gfp_t gfp_mask, if (hugetlb_cma_exclusive_alloc()) return NULL; - folio = (struct folio *)alloc_contig_frozen_pages(1 << order, gfp_mask, - nid, nodemask); + folio = page_rmappable_folio(alloc_contig_frozen_pages(1 << order, gfp_mask, + nid, nodemask)); return folio; } #else /* !CONFIG_ARCH_HAS_GIGANTIC_PAGE || !CONFIG_CONTIG_ALLOC */ @@ -1859,7 +1859,7 @@ static struct folio *alloc_buddy_frozen_folio(int order, gfp_t gfp_mask, if (alloc_try_hard) gfp_mask |= __GFP_RETRY_MAYFAIL; - folio = (struct folio *)__alloc_frozen_pages(gfp_mask, order, nid, nmask); + folio = page_rmappable_folio(__alloc_frozen_pages(gfp_mask, order, nid, nmask)); /* * If we did not specify __GFP_RETRY_MAYFAIL, but still got a diff --git a/mm/hugetlb_cma.c b/mm/hugetlb_cma.c index f83ae4998990..4245b5dda4dc 100644 --- a/mm/hugetlb_cma.c +++ b/mm/hugetlb_cma.c @@ -51,7 +51,7 @@ struct folio *hugetlb_cma_alloc_frozen_folio(int order, gfp_t gfp_mask, if (!page) return NULL; - folio = page_folio(page); + folio = page_rmappable_folio(page); folio_set_hugetlb_cma(folio); return folio; } diff --git a/mm/internal.h b/mm/internal.h index d67e8bb75734..8bb22fb9a0e1 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -835,7 +835,8 @@ static inline void folio_set_order(struct folio *folio, unsigned int order) bool __folio_unqueue_deferred_split(struct folio *folio); static inline bool folio_unqueue_deferred_split(struct folio *folio) { - if (folio_order(folio) <= 1 || !folio_test_large_rmappable(folio)) + if (folio_order(folio) <= 1 || !folio_test_large_rmappable(folio) || + folio_test_hugetlb(folio)) return false; /* -- 2.51.0
{ "author": "Zi Yan <ziy@nvidia.com>", "date": "Thu, 29 Jan 2026 22:48:16 -0500", "thread_id": "20260130034818.472804-1-ziy@nvidia.com.mbox.gz" }
lkml
[RFC PATCH 0/5] Separate compound page from folio
Hi all, Based on my discussion with Jason about device private folio reinitialization[1], I realize that the concepts of compound page and folio are mixed together and confusing, as people think a compound page is equal to a folio. This is not true, since a compound page means a group of pages is managed as a whole and it can be something other than a folio, for example, a slab page. To avoid further confusing people, this patchset separates compound page from folio by moving any folio related code out of compound page functions. The code is on top of mm-new (2026-01-28-20-27) and all mm selftests passed. The key change is that a compound page no longer sets: 1. folio->_nr_pages, 2. folio->_large_mapcount, 3. folio->_nr_pages_mapped, 4. folio->_mm_ids, 5. folio->_mm_id_mapcount, 6. folio->_pincount, 7. folio->_entire_mapcount, 8. folio->_deferred_list. Since these fields are only used by folios that are rmappable. The code setting these fields is moved to page_rmappable_folio(). To make the code move, this patchset also needs to changes several places, where folio and compound page are used interchangably or unusual folio use: 1. in io_mem_alloc_compound(), a compound page is allocated, but later it is mapped via vm_insert_pages() like a rmappable folio; 2. __split_folio_to_order() sets large_rmappable flag directly instead of using page_rmappable_folio() for after-split folios; 3. hugetlb unsets large_rmappable to escape deferred_list unqueue operation. At last, the page freeing path is also changed to have different checks for compound page and folio. One thing to note is that for compound page, I do not store compound order in folio->_nr_pages, which overlaps with page[1].memcg_data and use 1 << compound_order() instead, since I do not want to add a new union to struct page and compound_nr() is not as widely used as folio_nr_pages(). But let me know if there is a performance concern for this. Comments and suggestions are welcome. Link: https://lore.kernel.org/all/F7E3DF24-A37B-40A0-A507-CEF4AB76C44D@nvidia.com/ [1] Zi Yan (5): io_uring: allocate folio in io_mem_alloc_compound() and function rename mm/huge_memory: use page_rmappable_folio() to convert after-split folios mm/hugetlb: set large_rmappable on hugetlb and avoid deferred_list handling mm: only use struct page in compound_nr() and compound_order() mm: code separation for compound page and folio include/linux/mm.h | 12 ++++-------- io_uring/memmap.c | 12 ++++++------ mm/huge_memory.c | 5 ++--- mm/hugetlb.c | 8 ++++---- mm/hugetlb_cma.c | 2 +- mm/internal.h | 47 +++++++++++++++++++++++++++------------------- mm/mm_init.c | 2 +- mm/page_alloc.c | 23 ++++++++++++++++++----- 8 files changed, 64 insertions(+), 47 deletions(-) -- 2.51.0
A compound page is not a folio. Using struct folio in compound_nr() and compound_order() is misleading. Use struct page and refer to the right subpage of a compound page to set compound page order. compound_nr() is calculated using compound_order() instead of reading folio->_nr_pages. Signed-off-by: Zi Yan <ziy@nvidia.com> --- include/linux/mm.h | 12 ++++-------- 1 file changed, 4 insertions(+), 8 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index f8a8fd47399c..f1c54d9f4620 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1428,11 +1428,9 @@ static inline unsigned long folio_large_nr_pages(const struct folio *folio) */ static inline unsigned int compound_order(const struct page *page) { - const struct folio *folio = (struct folio *)page; - - if (!test_bit(PG_head, &folio->flags.f)) + if (!test_bit(PG_head, &page->flags.f)) return 0; - return folio_large_order(folio); + return page[1].flags.f & 0xffUL; } /** @@ -2514,11 +2512,9 @@ static inline unsigned long folio_nr_pages(const struct folio *folio) */ static inline unsigned long compound_nr(const struct page *page) { - const struct folio *folio = (struct folio *)page; - - if (!test_bit(PG_head, &folio->flags.f)) + if (!test_bit(PG_head, &page->flags.f)) return 1; - return folio_large_nr_pages(folio); + return 1 << compound_order(page); } /** -- 2.51.0
{ "author": "Zi Yan <ziy@nvidia.com>", "date": "Thu, 29 Jan 2026 22:48:17 -0500", "thread_id": "20260130034818.472804-1-ziy@nvidia.com.mbox.gz" }
lkml
[RFC PATCH 0/5] Separate compound page from folio
Hi all, Based on my discussion with Jason about device private folio reinitialization[1], I realize that the concepts of compound page and folio are mixed together and confusing, as people think a compound page is equal to a folio. This is not true, since a compound page means a group of pages is managed as a whole and it can be something other than a folio, for example, a slab page. To avoid further confusing people, this patchset separates compound page from folio by moving any folio related code out of compound page functions. The code is on top of mm-new (2026-01-28-20-27) and all mm selftests passed. The key change is that a compound page no longer sets: 1. folio->_nr_pages, 2. folio->_large_mapcount, 3. folio->_nr_pages_mapped, 4. folio->_mm_ids, 5. folio->_mm_id_mapcount, 6. folio->_pincount, 7. folio->_entire_mapcount, 8. folio->_deferred_list. Since these fields are only used by folios that are rmappable. The code setting these fields is moved to page_rmappable_folio(). To make the code move, this patchset also needs to changes several places, where folio and compound page are used interchangably or unusual folio use: 1. in io_mem_alloc_compound(), a compound page is allocated, but later it is mapped via vm_insert_pages() like a rmappable folio; 2. __split_folio_to_order() sets large_rmappable flag directly instead of using page_rmappable_folio() for after-split folios; 3. hugetlb unsets large_rmappable to escape deferred_list unqueue operation. At last, the page freeing path is also changed to have different checks for compound page and folio. One thing to note is that for compound page, I do not store compound order in folio->_nr_pages, which overlaps with page[1].memcg_data and use 1 << compound_order() instead, since I do not want to add a new union to struct page and compound_nr() is not as widely used as folio_nr_pages(). But let me know if there is a performance concern for this. Comments and suggestions are welcome. Link: https://lore.kernel.org/all/F7E3DF24-A37B-40A0-A507-CEF4AB76C44D@nvidia.com/ [1] Zi Yan (5): io_uring: allocate folio in io_mem_alloc_compound() and function rename mm/huge_memory: use page_rmappable_folio() to convert after-split folios mm/hugetlb: set large_rmappable on hugetlb and avoid deferred_list handling mm: only use struct page in compound_nr() and compound_order() mm: code separation for compound page and folio include/linux/mm.h | 12 ++++-------- io_uring/memmap.c | 12 ++++++------ mm/huge_memory.c | 5 ++--- mm/hugetlb.c | 8 ++++---- mm/hugetlb_cma.c | 2 +- mm/internal.h | 47 +++++++++++++++++++++++++++------------------- mm/mm_init.c | 2 +- mm/page_alloc.c | 23 ++++++++++++++++++----- 8 files changed, 64 insertions(+), 47 deletions(-) -- 2.51.0
A compound page is not a folio. Using struct folio in prep_compound_head() causes confusion, since the input page is not a folio. The compound page to folio conversion happens in page_rmappable_folio(). So move folio code from prep_compound_head() to page_rmappable_folio(). After the change, a compound page no longer has the following folio field set: 1. folio->_nr_pages, 2. folio->_large_mapcount, 3. folio->_nr_pages_mapped, 4. folio->_mm_ids, 5. folio->_mm_id_mapcount, 6. folio->_pincount, 7. folio->_entire_mapcount, 8. folio->_deferred_list. The page freeing path for compound pages does not need to check these fields and now just checks ->mapping == TAIL_MAPPING for all subpages. So free_tail_page_prepare() has a new large_rmappable input to distinguish between a compound page and a folio. Signed-off-by: Zi Yan <ziy@nvidia.com> --- mm/hugetlb.c | 2 +- mm/internal.h | 44 ++++++++++++++++++++++++++------------------ mm/mm_init.c | 2 +- mm/page_alloc.c | 23 ++++++++++++++++++----- 4 files changed, 46 insertions(+), 25 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 7466c7bf41a1..231c91c3d93b 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -3204,7 +3204,7 @@ static void __init hugetlb_folio_init_vmemmap(struct folio *folio, ret = folio_ref_freeze(folio, 1); VM_BUG_ON(!ret); hugetlb_folio_init_tail_vmemmap(folio, 1, nr_pages); - prep_compound_head(&folio->page, huge_page_order(h)); + set_compound_order(&folio->page, huge_page_order(h)); } static bool __init hugetlb_bootmem_page_prehvo(struct huge_bootmem_page *m) diff --git a/mm/internal.h b/mm/internal.h index 8bb22fb9a0e1..4d72e915d623 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -854,30 +854,38 @@ static inline struct folio *page_rmappable_folio(struct page *page) { struct folio *folio = (struct folio *)page; - if (folio && folio_test_large(folio)) + if (folio && folio_test_large(folio)) { + unsigned int order = compound_order(page); + +#ifdef NR_PAGES_IN_LARGE_FOLIO + folio->_nr_pages = 1U << order; +#endif + atomic_set(&folio->_large_mapcount, -1); + if (IS_ENABLED(CONFIG_PAGE_MAPCOUNT)) + atomic_set(&folio->_nr_pages_mapped, 0); + if (IS_ENABLED(CONFIG_MM_ID)) { + folio->_mm_ids = 0; + folio->_mm_id_mapcount[0] = -1; + folio->_mm_id_mapcount[1] = -1; + } + if (IS_ENABLED(CONFIG_64BIT) || order > 1) { + atomic_set(&folio->_pincount, 0); + atomic_set(&folio->_entire_mapcount, -1); + } + if (order > 1) + INIT_LIST_HEAD(&folio->_deferred_list); folio_set_large_rmappable(folio); + } return folio; } -static inline void prep_compound_head(struct page *page, unsigned int order) +static inline void set_compound_order(struct page *page, unsigned int order) { - struct folio *folio = (struct folio *)page; + if (WARN_ON_ONCE(!order || !PageHead(page))) + return; + VM_WARN_ON_ONCE(order > MAX_FOLIO_ORDER); - folio_set_order(folio, order); - atomic_set(&folio->_large_mapcount, -1); - if (IS_ENABLED(CONFIG_PAGE_MAPCOUNT)) - atomic_set(&folio->_nr_pages_mapped, 0); - if (IS_ENABLED(CONFIG_MM_ID)) { - folio->_mm_ids = 0; - folio->_mm_id_mapcount[0] = -1; - folio->_mm_id_mapcount[1] = -1; - } - if (IS_ENABLED(CONFIG_64BIT) || order > 1) { - atomic_set(&folio->_pincount, 0); - atomic_set(&folio->_entire_mapcount, -1); - } - if (order > 1) - INIT_LIST_HEAD(&folio->_deferred_list); + page[1].flags.f = (page[1].flags.f & ~0xffUL) | order; } static inline void prep_compound_tail(struct page *head, int tail_idx) diff --git a/mm/mm_init.c b/mm/mm_init.c index 1a29a719af58..23a42a4af77b 100644 --- a/mm/mm_init.c +++ b/mm/mm_init.c @@ -1102,7 +1102,7 @@ static void __ref memmap_init_compound(struct page *head, prep_compound_tail(head, pfn - head_pfn); set_page_count(page, 0); } - prep_compound_head(head, order); + set_compound_order(head, order); } void __ref memmap_init_zone_device(struct zone *zone, diff --git a/mm/page_alloc.c b/mm/page_alloc.c index e4104973e22f..2194a6b3a062 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -746,7 +746,7 @@ void prep_compound_page(struct page *page, unsigned int order) for (i = 1; i < nr_pages; i++) prep_compound_tail(page, i); - prep_compound_head(page, order); + set_compound_order(page, order); } static inline void set_buddy_order(struct page *page, unsigned int order) @@ -1126,7 +1126,8 @@ static inline bool is_check_pages_enabled(void) return static_branch_unlikely(&check_pages_enabled); } -static int free_tail_page_prepare(struct page *head_page, struct page *page) +static int free_tail_page_prepare(struct page *head_page, struct page *page, + bool large_rmappable) { struct folio *folio = (struct folio *)head_page; int ret = 1; @@ -1141,6 +1142,13 @@ static int free_tail_page_prepare(struct page *head_page, struct page *page) ret = 0; goto out; } + if (!large_rmappable) { + if (page->mapping != TAIL_MAPPING) { + bad_page(page, "corrupted mapping in compound page's tail page"); + goto out; + } + goto skip_rmappable_checks; + } switch (page - head_page) { case 1: /* the first tail page: these may be in place of ->mapping */ @@ -1198,11 +1206,12 @@ static int free_tail_page_prepare(struct page *head_page, struct page *page) fallthrough; default: if (page->mapping != TAIL_MAPPING) { - bad_page(page, "corrupted mapping in tail page"); + bad_page(page, "corrupted mapping in folio's tail page"); goto out; } break; } +skip_rmappable_checks: if (unlikely(!PageTail(page))) { bad_page(page, "PageTail not set"); goto out; @@ -1392,17 +1401,21 @@ __always_inline bool free_pages_prepare(struct page *page, * avoid checking PageCompound for order-0 pages. */ if (unlikely(order)) { + bool large_rmappable = false; int i; if (compound) { + large_rmappable = folio_test_large_rmappable(folio); + /* clear compound order */ page[1].flags.f &= ~PAGE_FLAGS_SECOND; #ifdef NR_PAGES_IN_LARGE_FOLIO - folio->_nr_pages = 0; + if (large_rmappable) + folio->_nr_pages = 0; #endif } for (i = 1; i < (1 << order); i++) { if (compound) - bad += free_tail_page_prepare(page, page + i); + bad += free_tail_page_prepare(page, page + i, large_rmappable); if (is_check_pages_enabled()) { if (free_page_is_bad(page + i)) { bad++; -- 2.51.0
{ "author": "Zi Yan <ziy@nvidia.com>", "date": "Thu, 29 Jan 2026 22:48:18 -0500", "thread_id": "20260130034818.472804-1-ziy@nvidia.com.mbox.gz" }
lkml
[RFC PATCH 0/5] Separate compound page from folio
Hi all, Based on my discussion with Jason about device private folio reinitialization[1], I realize that the concepts of compound page and folio are mixed together and confusing, as people think a compound page is equal to a folio. This is not true, since a compound page means a group of pages is managed as a whole and it can be something other than a folio, for example, a slab page. To avoid further confusing people, this patchset separates compound page from folio by moving any folio related code out of compound page functions. The code is on top of mm-new (2026-01-28-20-27) and all mm selftests passed. The key change is that a compound page no longer sets: 1. folio->_nr_pages, 2. folio->_large_mapcount, 3. folio->_nr_pages_mapped, 4. folio->_mm_ids, 5. folio->_mm_id_mapcount, 6. folio->_pincount, 7. folio->_entire_mapcount, 8. folio->_deferred_list. Since these fields are only used by folios that are rmappable. The code setting these fields is moved to page_rmappable_folio(). To make the code move, this patchset also needs to changes several places, where folio and compound page are used interchangably or unusual folio use: 1. in io_mem_alloc_compound(), a compound page is allocated, but later it is mapped via vm_insert_pages() like a rmappable folio; 2. __split_folio_to_order() sets large_rmappable flag directly instead of using page_rmappable_folio() for after-split folios; 3. hugetlb unsets large_rmappable to escape deferred_list unqueue operation. At last, the page freeing path is also changed to have different checks for compound page and folio. One thing to note is that for compound page, I do not store compound order in folio->_nr_pages, which overlaps with page[1].memcg_data and use 1 << compound_order() instead, since I do not want to add a new union to struct page and compound_nr() is not as widely used as folio_nr_pages(). But let me know if there is a performance concern for this. Comments and suggestions are welcome. Link: https://lore.kernel.org/all/F7E3DF24-A37B-40A0-A507-CEF4AB76C44D@nvidia.com/ [1] Zi Yan (5): io_uring: allocate folio in io_mem_alloc_compound() and function rename mm/huge_memory: use page_rmappable_folio() to convert after-split folios mm/hugetlb: set large_rmappable on hugetlb and avoid deferred_list handling mm: only use struct page in compound_nr() and compound_order() mm: code separation for compound page and folio include/linux/mm.h | 12 ++++-------- io_uring/memmap.c | 12 ++++++------ mm/huge_memory.c | 5 ++--- mm/hugetlb.c | 8 ++++---- mm/hugetlb_cma.c | 2 +- mm/internal.h | 47 +++++++++++++++++++++++++++------------------- mm/mm_init.c | 2 +- mm/page_alloc.c | 23 ++++++++++++++++++----- 8 files changed, 64 insertions(+), 47 deletions(-) -- 2.51.0
syzbot ci has tested the following series [v1] Separate compound page from folio https://lore.kernel.org/all/20260130034818.472804-1-ziy@nvidia.com * [RFC PATCH 1/5] io_uring: allocate folio in io_mem_alloc_compound() and function rename * [RFC PATCH 2/5] mm/huge_memory: use page_rmappable_folio() to convert after-split folios * [RFC PATCH 3/5] mm/hugetlb: set large_rmappable on hugetlb and avoid deferred_list handling * [RFC PATCH 4/5] mm: only use struct page in compound_nr() and compound_order() * [RFC PATCH 5/5] mm: code separation for compound page and folio and found the following issue: WARNING in __folio_large_mapcount_sanity_checks Full report is available here: https://ci.syzbot.org/series/f64f0297-d388-4cfa-b3be-f05819d0ce34 *** WARNING in __folio_large_mapcount_sanity_checks tree: mm-new URL: https://kernel.googlesource.com/pub/scm/linux/kernel/git/akpm/mm.git base: 0241748f8b68fc2bf637f4901b9d7ca660d177ca arch: amd64 compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8 config: https://ci.syzbot.org/builds/76dc5ea6-0ff5-410b-8b1f-72e5607a704e/config C repro: https://ci.syzbot.org/findings/a308f1d6-69e2-4ebc-80a9-b51d9dc02851/c_repro syz repro: https://ci.syzbot.org/findings/a308f1d6-69e2-4ebc-80a9-b51d9dc02851/syz_repro ------------[ cut here ]------------ diff > folio_large_nr_pages(folio) WARNING: ./include/linux/rmap.h:148 at __folio_large_mapcount_sanity_checks+0x499/0x6b0 include/linux/rmap.h:148, CPU#1: syz.0.17/5988 Modules linked in: CPU: 1 UID: 0 PID: 5988 Comm: syz.0.17 Not tainted syzkaller #0 PREEMPT(full) Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 RIP: 0010:__folio_large_mapcount_sanity_checks+0x499/0x6b0 include/linux/rmap.h:148 Code: 5f 5d e9 4a 4e 64 09 cc e8 84 d8 aa ff 90 0f 0b 90 e9 82 fc ff ff e8 76 d8 aa ff 90 0f 0b 90 e9 8f fc ff ff e8 68 d8 aa ff 90 <0f> 0b 90 e9 b8 fc ff ff e8 5a d8 aa ff 90 0f 0b 90 e9 f2 fc ff ff RSP: 0018:ffffc900040e72f8 EFLAGS: 00010293 RAX: ffffffff8217c0f8 RBX: ffffea0006ef5c00 RCX: ffff888105fdba80 RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000000 RBP: 0000000000000001 R08: ffffea0006ef5c07 R09: 1ffffd4000ddeb80 R10: dffffc0000000000 R11: fffff94000ddeb81 R12: 0000000000000001 R13: 0000000000000000 R14: 1ffffd4000ddeb8f R15: ffffea0006ef5c78 FS: 00005555867b3500(0000) GS:ffff8882a9923000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00002000000000c0 CR3: 0000000103ab0000 CR4: 00000000000006f0 Call Trace: <TASK> folio_add_return_large_mapcount include/linux/rmap.h:184 [inline] __folio_add_rmap mm/rmap.c:1377 [inline] __folio_add_file_rmap mm/rmap.c:1696 [inline] folio_add_file_rmap_ptes+0x4c2/0xe60 mm/rmap.c:1722 insert_page_into_pte_locked+0x5ab/0x910 mm/memory.c:2378 insert_page+0x186/0x2d0 mm/memory.c:2398 packet_mmap+0x360/0x530 net/packet/af_packet.c:4622 vfs_mmap include/linux/fs.h:2053 [inline] mmap_file mm/internal.h:167 [inline] __mmap_new_file_vma mm/vma.c:2468 [inline] __mmap_new_vma mm/vma.c:2532 [inline] __mmap_region mm/vma.c:2759 [inline] mmap_region+0x18fe/0x2240 mm/vma.c:2837 do_mmap+0xc39/0x10c0 mm/mmap.c:559 vm_mmap_pgoff+0x2c9/0x4f0 mm/util.c:581 ksys_mmap_pgoff+0x51e/0x760 mm/mmap.c:605 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline] do_syscall_64+0xe2/0xf80 arch/x86/entry/syscall_64.c:94 entry_SYSCALL_64_after_hwframe+0x77/0x7f RIP: 0033:0x7f5d7399acb9 Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007ffe9f3eea78 EFLAGS: 00000246 ORIG_RAX: 0000000000000009 RAX: ffffffffffffffda RBX: 00007f5d73c15fa0 RCX: 00007f5d7399acb9 RDX: 0000000000000002 RSI: 0000000000030000 RDI: 0000200000000000 RBP: 00007f5d73a08bf7 R08: 0000000000000003 R09: 0000000000000000 R10: 0000000000000011 R11: 0000000000000246 R12: 0000000000000000 R13: 00007f5d73c15fac R14: 00007f5d73c15fa0 R15: 00007f5d73c15fa0 </TASK> *** If these findings have caused you to resend the series or submit a separate fix, please add the following tag to your commit message: Tested-by: syzbot@syzkaller.appspotmail.com --- This report is generated by a bot. It may contain errors. syzbot ci engineers can be reached at syzkaller@googlegroups.com.
{ "author": "syzbot ci <syzbot+ci7f632827e1b1c91b@syzkaller.appspotmail.com>", "date": "Fri, 30 Jan 2026 00:15:47 -0800", "thread_id": "20260130034818.472804-1-ziy@nvidia.com.mbox.gz" }
lkml
[RFC PATCH 0/5] Separate compound page from folio
Hi all, Based on my discussion with Jason about device private folio reinitialization[1], I realize that the concepts of compound page and folio are mixed together and confusing, as people think a compound page is equal to a folio. This is not true, since a compound page means a group of pages is managed as a whole and it can be something other than a folio, for example, a slab page. To avoid further confusing people, this patchset separates compound page from folio by moving any folio related code out of compound page functions. The code is on top of mm-new (2026-01-28-20-27) and all mm selftests passed. The key change is that a compound page no longer sets: 1. folio->_nr_pages, 2. folio->_large_mapcount, 3. folio->_nr_pages_mapped, 4. folio->_mm_ids, 5. folio->_mm_id_mapcount, 6. folio->_pincount, 7. folio->_entire_mapcount, 8. folio->_deferred_list. Since these fields are only used by folios that are rmappable. The code setting these fields is moved to page_rmappable_folio(). To make the code move, this patchset also needs to changes several places, where folio and compound page are used interchangably or unusual folio use: 1. in io_mem_alloc_compound(), a compound page is allocated, but later it is mapped via vm_insert_pages() like a rmappable folio; 2. __split_folio_to_order() sets large_rmappable flag directly instead of using page_rmappable_folio() for after-split folios; 3. hugetlb unsets large_rmappable to escape deferred_list unqueue operation. At last, the page freeing path is also changed to have different checks for compound page and folio. One thing to note is that for compound page, I do not store compound order in folio->_nr_pages, which overlaps with page[1].memcg_data and use 1 << compound_order() instead, since I do not want to add a new union to struct page and compound_nr() is not as widely used as folio_nr_pages(). But let me know if there is a performance concern for this. Comments and suggestions are welcome. Link: https://lore.kernel.org/all/F7E3DF24-A37B-40A0-A507-CEF4AB76C44D@nvidia.com/ [1] Zi Yan (5): io_uring: allocate folio in io_mem_alloc_compound() and function rename mm/huge_memory: use page_rmappable_folio() to convert after-split folios mm/hugetlb: set large_rmappable on hugetlb and avoid deferred_list handling mm: only use struct page in compound_nr() and compound_order() mm: code separation for compound page and folio include/linux/mm.h | 12 ++++-------- io_uring/memmap.c | 12 ++++++------ mm/huge_memory.c | 5 ++--- mm/hugetlb.c | 8 ++++---- mm/hugetlb_cma.c | 2 +- mm/internal.h | 47 +++++++++++++++++++++++++++------------------- mm/mm_init.c | 2 +- mm/page_alloc.c | 23 ++++++++++++++++++----- 8 files changed, 64 insertions(+), 47 deletions(-) -- 2.51.0
On 30 Jan 2026, at 3:15, syzbot ci wrote: The issue comes from alloc_one_pg_vec_page() in net/packet/af_packet.c. It allocates a compound page with __GFP_COMP, but latter does vm_insert_page() in packet_mmap(), using it as a folio. The fix below is a hack. We will need a get_free_folios() instead. I will check all __GFP_COMP callers to find out which ones are using it as a folio and which ones are using it as a compound page. I suspect most are using it as a folio. #syz test diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 2194a6b3a062..90858d20dfbe 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -5311,6 +5311,8 @@ unsigned long get_free_pages_noprof(gfp_t gfp_mask, unsigned int order) page = alloc_pages_noprof(gfp_mask & ~__GFP_HIGHMEM, order); if (!page) return 0; + if (gfp_mask & __GFP_COMP) + return (unsigned long)folio_address(page_rmappable_folio(page)); return (unsigned long) page_address(page); } EXPORT_SYMBOL(get_free_pages_noprof); Best Regards, Yan, Zi
{ "author": "Zi Yan <ziy@nvidia.com>", "date": "Fri, 30 Jan 2026 11:39:40 -0500", "thread_id": "20260130034818.472804-1-ziy@nvidia.com.mbox.gz" }
lkml
[RFC PATCH 0/5] Separate compound page from folio
Hi all, Based on my discussion with Jason about device private folio reinitialization[1], I realize that the concepts of compound page and folio are mixed together and confusing, as people think a compound page is equal to a folio. This is not true, since a compound page means a group of pages is managed as a whole and it can be something other than a folio, for example, a slab page. To avoid further confusing people, this patchset separates compound page from folio by moving any folio related code out of compound page functions. The code is on top of mm-new (2026-01-28-20-27) and all mm selftests passed. The key change is that a compound page no longer sets: 1. folio->_nr_pages, 2. folio->_large_mapcount, 3. folio->_nr_pages_mapped, 4. folio->_mm_ids, 5. folio->_mm_id_mapcount, 6. folio->_pincount, 7. folio->_entire_mapcount, 8. folio->_deferred_list. Since these fields are only used by folios that are rmappable. The code setting these fields is moved to page_rmappable_folio(). To make the code move, this patchset also needs to changes several places, where folio and compound page are used interchangably or unusual folio use: 1. in io_mem_alloc_compound(), a compound page is allocated, but later it is mapped via vm_insert_pages() like a rmappable folio; 2. __split_folio_to_order() sets large_rmappable flag directly instead of using page_rmappable_folio() for after-split folios; 3. hugetlb unsets large_rmappable to escape deferred_list unqueue operation. At last, the page freeing path is also changed to have different checks for compound page and folio. One thing to note is that for compound page, I do not store compound order in folio->_nr_pages, which overlaps with page[1].memcg_data and use 1 << compound_order() instead, since I do not want to add a new union to struct page and compound_nr() is not as widely used as folio_nr_pages(). But let me know if there is a performance concern for this. Comments and suggestions are welcome. Link: https://lore.kernel.org/all/F7E3DF24-A37B-40A0-A507-CEF4AB76C44D@nvidia.com/ [1] Zi Yan (5): io_uring: allocate folio in io_mem_alloc_compound() and function rename mm/huge_memory: use page_rmappable_folio() to convert after-split folios mm/hugetlb: set large_rmappable on hugetlb and avoid deferred_list handling mm: only use struct page in compound_nr() and compound_order() mm: code separation for compound page and folio include/linux/mm.h | 12 ++++-------- io_uring/memmap.c | 12 ++++++------ mm/huge_memory.c | 5 ++--- mm/hugetlb.c | 8 ++++---- mm/hugetlb_cma.c | 2 +- mm/internal.h | 47 +++++++++++++++++++++++++++------------------- mm/mm_init.c | 2 +- mm/page_alloc.c | 23 ++++++++++++++++++----- 8 files changed, 64 insertions(+), 47 deletions(-) -- 2.51.0
On 2026/1/30 11:48, Zi Yan wrote: Nit: Since we're switching to folio_alloc(), which already adds __GFP_COMP internally, the "else if (order)" part above can be dropped while at it. IIUC, for order == 0, __GFP_COMP gets ignored anyway: - prep_new_page() won't call prep_compound_page() (since order is zero) - page_rmappable_folio() sees a non-compound page and does nothing So no behavior change there :)
{ "author": "Lance Yang <lance.yang@linux.dev>", "date": "Sat, 31 Jan 2026 23:30:35 +0800", "thread_id": "20260130034818.472804-1-ziy@nvidia.com.mbox.gz" }
lkml
[RFC PATCH 0/5] Separate compound page from folio
Hi all, Based on my discussion with Jason about device private folio reinitialization[1], I realize that the concepts of compound page and folio are mixed together and confusing, as people think a compound page is equal to a folio. This is not true, since a compound page means a group of pages is managed as a whole and it can be something other than a folio, for example, a slab page. To avoid further confusing people, this patchset separates compound page from folio by moving any folio related code out of compound page functions. The code is on top of mm-new (2026-01-28-20-27) and all mm selftests passed. The key change is that a compound page no longer sets: 1. folio->_nr_pages, 2. folio->_large_mapcount, 3. folio->_nr_pages_mapped, 4. folio->_mm_ids, 5. folio->_mm_id_mapcount, 6. folio->_pincount, 7. folio->_entire_mapcount, 8. folio->_deferred_list. Since these fields are only used by folios that are rmappable. The code setting these fields is moved to page_rmappable_folio(). To make the code move, this patchset also needs to changes several places, where folio and compound page are used interchangably or unusual folio use: 1. in io_mem_alloc_compound(), a compound page is allocated, but later it is mapped via vm_insert_pages() like a rmappable folio; 2. __split_folio_to_order() sets large_rmappable flag directly instead of using page_rmappable_folio() for after-split folios; 3. hugetlb unsets large_rmappable to escape deferred_list unqueue operation. At last, the page freeing path is also changed to have different checks for compound page and folio. One thing to note is that for compound page, I do not store compound order in folio->_nr_pages, which overlaps with page[1].memcg_data and use 1 << compound_order() instead, since I do not want to add a new union to struct page and compound_nr() is not as widely used as folio_nr_pages(). But let me know if there is a performance concern for this. Comments and suggestions are welcome. Link: https://lore.kernel.org/all/F7E3DF24-A37B-40A0-A507-CEF4AB76C44D@nvidia.com/ [1] Zi Yan (5): io_uring: allocate folio in io_mem_alloc_compound() and function rename mm/huge_memory: use page_rmappable_folio() to convert after-split folios mm/hugetlb: set large_rmappable on hugetlb and avoid deferred_list handling mm: only use struct page in compound_nr() and compound_order() mm: code separation for compound page and folio include/linux/mm.h | 12 ++++-------- io_uring/memmap.c | 12 ++++++------ mm/huge_memory.c | 5 ++--- mm/hugetlb.c | 8 ++++---- mm/hugetlb_cma.c | 2 +- mm/internal.h | 47 +++++++++++++++++++++++++++------------------- mm/mm_init.c | 2 +- mm/page_alloc.c | 23 ++++++++++++++++++----- 8 files changed, 64 insertions(+), 47 deletions(-) -- 2.51.0
On 31 Jan 2026, at 10:30, Lance Yang wrote: Sure. Will update it in the next version. Thanks. -- Best Regards, Yan, Zi
{ "author": "Zi Yan <ziy@nvidia.com>", "date": "Sat, 31 Jan 2026 21:04:53 -0500", "thread_id": "20260130034818.472804-1-ziy@nvidia.com.mbox.gz" }
lkml
[RFC PATCH 0/5] Separate compound page from folio
Hi all, Based on my discussion with Jason about device private folio reinitialization[1], I realize that the concepts of compound page and folio are mixed together and confusing, as people think a compound page is equal to a folio. This is not true, since a compound page means a group of pages is managed as a whole and it can be something other than a folio, for example, a slab page. To avoid further confusing people, this patchset separates compound page from folio by moving any folio related code out of compound page functions. The code is on top of mm-new (2026-01-28-20-27) and all mm selftests passed. The key change is that a compound page no longer sets: 1. folio->_nr_pages, 2. folio->_large_mapcount, 3. folio->_nr_pages_mapped, 4. folio->_mm_ids, 5. folio->_mm_id_mapcount, 6. folio->_pincount, 7. folio->_entire_mapcount, 8. folio->_deferred_list. Since these fields are only used by folios that are rmappable. The code setting these fields is moved to page_rmappable_folio(). To make the code move, this patchset also needs to changes several places, where folio and compound page are used interchangably or unusual folio use: 1. in io_mem_alloc_compound(), a compound page is allocated, but later it is mapped via vm_insert_pages() like a rmappable folio; 2. __split_folio_to_order() sets large_rmappable flag directly instead of using page_rmappable_folio() for after-split folios; 3. hugetlb unsets large_rmappable to escape deferred_list unqueue operation. At last, the page freeing path is also changed to have different checks for compound page and folio. One thing to note is that for compound page, I do not store compound order in folio->_nr_pages, which overlaps with page[1].memcg_data and use 1 << compound_order() instead, since I do not want to add a new union to struct page and compound_nr() is not as widely used as folio_nr_pages(). But let me know if there is a performance concern for this. Comments and suggestions are welcome. Link: https://lore.kernel.org/all/F7E3DF24-A37B-40A0-A507-CEF4AB76C44D@nvidia.com/ [1] Zi Yan (5): io_uring: allocate folio in io_mem_alloc_compound() and function rename mm/huge_memory: use page_rmappable_folio() to convert after-split folios mm/hugetlb: set large_rmappable on hugetlb and avoid deferred_list handling mm: only use struct page in compound_nr() and compound_order() mm: code separation for compound page and folio include/linux/mm.h | 12 ++++-------- io_uring/memmap.c | 12 ++++++------ mm/huge_memory.c | 5 ++--- mm/hugetlb.c | 8 ++++---- mm/hugetlb_cma.c | 2 +- mm/internal.h | 47 +++++++++++++++++++++++++++------------------- mm/mm_init.c | 2 +- mm/page_alloc.c | 23 ++++++++++++++++++----- 8 files changed, 64 insertions(+), 47 deletions(-) -- 2.51.0
On 1/30/26 11:48 AM, Zi Yan wrote: IIUC, this will break the semantics of the is_transparent_hugepage() and might trigger a split of a hugetlb folio, right? static inline bool is_transparent_hugepage(const struct folio *folio) { if (!folio_test_large(folio)) return false; return is_huge_zero_folio(folio) || folio_test_large_rmappable(folio); }
{ "author": "Baolin Wang <baolin.wang@linux.alibaba.com>", "date": "Mon, 2 Feb 2026 11:59:39 +0800", "thread_id": "20260130034818.472804-1-ziy@nvidia.com.mbox.gz" }
lkml
[RFC PATCH 0/5] Separate compound page from folio
Hi all, Based on my discussion with Jason about device private folio reinitialization[1], I realize that the concepts of compound page and folio are mixed together and confusing, as people think a compound page is equal to a folio. This is not true, since a compound page means a group of pages is managed as a whole and it can be something other than a folio, for example, a slab page. To avoid further confusing people, this patchset separates compound page from folio by moving any folio related code out of compound page functions. The code is on top of mm-new (2026-01-28-20-27) and all mm selftests passed. The key change is that a compound page no longer sets: 1. folio->_nr_pages, 2. folio->_large_mapcount, 3. folio->_nr_pages_mapped, 4. folio->_mm_ids, 5. folio->_mm_id_mapcount, 6. folio->_pincount, 7. folio->_entire_mapcount, 8. folio->_deferred_list. Since these fields are only used by folios that are rmappable. The code setting these fields is moved to page_rmappable_folio(). To make the code move, this patchset also needs to changes several places, where folio and compound page are used interchangably or unusual folio use: 1. in io_mem_alloc_compound(), a compound page is allocated, but later it is mapped via vm_insert_pages() like a rmappable folio; 2. __split_folio_to_order() sets large_rmappable flag directly instead of using page_rmappable_folio() for after-split folios; 3. hugetlb unsets large_rmappable to escape deferred_list unqueue operation. At last, the page freeing path is also changed to have different checks for compound page and folio. One thing to note is that for compound page, I do not store compound order in folio->_nr_pages, which overlaps with page[1].memcg_data and use 1 << compound_order() instead, since I do not want to add a new union to struct page and compound_nr() is not as widely used as folio_nr_pages(). But let me know if there is a performance concern for this. Comments and suggestions are welcome. Link: https://lore.kernel.org/all/F7E3DF24-A37B-40A0-A507-CEF4AB76C44D@nvidia.com/ [1] Zi Yan (5): io_uring: allocate folio in io_mem_alloc_compound() and function rename mm/huge_memory: use page_rmappable_folio() to convert after-split folios mm/hugetlb: set large_rmappable on hugetlb and avoid deferred_list handling mm: only use struct page in compound_nr() and compound_order() mm: code separation for compound page and folio include/linux/mm.h | 12 ++++-------- io_uring/memmap.c | 12 ++++++------ mm/huge_memory.c | 5 ++--- mm/hugetlb.c | 8 ++++---- mm/hugetlb_cma.c | 2 +- mm/internal.h | 47 +++++++++++++++++++++++++++------------------- mm/mm_init.c | 2 +- mm/page_alloc.c | 23 ++++++++++++++++++----- 8 files changed, 64 insertions(+), 47 deletions(-) -- 2.51.0
On 1 Feb 2026, at 22:59, Baolin Wang wrote: Oh, I missed this. I will check all folio_test_large_rmappable() callers and filter out hugetlb if necessary. Thank you for pointing this out. Best Regards, Yan, Zi
{ "author": "Zi Yan <ziy@nvidia.com>", "date": "Mon, 02 Feb 2026 12:11:45 -0500", "thread_id": "20260130034818.472804-1-ziy@nvidia.com.mbox.gz" }
lkml
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
The Qualcomm automotive SA8255p SoC relies on firmware to configure platform resources, including clocks, interconnects and TLMM. The driver requests resources operations over SCMI using power and performance protocols. The SCMI power protocol enables or disables resources like clocks, interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs, such as resume/suspend, to control power states(on/off). The SCMI performance protocol manages I2C frequency, with each frequency rate represented by a performance level. The driver uses geni_se_set_perf_opp() API to request the desired frequency rate.. As part of geni_se_set_perf_opp(), the OPP for the requested frequency is obtained using dev_pm_opp_find_freq_floor() and the performance level is set using dev_pm_opp_set_opp(). Praveen Talari (13): soc: qcom: geni-se: Refactor geni_icc_get() and make qup-memory ICC path optional soc: qcom: geni-se: Add geni_icc_set_bw_ab() function soc: qcom: geni-se: Introduce helper API for resource initialization soc: qcom: geni-se: Handle core clk in geni_se_clks_off() and geni_se_clks_on() soc: qcom: geni-se: Add resources activation/deactivation helpers soc: qcom: geni-se: Introduce helper API for attaching power domains soc: qcom: geni-se: Introduce helper APIs for performance control dt-bindings: i2c: Describe SA8255p i2c: qcom-geni: Isolate serial engine setup i2c: qcom-geni: Move resource initialization to separate function i2c: qcom-geni: Use resources helper APIs in runtime PM functions i2c: qcom-geni: Store of_device_id data in driver private struct i2c: qcom-geni: Enable I2C on SA8255p Qualcomm platforms --- v3->v4 - Added a new patch(4/13) to handle core clk as part of geni_se_clks_off/on(). --- .../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 ++++ drivers/i2c/busses/i2c-qcom-geni.c | 303 +++++++++--------- drivers/soc/qcom/qcom-geni-se.c | 265 +++++++++++++-- include/linux/soc/qcom/geni-se.h | 19 ++ 4 files changed, 476 insertions(+), 175 deletions(-) create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml base-commit: 193579fe01389bc21aff0051d13f24e8ea95b47d -- 2.34.1
The "qup-memory" interconnect path is optional and may not be defined in all device trees. Unroll the loop-based ICC path initialization to allow specific error handling for each path type. The "qup-core" and "qup-config" paths remain mandatory and will fail probe if missing, while "qup-memory" is now handled as optional and skipped when not present in the device tree. Co-developed-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com> Signed-off-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com> Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com> --- v1->v2: Bjorn: - Updated commit text. - Used local variable for more readable. --- drivers/soc/qcom/qcom-geni-se.c | 36 +++++++++++++++++---------------- 1 file changed, 19 insertions(+), 17 deletions(-) diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c index cd1779b6a91a..b6167b968ef6 100644 --- a/drivers/soc/qcom/qcom-geni-se.c +++ b/drivers/soc/qcom/qcom-geni-se.c @@ -899,30 +899,32 @@ EXPORT_SYMBOL_GPL(geni_se_rx_dma_unprep); int geni_icc_get(struct geni_se *se, const char *icc_ddr) { - int i, err; - const char *icc_names[] = {"qup-core", "qup-config", icc_ddr}; + struct geni_icc_path *icc_paths = se->icc_paths; if (has_acpi_companion(se->dev)) return 0; - for (i = 0; i < ARRAY_SIZE(se->icc_paths); i++) { - if (!icc_names[i]) - continue; - - se->icc_paths[i].path = devm_of_icc_get(se->dev, icc_names[i]); - if (IS_ERR(se->icc_paths[i].path)) - goto err; + icc_paths[GENI_TO_CORE].path = devm_of_icc_get(se->dev, "qup-core"); + if (IS_ERR(icc_paths[GENI_TO_CORE].path)) + return dev_err_probe(se->dev, PTR_ERR(icc_paths[GENI_TO_CORE].path), + "Failed to get 'qup-core' ICC path\n"); + + icc_paths[CPU_TO_GENI].path = devm_of_icc_get(se->dev, "qup-config"); + if (IS_ERR(icc_paths[CPU_TO_GENI].path)) + return dev_err_probe(se->dev, PTR_ERR(icc_paths[CPU_TO_GENI].path), + "Failed to get 'qup-config' ICC path\n"); + + /* The DDR path is optional, depending on protocol and hw capabilities */ + icc_paths[GENI_TO_DDR].path = devm_of_icc_get(se->dev, "qup-memory"); + if (IS_ERR(icc_paths[GENI_TO_DDR].path)) { + if (PTR_ERR(icc_paths[GENI_TO_DDR].path) == -ENODATA) + icc_paths[GENI_TO_DDR].path = NULL; + else + return dev_err_probe(se->dev, PTR_ERR(icc_paths[GENI_TO_DDR].path), + "Failed to get 'qup-memory' ICC path\n"); } return 0; - -err: - err = PTR_ERR(se->icc_paths[i].path); - if (err != -EPROBE_DEFER) - dev_err_ratelimited(se->dev, "Failed to get ICC path '%s': %d\n", - icc_names[i], err); - return err; - } EXPORT_SYMBOL_GPL(geni_icc_get); -- 2.34.1
{ "author": "Praveen Talari <praveen.talari@oss.qualcomm.com>", "date": "Mon, 2 Feb 2026 23:39:10 +0530", "thread_id": "20260202180922.1692428-12-praveen.talari@oss.qualcomm.com.mbox.gz" }
lkml
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
The Qualcomm automotive SA8255p SoC relies on firmware to configure platform resources, including clocks, interconnects and TLMM. The driver requests resources operations over SCMI using power and performance protocols. The SCMI power protocol enables or disables resources like clocks, interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs, such as resume/suspend, to control power states(on/off). The SCMI performance protocol manages I2C frequency, with each frequency rate represented by a performance level. The driver uses geni_se_set_perf_opp() API to request the desired frequency rate.. As part of geni_se_set_perf_opp(), the OPP for the requested frequency is obtained using dev_pm_opp_find_freq_floor() and the performance level is set using dev_pm_opp_set_opp(). Praveen Talari (13): soc: qcom: geni-se: Refactor geni_icc_get() and make qup-memory ICC path optional soc: qcom: geni-se: Add geni_icc_set_bw_ab() function soc: qcom: geni-se: Introduce helper API for resource initialization soc: qcom: geni-se: Handle core clk in geni_se_clks_off() and geni_se_clks_on() soc: qcom: geni-se: Add resources activation/deactivation helpers soc: qcom: geni-se: Introduce helper API for attaching power domains soc: qcom: geni-se: Introduce helper APIs for performance control dt-bindings: i2c: Describe SA8255p i2c: qcom-geni: Isolate serial engine setup i2c: qcom-geni: Move resource initialization to separate function i2c: qcom-geni: Use resources helper APIs in runtime PM functions i2c: qcom-geni: Store of_device_id data in driver private struct i2c: qcom-geni: Enable I2C on SA8255p Qualcomm platforms --- v3->v4 - Added a new patch(4/13) to handle core clk as part of geni_se_clks_off/on(). --- .../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 ++++ drivers/i2c/busses/i2c-qcom-geni.c | 303 +++++++++--------- drivers/soc/qcom/qcom-geni-se.c | 265 +++++++++++++-- include/linux/soc/qcom/geni-se.h | 19 ++ 4 files changed, 476 insertions(+), 175 deletions(-) create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml base-commit: 193579fe01389bc21aff0051d13f24e8ea95b47d -- 2.34.1
Add a new function geni_icc_set_bw_ab() that allows callers to set average bandwidth values for all ICC (Interconnect) paths in a single call. This function takes separate parameters for core, config, and DDR average bandwidth values and applies them to the respective ICC paths. This provides a more convenient API for drivers that need to configure specific average bandwidth values. Co-developed-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com> Signed-off-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com> Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com> --- drivers/soc/qcom/qcom-geni-se.c | 22 ++++++++++++++++++++++ include/linux/soc/qcom/geni-se.h | 1 + 2 files changed, 23 insertions(+) diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c index b6167b968ef6..b0542f836453 100644 --- a/drivers/soc/qcom/qcom-geni-se.c +++ b/drivers/soc/qcom/qcom-geni-se.c @@ -946,6 +946,28 @@ int geni_icc_set_bw(struct geni_se *se) } EXPORT_SYMBOL_GPL(geni_icc_set_bw); +/** + * geni_icc_set_bw_ab() - Set average bandwidth for all ICC paths and apply + * @se: Pointer to the concerned serial engine. + * @core_ab: Average bandwidth in kBps for GENI_TO_CORE path. + * @cfg_ab: Average bandwidth in kBps for CPU_TO_GENI path. + * @ddr_ab: Average bandwidth in kBps for GENI_TO_DDR path. + * + * Sets bandwidth values for all ICC paths and applies them. DDR path is + * optional and only set if it exists. + * + * Return: 0 on success, negative error code on failure. + */ +int geni_icc_set_bw_ab(struct geni_se *se, u32 core_ab, u32 cfg_ab, u32 ddr_ab) +{ + se->icc_paths[GENI_TO_CORE].avg_bw = core_ab; + se->icc_paths[CPU_TO_GENI].avg_bw = cfg_ab; + se->icc_paths[GENI_TO_DDR].avg_bw = ddr_ab; + + return geni_icc_set_bw(se); +} +EXPORT_SYMBOL_GPL(geni_icc_set_bw_ab); + void geni_icc_set_tag(struct geni_se *se, u32 tag) { int i; diff --git a/include/linux/soc/qcom/geni-se.h b/include/linux/soc/qcom/geni-se.h index 0a984e2579fe..980aabea2157 100644 --- a/include/linux/soc/qcom/geni-se.h +++ b/include/linux/soc/qcom/geni-se.h @@ -528,6 +528,7 @@ void geni_se_rx_dma_unprep(struct geni_se *se, dma_addr_t iova, size_t len); int geni_icc_get(struct geni_se *se, const char *icc_ddr); int geni_icc_set_bw(struct geni_se *se); +int geni_icc_set_bw_ab(struct geni_se *se, u32 core_ab, u32 cfg_ab, u32 ddr_ab); void geni_icc_set_tag(struct geni_se *se, u32 tag); int geni_icc_enable(struct geni_se *se); -- 2.34.1
{ "author": "Praveen Talari <praveen.talari@oss.qualcomm.com>", "date": "Mon, 2 Feb 2026 23:39:11 +0530", "thread_id": "20260202180922.1692428-12-praveen.talari@oss.qualcomm.com.mbox.gz" }
lkml
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
The Qualcomm automotive SA8255p SoC relies on firmware to configure platform resources, including clocks, interconnects and TLMM. The driver requests resources operations over SCMI using power and performance protocols. The SCMI power protocol enables or disables resources like clocks, interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs, such as resume/suspend, to control power states(on/off). The SCMI performance protocol manages I2C frequency, with each frequency rate represented by a performance level. The driver uses geni_se_set_perf_opp() API to request the desired frequency rate.. As part of geni_se_set_perf_opp(), the OPP for the requested frequency is obtained using dev_pm_opp_find_freq_floor() and the performance level is set using dev_pm_opp_set_opp(). Praveen Talari (13): soc: qcom: geni-se: Refactor geni_icc_get() and make qup-memory ICC path optional soc: qcom: geni-se: Add geni_icc_set_bw_ab() function soc: qcom: geni-se: Introduce helper API for resource initialization soc: qcom: geni-se: Handle core clk in geni_se_clks_off() and geni_se_clks_on() soc: qcom: geni-se: Add resources activation/deactivation helpers soc: qcom: geni-se: Introduce helper API for attaching power domains soc: qcom: geni-se: Introduce helper APIs for performance control dt-bindings: i2c: Describe SA8255p i2c: qcom-geni: Isolate serial engine setup i2c: qcom-geni: Move resource initialization to separate function i2c: qcom-geni: Use resources helper APIs in runtime PM functions i2c: qcom-geni: Store of_device_id data in driver private struct i2c: qcom-geni: Enable I2C on SA8255p Qualcomm platforms --- v3->v4 - Added a new patch(4/13) to handle core clk as part of geni_se_clks_off/on(). --- .../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 ++++ drivers/i2c/busses/i2c-qcom-geni.c | 303 +++++++++--------- drivers/soc/qcom/qcom-geni-se.c | 265 +++++++++++++-- include/linux/soc/qcom/geni-se.h | 19 ++ 4 files changed, 476 insertions(+), 175 deletions(-) create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml base-commit: 193579fe01389bc21aff0051d13f24e8ea95b47d -- 2.34.1
The GENI Serial Engine drivers (I2C, SPI, and SERIAL) currently duplicate code for initializing shared resources such as clocks and interconnect paths. Introduce a new helper API, geni_se_resources_init(), to centralize this initialization logic, improving modularity and simplifying the probe function. Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com> --- v1 -> v2: - Updated proper return value for devm_pm_opp_set_clkname() --- drivers/soc/qcom/qcom-geni-se.c | 47 ++++++++++++++++++++++++++++++++ include/linux/soc/qcom/geni-se.h | 6 ++++ 2 files changed, 53 insertions(+) diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c index b0542f836453..75e722cd1a94 100644 --- a/drivers/soc/qcom/qcom-geni-se.c +++ b/drivers/soc/qcom/qcom-geni-se.c @@ -19,6 +19,7 @@ #include <linux/of_platform.h> #include <linux/pinctrl/consumer.h> #include <linux/platform_device.h> +#include <linux/pm_opp.h> #include <linux/soc/qcom/geni-se.h> /** @@ -1012,6 +1013,52 @@ int geni_icc_disable(struct geni_se *se) } EXPORT_SYMBOL_GPL(geni_icc_disable); +/** + * geni_se_resources_init() - Initialize resources for a GENI SE device. + * @se: Pointer to the geni_se structure representing the GENI SE device. + * + * This function initializes various resources required by the GENI Serial Engine + * (SE) device, including clock resources (core and SE clocks), interconnect + * paths for communication. + * It retrieves optional and mandatory clock resources, adds an OF-based + * operating performance point (OPP) table, and sets up interconnect paths + * with default bandwidths. The function also sets a flag (`has_opp`) to + * indicate whether OPP support is available for the device. + * + * Return: 0 on success, or a negative errno on failure. + */ +int geni_se_resources_init(struct geni_se *se) +{ + int ret; + + se->core_clk = devm_clk_get_optional(se->dev, "core"); + if (IS_ERR(se->core_clk)) + return dev_err_probe(se->dev, PTR_ERR(se->core_clk), + "Failed to get optional core clk\n"); + + se->clk = devm_clk_get(se->dev, "se"); + if (IS_ERR(se->clk) && !has_acpi_companion(se->dev)) + return dev_err_probe(se->dev, PTR_ERR(se->clk), + "Failed to get SE clk\n"); + + ret = devm_pm_opp_set_clkname(se->dev, "se"); + if (ret) + return ret; + + ret = devm_pm_opp_of_add_table(se->dev); + if (ret && ret != -ENODEV) + return dev_err_probe(se->dev, ret, "Failed to add OPP table\n"); + + se->has_opp = (ret == 0); + + ret = geni_icc_get(se, "qup-memory"); + if (ret) + return ret; + + return geni_icc_set_bw_ab(se, GENI_DEFAULT_BW, GENI_DEFAULT_BW, GENI_DEFAULT_BW); +} +EXPORT_SYMBOL_GPL(geni_se_resources_init); + /** * geni_find_protocol_fw() - Locate and validate SE firmware for a protocol. * @dev: Pointer to the device structure. diff --git a/include/linux/soc/qcom/geni-se.h b/include/linux/soc/qcom/geni-se.h index 980aabea2157..c182dd0f0bde 100644 --- a/include/linux/soc/qcom/geni-se.h +++ b/include/linux/soc/qcom/geni-se.h @@ -60,18 +60,22 @@ struct geni_icc_path { * @dev: Pointer to the Serial Engine device * @wrapper: Pointer to the parent QUP Wrapper core * @clk: Handle to the core serial engine clock + * @core_clk: Auxiliary clock, which may be required by a protocol * @num_clk_levels: Number of valid clock levels in clk_perf_tbl * @clk_perf_tbl: Table of clock frequency input to serial engine clock * @icc_paths: Array of ICC paths for SE + * @has_opp: Indicates if OPP is supported */ struct geni_se { void __iomem *base; struct device *dev; struct geni_wrapper *wrapper; struct clk *clk; + struct clk *core_clk; unsigned int num_clk_levels; unsigned long *clk_perf_tbl; struct geni_icc_path icc_paths[3]; + bool has_opp; }; /* Common SE registers */ @@ -535,6 +539,8 @@ int geni_icc_enable(struct geni_se *se); int geni_icc_disable(struct geni_se *se); +int geni_se_resources_init(struct geni_se *se); + int geni_load_se_firmware(struct geni_se *se, enum geni_se_protocol_type protocol); #endif #endif -- 2.34.1
{ "author": "Praveen Talari <praveen.talari@oss.qualcomm.com>", "date": "Mon, 2 Feb 2026 23:39:12 +0530", "thread_id": "20260202180922.1692428-12-praveen.talari@oss.qualcomm.com.mbox.gz" }
lkml
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
The Qualcomm automotive SA8255p SoC relies on firmware to configure platform resources, including clocks, interconnects and TLMM. The driver requests resources operations over SCMI using power and performance protocols. The SCMI power protocol enables or disables resources like clocks, interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs, such as resume/suspend, to control power states(on/off). The SCMI performance protocol manages I2C frequency, with each frequency rate represented by a performance level. The driver uses geni_se_set_perf_opp() API to request the desired frequency rate.. As part of geni_se_set_perf_opp(), the OPP for the requested frequency is obtained using dev_pm_opp_find_freq_floor() and the performance level is set using dev_pm_opp_set_opp(). Praveen Talari (13): soc: qcom: geni-se: Refactor geni_icc_get() and make qup-memory ICC path optional soc: qcom: geni-se: Add geni_icc_set_bw_ab() function soc: qcom: geni-se: Introduce helper API for resource initialization soc: qcom: geni-se: Handle core clk in geni_se_clks_off() and geni_se_clks_on() soc: qcom: geni-se: Add resources activation/deactivation helpers soc: qcom: geni-se: Introduce helper API for attaching power domains soc: qcom: geni-se: Introduce helper APIs for performance control dt-bindings: i2c: Describe SA8255p i2c: qcom-geni: Isolate serial engine setup i2c: qcom-geni: Move resource initialization to separate function i2c: qcom-geni: Use resources helper APIs in runtime PM functions i2c: qcom-geni: Store of_device_id data in driver private struct i2c: qcom-geni: Enable I2C on SA8255p Qualcomm platforms --- v3->v4 - Added a new patch(4/13) to handle core clk as part of geni_se_clks_off/on(). --- .../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 ++++ drivers/i2c/busses/i2c-qcom-geni.c | 303 +++++++++--------- drivers/soc/qcom/qcom-geni-se.c | 265 +++++++++++++-- include/linux/soc/qcom/geni-se.h | 19 ++ 4 files changed, 476 insertions(+), 175 deletions(-) create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml base-commit: 193579fe01389bc21aff0051d13f24e8ea95b47d -- 2.34.1
Currently, core clk is handled individually in protocol drivers like the I2C driver. Move this clock management to the common clock APIs (geni_se_clks_on/off) that are already present in the common GENI SE driver to maintain consistency across all protocol drivers. Core clk is now properly managed alongside the other clocks (se->clk and wrapper clocks) in the fundamental clock control functions, eliminating the need for individual protocol drivers to handle this clock separately. Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com> --- drivers/soc/qcom/qcom-geni-se.c | 14 +++++++++++++- 1 file changed, 13 insertions(+), 1 deletion(-) diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c index 75e722cd1a94..2e41595ff912 100644 --- a/drivers/soc/qcom/qcom-geni-se.c +++ b/drivers/soc/qcom/qcom-geni-se.c @@ -583,6 +583,7 @@ static void geni_se_clks_off(struct geni_se *se) clk_disable_unprepare(se->clk); clk_bulk_disable_unprepare(wrapper->num_clks, wrapper->clks); + clk_disable_unprepare(se->core_clk); } /** @@ -619,7 +620,18 @@ static int geni_se_clks_on(struct geni_se *se) ret = clk_prepare_enable(se->clk); if (ret) - clk_bulk_disable_unprepare(wrapper->num_clks, wrapper->clks); + goto err_bulk_clks; + + ret = clk_prepare_enable(se->core_clk); + if (ret) + goto err_se_clk; + + return 0; + +err_se_clk: + clk_disable_unprepare(se->clk); +err_bulk_clks: + clk_bulk_disable_unprepare(wrapper->num_clks, wrapper->clks); return ret; } -- 2.34.1
{ "author": "Praveen Talari <praveen.talari@oss.qualcomm.com>", "date": "Mon, 2 Feb 2026 23:39:13 +0530", "thread_id": "20260202180922.1692428-12-praveen.talari@oss.qualcomm.com.mbox.gz" }
lkml
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
The Qualcomm automotive SA8255p SoC relies on firmware to configure platform resources, including clocks, interconnects and TLMM. The driver requests resources operations over SCMI using power and performance protocols. The SCMI power protocol enables or disables resources like clocks, interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs, such as resume/suspend, to control power states(on/off). The SCMI performance protocol manages I2C frequency, with each frequency rate represented by a performance level. The driver uses geni_se_set_perf_opp() API to request the desired frequency rate.. As part of geni_se_set_perf_opp(), the OPP for the requested frequency is obtained using dev_pm_opp_find_freq_floor() and the performance level is set using dev_pm_opp_set_opp(). Praveen Talari (13): soc: qcom: geni-se: Refactor geni_icc_get() and make qup-memory ICC path optional soc: qcom: geni-se: Add geni_icc_set_bw_ab() function soc: qcom: geni-se: Introduce helper API for resource initialization soc: qcom: geni-se: Handle core clk in geni_se_clks_off() and geni_se_clks_on() soc: qcom: geni-se: Add resources activation/deactivation helpers soc: qcom: geni-se: Introduce helper API for attaching power domains soc: qcom: geni-se: Introduce helper APIs for performance control dt-bindings: i2c: Describe SA8255p i2c: qcom-geni: Isolate serial engine setup i2c: qcom-geni: Move resource initialization to separate function i2c: qcom-geni: Use resources helper APIs in runtime PM functions i2c: qcom-geni: Store of_device_id data in driver private struct i2c: qcom-geni: Enable I2C on SA8255p Qualcomm platforms --- v3->v4 - Added a new patch(4/13) to handle core clk as part of geni_se_clks_off/on(). --- .../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 ++++ drivers/i2c/busses/i2c-qcom-geni.c | 303 +++++++++--------- drivers/soc/qcom/qcom-geni-se.c | 265 +++++++++++++-- include/linux/soc/qcom/geni-se.h | 19 ++ 4 files changed, 476 insertions(+), 175 deletions(-) create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml base-commit: 193579fe01389bc21aff0051d13f24e8ea95b47d -- 2.34.1
The GENI SE protocol drivers (I2C, SPI, UART) implement similar resource activation/deactivation sequences independently, leading to code duplication. Introduce geni_se_resources_activate()/geni_se_resources_deactivate() to power on/off resources.The activate function enables ICC, clocks, and TLMM whereas the deactivate function disables resources in reverse order including OPP rate reset, clocks, ICC and TLMM. Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com> --- v3 -> v4 Konrad - Removed core clk. v2 -> v3 - Added export symbol for new APIs. v1 -> v2 Bjorn - Updated commit message based code changes. - Removed geni_se_resource_state() API. - Utilized code snippet from geni_se_resources_off() --- drivers/soc/qcom/qcom-geni-se.c | 67 ++++++++++++++++++++++++++++++++ include/linux/soc/qcom/geni-se.h | 4 ++ 2 files changed, 71 insertions(+) diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c index 2e41595ff912..17ab5bbeb621 100644 --- a/drivers/soc/qcom/qcom-geni-se.c +++ b/drivers/soc/qcom/qcom-geni-se.c @@ -1025,6 +1025,73 @@ int geni_icc_disable(struct geni_se *se) } EXPORT_SYMBOL_GPL(geni_icc_disable); +/** + * geni_se_resources_deactivate() - Deactivate GENI SE device resources + * @se: Pointer to the geni_se structure + * + * Deactivates device resources for power saving: OPP rate to 0, pin control + * to sleep state, turns off clocks, and disables interconnect. Skips ACPI devices. + * + * Return: 0 on success, negative error code on failure + */ +int geni_se_resources_deactivate(struct geni_se *se) +{ + int ret; + + if (has_acpi_companion(se->dev)) + return 0; + + if (se->has_opp) + dev_pm_opp_set_rate(se->dev, 0); + + ret = pinctrl_pm_select_sleep_state(se->dev); + if (ret) + return ret; + + geni_se_clks_off(se); + + return geni_icc_disable(se); +} +EXPORT_SYMBOL_GPL(geni_se_resources_deactivate); + +/** + * geni_se_resources_activate() - Activate GENI SE device resources + * @se: Pointer to the geni_se structure + * + * Activates device resources for operation: enables interconnect, prepares clocks, + * and sets pin control to default state. Includes error cleanup. Skips ACPI devices. + * + * Return: 0 on success, negative error code on failure + */ +int geni_se_resources_activate(struct geni_se *se) +{ + int ret; + + if (has_acpi_companion(se->dev)) + return 0; + + ret = geni_icc_enable(se); + if (ret) + return ret; + + ret = geni_se_clks_on(se); + if (ret) + goto out_icc_disable; + + ret = pinctrl_pm_select_default_state(se->dev); + if (ret) { + geni_se_clks_off(se); + goto out_icc_disable; + } + + return ret; + +out_icc_disable: + geni_icc_disable(se); + return ret; +} +EXPORT_SYMBOL_GPL(geni_se_resources_activate); + /** * geni_se_resources_init() - Initialize resources for a GENI SE device. * @se: Pointer to the geni_se structure representing the GENI SE device. diff --git a/include/linux/soc/qcom/geni-se.h b/include/linux/soc/qcom/geni-se.h index c182dd0f0bde..36a68149345c 100644 --- a/include/linux/soc/qcom/geni-se.h +++ b/include/linux/soc/qcom/geni-se.h @@ -541,6 +541,10 @@ int geni_icc_disable(struct geni_se *se); int geni_se_resources_init(struct geni_se *se); +int geni_se_resources_activate(struct geni_se *se); + +int geni_se_resources_deactivate(struct geni_se *se); + int geni_load_se_firmware(struct geni_se *se, enum geni_se_protocol_type protocol); #endif #endif -- 2.34.1
{ "author": "Praveen Talari <praveen.talari@oss.qualcomm.com>", "date": "Mon, 2 Feb 2026 23:39:14 +0530", "thread_id": "20260202180922.1692428-12-praveen.talari@oss.qualcomm.com.mbox.gz" }
lkml
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
The Qualcomm automotive SA8255p SoC relies on firmware to configure platform resources, including clocks, interconnects and TLMM. The driver requests resources operations over SCMI using power and performance protocols. The SCMI power protocol enables or disables resources like clocks, interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs, such as resume/suspend, to control power states(on/off). The SCMI performance protocol manages I2C frequency, with each frequency rate represented by a performance level. The driver uses geni_se_set_perf_opp() API to request the desired frequency rate.. As part of geni_se_set_perf_opp(), the OPP for the requested frequency is obtained using dev_pm_opp_find_freq_floor() and the performance level is set using dev_pm_opp_set_opp(). Praveen Talari (13): soc: qcom: geni-se: Refactor geni_icc_get() and make qup-memory ICC path optional soc: qcom: geni-se: Add geni_icc_set_bw_ab() function soc: qcom: geni-se: Introduce helper API for resource initialization soc: qcom: geni-se: Handle core clk in geni_se_clks_off() and geni_se_clks_on() soc: qcom: geni-se: Add resources activation/deactivation helpers soc: qcom: geni-se: Introduce helper API for attaching power domains soc: qcom: geni-se: Introduce helper APIs for performance control dt-bindings: i2c: Describe SA8255p i2c: qcom-geni: Isolate serial engine setup i2c: qcom-geni: Move resource initialization to separate function i2c: qcom-geni: Use resources helper APIs in runtime PM functions i2c: qcom-geni: Store of_device_id data in driver private struct i2c: qcom-geni: Enable I2C on SA8255p Qualcomm platforms --- v3->v4 - Added a new patch(4/13) to handle core clk as part of geni_se_clks_off/on(). --- .../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 ++++ drivers/i2c/busses/i2c-qcom-geni.c | 303 +++++++++--------- drivers/soc/qcom/qcom-geni-se.c | 265 +++++++++++++-- include/linux/soc/qcom/geni-se.h | 19 ++ 4 files changed, 476 insertions(+), 175 deletions(-) create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml base-commit: 193579fe01389bc21aff0051d13f24e8ea95b47d -- 2.34.1
The GENI Serial Engine drivers (I2C, SPI, and SERIAL) currently handle the attachment of power domains. This often leads to duplicated code logic across different driver probe functions. Introduce a new helper API, geni_se_domain_attach(), to centralize the logic for attaching "power" and "perf" domains to the GENI SE device. Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com> --- v3->v4 Konrad - Updated function documentation --- drivers/soc/qcom/qcom-geni-se.c | 29 +++++++++++++++++++++++++++++ include/linux/soc/qcom/geni-se.h | 4 ++++ 2 files changed, 33 insertions(+) diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c index 17ab5bbeb621..d80ae6c36582 100644 --- a/drivers/soc/qcom/qcom-geni-se.c +++ b/drivers/soc/qcom/qcom-geni-se.c @@ -19,6 +19,7 @@ #include <linux/of_platform.h> #include <linux/pinctrl/consumer.h> #include <linux/platform_device.h> +#include <linux/pm_domain.h> #include <linux/pm_opp.h> #include <linux/soc/qcom/geni-se.h> @@ -1092,6 +1093,34 @@ int geni_se_resources_activate(struct geni_se *se) } EXPORT_SYMBOL_GPL(geni_se_resources_activate); +/** + * geni_se_domain_attach() - Attach power domains to a GENI SE device. + * @se: Pointer to the geni_se structure representing the GENI SE device. + * + * This function attaches the power domains ("power" and "perf") required + * in the SCMI auto-VM environment to the GENI Serial Engine device. It + * initializes se->pd_list with the attached domains. + * + * Return: 0 on success, or a negative error code on failure. + */ +int geni_se_domain_attach(struct geni_se *se) +{ + struct dev_pm_domain_attach_data pd_data = { + .pd_flags = PD_FLAG_DEV_LINK_ON, + .pd_names = (const char*[]) { "power", "perf" }, + .num_pd_names = 2, + }; + int ret; + + ret = dev_pm_domain_attach_list(se->dev, + &pd_data, &se->pd_list); + if (ret <= 0) + return -EINVAL; + + return 0; +} +EXPORT_SYMBOL_GPL(geni_se_domain_attach); + /** * geni_se_resources_init() - Initialize resources for a GENI SE device. * @se: Pointer to the geni_se structure representing the GENI SE device. diff --git a/include/linux/soc/qcom/geni-se.h b/include/linux/soc/qcom/geni-se.h index 36a68149345c..5f75159c5531 100644 --- a/include/linux/soc/qcom/geni-se.h +++ b/include/linux/soc/qcom/geni-se.h @@ -64,6 +64,7 @@ struct geni_icc_path { * @num_clk_levels: Number of valid clock levels in clk_perf_tbl * @clk_perf_tbl: Table of clock frequency input to serial engine clock * @icc_paths: Array of ICC paths for SE + * @pd_list: Power domain list for managing power domains * @has_opp: Indicates if OPP is supported */ struct geni_se { @@ -75,6 +76,7 @@ struct geni_se { unsigned int num_clk_levels; unsigned long *clk_perf_tbl; struct geni_icc_path icc_paths[3]; + struct dev_pm_domain_list *pd_list; bool has_opp; }; @@ -546,5 +548,7 @@ int geni_se_resources_activate(struct geni_se *se); int geni_se_resources_deactivate(struct geni_se *se); int geni_load_se_firmware(struct geni_se *se, enum geni_se_protocol_type protocol); + +int geni_se_domain_attach(struct geni_se *se); #endif #endif -- 2.34.1
{ "author": "Praveen Talari <praveen.talari@oss.qualcomm.com>", "date": "Mon, 2 Feb 2026 23:39:15 +0530", "thread_id": "20260202180922.1692428-12-praveen.talari@oss.qualcomm.com.mbox.gz" }
lkml
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
The Qualcomm automotive SA8255p SoC relies on firmware to configure platform resources, including clocks, interconnects and TLMM. The driver requests resources operations over SCMI using power and performance protocols. The SCMI power protocol enables or disables resources like clocks, interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs, such as resume/suspend, to control power states(on/off). The SCMI performance protocol manages I2C frequency, with each frequency rate represented by a performance level. The driver uses geni_se_set_perf_opp() API to request the desired frequency rate.. As part of geni_se_set_perf_opp(), the OPP for the requested frequency is obtained using dev_pm_opp_find_freq_floor() and the performance level is set using dev_pm_opp_set_opp(). Praveen Talari (13): soc: qcom: geni-se: Refactor geni_icc_get() and make qup-memory ICC path optional soc: qcom: geni-se: Add geni_icc_set_bw_ab() function soc: qcom: geni-se: Introduce helper API for resource initialization soc: qcom: geni-se: Handle core clk in geni_se_clks_off() and geni_se_clks_on() soc: qcom: geni-se: Add resources activation/deactivation helpers soc: qcom: geni-se: Introduce helper API for attaching power domains soc: qcom: geni-se: Introduce helper APIs for performance control dt-bindings: i2c: Describe SA8255p i2c: qcom-geni: Isolate serial engine setup i2c: qcom-geni: Move resource initialization to separate function i2c: qcom-geni: Use resources helper APIs in runtime PM functions i2c: qcom-geni: Store of_device_id data in driver private struct i2c: qcom-geni: Enable I2C on SA8255p Qualcomm platforms --- v3->v4 - Added a new patch(4/13) to handle core clk as part of geni_se_clks_off/on(). --- .../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 ++++ drivers/i2c/busses/i2c-qcom-geni.c | 303 +++++++++--------- drivers/soc/qcom/qcom-geni-se.c | 265 +++++++++++++-- include/linux/soc/qcom/geni-se.h | 19 ++ 4 files changed, 476 insertions(+), 175 deletions(-) create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml base-commit: 193579fe01389bc21aff0051d13f24e8ea95b47d -- 2.34.1
The GENI Serial Engine (SE) drivers (I2C, SPI, and SERIAL) currently manage performance levels and operating points directly. This resulting in code duplication across drivers. such as configuring a specific level or find and apply an OPP based on a clock frequency. Introduce two new helper APIs, geni_se_set_perf_level() and geni_se_set_perf_opp(), addresses this issue by providing a streamlined method for the GENI Serial Engine (SE) drivers to find and set the OPP based on the desired performance level, thereby eliminating redundancy. Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com> --- drivers/soc/qcom/qcom-geni-se.c | 50 ++++++++++++++++++++++++++++++++ include/linux/soc/qcom/geni-se.h | 4 +++ 2 files changed, 54 insertions(+) diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c index d80ae6c36582..2241d1487031 100644 --- a/drivers/soc/qcom/qcom-geni-se.c +++ b/drivers/soc/qcom/qcom-geni-se.c @@ -282,6 +282,12 @@ struct se_fw_hdr { #define geni_setbits32(_addr, _v) writel(readl(_addr) | (_v), _addr) #define geni_clrbits32(_addr, _v) writel(readl(_addr) & ~(_v), _addr) +enum domain_idx { + DOMAIN_IDX_POWER, + DOMAIN_IDX_PERF, + DOMAIN_IDX_MAX +}; + /** * geni_se_get_qup_hw_version() - Read the QUP wrapper Hardware version * @se: Pointer to the corresponding serial engine. @@ -1093,6 +1099,50 @@ int geni_se_resources_activate(struct geni_se *se) } EXPORT_SYMBOL_GPL(geni_se_resources_activate); +/** + * geni_se_set_perf_level() - Set performance level for GENI SE. + * @se: Pointer to the struct geni_se instance. + * @level: The desired performance level. + * + * Sets the performance level by directly calling dev_pm_opp_set_level + * on the performance device associated with the SE. + * + * Return: 0 on success, or a negative error code on failure. + */ +int geni_se_set_perf_level(struct geni_se *se, unsigned long level) +{ + return dev_pm_opp_set_level(se->pd_list->pd_devs[DOMAIN_IDX_PERF], level); +} +EXPORT_SYMBOL_GPL(geni_se_set_perf_level); + +/** + * geni_se_set_perf_opp() - Set performance OPP for GENI SE by frequency. + * @se: Pointer to the struct geni_se instance. + * @clk_freq: The requested clock frequency. + * + * Finds the nearest operating performance point (OPP) for the given + * clock frequency and applies it to the SE's performance device. + * + * Return: 0 on success, or a negative error code on failure. + */ +int geni_se_set_perf_opp(struct geni_se *se, unsigned long clk_freq) +{ + struct device *perf_dev = se->pd_list->pd_devs[DOMAIN_IDX_PERF]; + struct dev_pm_opp *opp; + int ret; + + opp = dev_pm_opp_find_freq_floor(perf_dev, &clk_freq); + if (IS_ERR(opp)) { + dev_err(se->dev, "failed to find opp for freq %lu\n", clk_freq); + return PTR_ERR(opp); + } + + ret = dev_pm_opp_set_opp(perf_dev, opp); + dev_pm_opp_put(opp); + return ret; +} +EXPORT_SYMBOL_GPL(geni_se_set_perf_opp); + /** * geni_se_domain_attach() - Attach power domains to a GENI SE device. * @se: Pointer to the geni_se structure representing the GENI SE device. diff --git a/include/linux/soc/qcom/geni-se.h b/include/linux/soc/qcom/geni-se.h index 5f75159c5531..c5e6ab85df09 100644 --- a/include/linux/soc/qcom/geni-se.h +++ b/include/linux/soc/qcom/geni-se.h @@ -550,5 +550,9 @@ int geni_se_resources_deactivate(struct geni_se *se); int geni_load_se_firmware(struct geni_se *se, enum geni_se_protocol_type protocol); int geni_se_domain_attach(struct geni_se *se); + +int geni_se_set_perf_level(struct geni_se *se, unsigned long level); + +int geni_se_set_perf_opp(struct geni_se *se, unsigned long clk_freq); #endif #endif -- 2.34.1
{ "author": "Praveen Talari <praveen.talari@oss.qualcomm.com>", "date": "Mon, 2 Feb 2026 23:39:16 +0530", "thread_id": "20260202180922.1692428-12-praveen.talari@oss.qualcomm.com.mbox.gz" }
lkml
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
The Qualcomm automotive SA8255p SoC relies on firmware to configure platform resources, including clocks, interconnects and TLMM. The driver requests resources operations over SCMI using power and performance protocols. The SCMI power protocol enables or disables resources like clocks, interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs, such as resume/suspend, to control power states(on/off). The SCMI performance protocol manages I2C frequency, with each frequency rate represented by a performance level. The driver uses geni_se_set_perf_opp() API to request the desired frequency rate.. As part of geni_se_set_perf_opp(), the OPP for the requested frequency is obtained using dev_pm_opp_find_freq_floor() and the performance level is set using dev_pm_opp_set_opp(). Praveen Talari (13): soc: qcom: geni-se: Refactor geni_icc_get() and make qup-memory ICC path optional soc: qcom: geni-se: Add geni_icc_set_bw_ab() function soc: qcom: geni-se: Introduce helper API for resource initialization soc: qcom: geni-se: Handle core clk in geni_se_clks_off() and geni_se_clks_on() soc: qcom: geni-se: Add resources activation/deactivation helpers soc: qcom: geni-se: Introduce helper API for attaching power domains soc: qcom: geni-se: Introduce helper APIs for performance control dt-bindings: i2c: Describe SA8255p i2c: qcom-geni: Isolate serial engine setup i2c: qcom-geni: Move resource initialization to separate function i2c: qcom-geni: Use resources helper APIs in runtime PM functions i2c: qcom-geni: Store of_device_id data in driver private struct i2c: qcom-geni: Enable I2C on SA8255p Qualcomm platforms --- v3->v4 - Added a new patch(4/13) to handle core clk as part of geni_se_clks_off/on(). --- .../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 ++++ drivers/i2c/busses/i2c-qcom-geni.c | 303 +++++++++--------- drivers/soc/qcom/qcom-geni-se.c | 265 +++++++++++++-- include/linux/soc/qcom/geni-se.h | 19 ++ 4 files changed, 476 insertions(+), 175 deletions(-) create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml base-commit: 193579fe01389bc21aff0051d13f24e8ea95b47d -- 2.34.1
Add DT bindings for the QUP GENI I2C controller on sa8255p platforms. SA8255p platform abstracts resources such as clocks, interconnect and GPIO pins configuration in Firmware. SCMI power and perf protocol are utilized to request resource configurations. SA8255p platform does not require the Serial Engine (SE) common properties as the SE firmware is loaded and managed by the TrustZone (TZ) secure environment. Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@oss.qualcomm.com> Co-developed-by: Nikunj Kela <quic_nkela@quicinc.com> Signed-off-by: Nikunj Kela <quic_nkela@quicinc.com> Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com> --- v2->v3: - Added Reviewed-by tag v1->v2: Krzysztof: - Added dma properties in example node - Removed minItems from power-domains property - Added in commit text about common property --- .../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 +++++++++++++++++++ 1 file changed, 64 insertions(+) create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml diff --git a/Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml b/Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml new file mode 100644 index 000000000000..a61e40b5cbc1 --- /dev/null +++ b/Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml @@ -0,0 +1,64 @@ +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/i2c/qcom,sa8255p-geni-i2c.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: Qualcomm SA8255p QUP GENI I2C Controller + +maintainers: + - Praveen Talari <praveen.talari@oss.qualcomm.com> + +properties: + compatible: + const: qcom,sa8255p-geni-i2c + + reg: + maxItems: 1 + + dmas: + maxItems: 2 + + dma-names: + items: + - const: tx + - const: rx + + interrupts: + maxItems: 1 + + power-domains: + maxItems: 2 + + power-domain-names: + items: + - const: power + - const: perf + +required: + - compatible + - reg + - interrupts + - power-domains + +allOf: + - $ref: /schemas/i2c/i2c-controller.yaml# + +unevaluatedProperties: false + +examples: + - | + #include <dt-bindings/interrupt-controller/arm-gic.h> + #include <dt-bindings/dma/qcom-gpi.h> + + i2c@a90000 { + compatible = "qcom,sa8255p-geni-i2c"; + reg = <0xa90000 0x4000>; + interrupts = <GIC_SPI 357 IRQ_TYPE_LEVEL_HIGH>; + dmas = <&gpi_dma0 0 0 QCOM_GPI_I2C>, + <&gpi_dma0 1 0 QCOM_GPI_I2C>; + dma-names = "tx", "rx"; + power-domains = <&scmi0_pd 0>, <&scmi0_dvfs 0>; + power-domain-names = "power", "perf"; + }; +... -- 2.34.1
{ "author": "Praveen Talari <praveen.talari@oss.qualcomm.com>", "date": "Mon, 2 Feb 2026 23:39:17 +0530", "thread_id": "20260202180922.1692428-12-praveen.talari@oss.qualcomm.com.mbox.gz" }
lkml
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
The Qualcomm automotive SA8255p SoC relies on firmware to configure platform resources, including clocks, interconnects and TLMM. The driver requests resources operations over SCMI using power and performance protocols. The SCMI power protocol enables or disables resources like clocks, interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs, such as resume/suspend, to control power states(on/off). The SCMI performance protocol manages I2C frequency, with each frequency rate represented by a performance level. The driver uses geni_se_set_perf_opp() API to request the desired frequency rate.. As part of geni_se_set_perf_opp(), the OPP for the requested frequency is obtained using dev_pm_opp_find_freq_floor() and the performance level is set using dev_pm_opp_set_opp(). Praveen Talari (13): soc: qcom: geni-se: Refactor geni_icc_get() and make qup-memory ICC path optional soc: qcom: geni-se: Add geni_icc_set_bw_ab() function soc: qcom: geni-se: Introduce helper API for resource initialization soc: qcom: geni-se: Handle core clk in geni_se_clks_off() and geni_se_clks_on() soc: qcom: geni-se: Add resources activation/deactivation helpers soc: qcom: geni-se: Introduce helper API for attaching power domains soc: qcom: geni-se: Introduce helper APIs for performance control dt-bindings: i2c: Describe SA8255p i2c: qcom-geni: Isolate serial engine setup i2c: qcom-geni: Move resource initialization to separate function i2c: qcom-geni: Use resources helper APIs in runtime PM functions i2c: qcom-geni: Store of_device_id data in driver private struct i2c: qcom-geni: Enable I2C on SA8255p Qualcomm platforms --- v3->v4 - Added a new patch(4/13) to handle core clk as part of geni_se_clks_off/on(). --- .../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 ++++ drivers/i2c/busses/i2c-qcom-geni.c | 303 +++++++++--------- drivers/soc/qcom/qcom-geni-se.c | 265 +++++++++++++-- include/linux/soc/qcom/geni-se.h | 19 ++ 4 files changed, 476 insertions(+), 175 deletions(-) create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml base-commit: 193579fe01389bc21aff0051d13f24e8ea95b47d -- 2.34.1
Moving the serial engine setup to geni_i2c_init() API for a cleaner probe function and utilizes the PM runtime API to control resources instead of direct clock-related APIs for better resource management. Enables reusability of the serial engine initialization like hibernation and deep sleep features where hardware context is lost. Acked-by: Viken Dadhaniya <viken.dadhaniya@oss.qualcomm.com> Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com> --- v3->v4: viken: - Added Acked-by tag - Removed extra space before invoke of geni_i2c_init(). v1->v2: Bjorn: - Updated commit text. --- drivers/i2c/busses/i2c-qcom-geni.c | 158 ++++++++++++++--------------- 1 file changed, 75 insertions(+), 83 deletions(-) diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c index ae609bdd2ec4..81ed1596ac9f 100644 --- a/drivers/i2c/busses/i2c-qcom-geni.c +++ b/drivers/i2c/busses/i2c-qcom-geni.c @@ -977,10 +977,77 @@ static int setup_gpi_dma(struct geni_i2c_dev *gi2c) return ret; } +static int geni_i2c_init(struct geni_i2c_dev *gi2c) +{ + const struct geni_i2c_desc *desc = NULL; + u32 proto, tx_depth; + bool fifo_disable; + int ret; + + ret = pm_runtime_resume_and_get(gi2c->se.dev); + if (ret < 0) { + dev_err(gi2c->se.dev, "error turning on device :%d\n", ret); + return ret; + } + + proto = geni_se_read_proto(&gi2c->se); + if (proto == GENI_SE_INVALID_PROTO) { + ret = geni_load_se_firmware(&gi2c->se, GENI_SE_I2C); + if (ret) { + dev_err_probe(gi2c->se.dev, ret, "i2c firmware load failed ret: %d\n", ret); + goto err; + } + } else if (proto != GENI_SE_I2C) { + ret = dev_err_probe(gi2c->se.dev, -ENXIO, "Invalid proto %d\n", proto); + goto err; + } + + desc = device_get_match_data(gi2c->se.dev); + if (desc && desc->no_dma_support) { + fifo_disable = false; + gi2c->no_dma = true; + } else { + fifo_disable = readl_relaxed(gi2c->se.base + GENI_IF_DISABLE_RO) & FIFO_IF_DISABLE; + } + + if (fifo_disable) { + /* FIFO is disabled, so we can only use GPI DMA */ + gi2c->gpi_mode = true; + ret = setup_gpi_dma(gi2c); + if (ret) + goto err; + + dev_dbg(gi2c->se.dev, "Using GPI DMA mode for I2C\n"); + } else { + gi2c->gpi_mode = false; + tx_depth = geni_se_get_tx_fifo_depth(&gi2c->se); + + /* I2C Master Hub Serial Elements doesn't have the HW_PARAM_0 register */ + if (!tx_depth && desc) + tx_depth = desc->tx_fifo_depth; + + if (!tx_depth) { + ret = dev_err_probe(gi2c->se.dev, -EINVAL, + "Invalid TX FIFO depth\n"); + goto err; + } + + gi2c->tx_wm = tx_depth - 1; + geni_se_init(&gi2c->se, gi2c->tx_wm, tx_depth); + geni_se_config_packing(&gi2c->se, BITS_PER_BYTE, + PACKING_BYTES_PW, true, true, true); + + dev_dbg(gi2c->se.dev, "i2c fifo/se-dma mode. fifo depth:%d\n", tx_depth); + } + +err: + pm_runtime_put(gi2c->se.dev); + return ret; +} + static int geni_i2c_probe(struct platform_device *pdev) { struct geni_i2c_dev *gi2c; - u32 proto, tx_depth, fifo_disable; int ret; struct device *dev = &pdev->dev; const struct geni_i2c_desc *desc = NULL; @@ -1060,102 +1127,27 @@ static int geni_i2c_probe(struct platform_device *pdev) if (ret) return ret; - ret = clk_prepare_enable(gi2c->core_clk); - if (ret) - return ret; - - ret = geni_se_resources_on(&gi2c->se); - if (ret) { - dev_err_probe(dev, ret, "Error turning on resources\n"); - goto err_clk; - } - proto = geni_se_read_proto(&gi2c->se); - if (proto == GENI_SE_INVALID_PROTO) { - ret = geni_load_se_firmware(&gi2c->se, GENI_SE_I2C); - if (ret) { - dev_err_probe(dev, ret, "i2c firmware load failed ret: %d\n", ret); - goto err_resources; - } - } else if (proto != GENI_SE_I2C) { - ret = dev_err_probe(dev, -ENXIO, "Invalid proto %d\n", proto); - goto err_resources; - } - - if (desc && desc->no_dma_support) { - fifo_disable = false; - gi2c->no_dma = true; - } else { - fifo_disable = readl_relaxed(gi2c->se.base + GENI_IF_DISABLE_RO) & FIFO_IF_DISABLE; - } - - if (fifo_disable) { - /* FIFO is disabled, so we can only use GPI DMA */ - gi2c->gpi_mode = true; - ret = setup_gpi_dma(gi2c); - if (ret) - goto err_resources; - - dev_dbg(dev, "Using GPI DMA mode for I2C\n"); - } else { - gi2c->gpi_mode = false; - tx_depth = geni_se_get_tx_fifo_depth(&gi2c->se); - - /* I2C Master Hub Serial Elements doesn't have the HW_PARAM_0 register */ - if (!tx_depth && desc) - tx_depth = desc->tx_fifo_depth; - - if (!tx_depth) { - ret = dev_err_probe(dev, -EINVAL, - "Invalid TX FIFO depth\n"); - goto err_resources; - } - - gi2c->tx_wm = tx_depth - 1; - geni_se_init(&gi2c->se, gi2c->tx_wm, tx_depth); - geni_se_config_packing(&gi2c->se, BITS_PER_BYTE, - PACKING_BYTES_PW, true, true, true); - - dev_dbg(dev, "i2c fifo/se-dma mode. fifo depth:%d\n", tx_depth); - } - - clk_disable_unprepare(gi2c->core_clk); - ret = geni_se_resources_off(&gi2c->se); - if (ret) { - dev_err_probe(dev, ret, "Error turning off resources\n"); - goto err_dma; - } - - ret = geni_icc_disable(&gi2c->se); - if (ret) - goto err_dma; - gi2c->suspended = 1; pm_runtime_set_suspended(gi2c->se.dev); pm_runtime_set_autosuspend_delay(gi2c->se.dev, I2C_AUTO_SUSPEND_DELAY); pm_runtime_use_autosuspend(gi2c->se.dev); pm_runtime_enable(gi2c->se.dev); + ret = geni_i2c_init(gi2c); + if (ret < 0) { + pm_runtime_disable(gi2c->se.dev); + return ret; + } + ret = i2c_add_adapter(&gi2c->adap); if (ret) { dev_err_probe(dev, ret, "Error adding i2c adapter\n"); pm_runtime_disable(gi2c->se.dev); - goto err_dma; + return ret; } dev_dbg(dev, "Geni-I2C adaptor successfully added\n"); - return ret; - -err_resources: - geni_se_resources_off(&gi2c->se); -err_clk: - clk_disable_unprepare(gi2c->core_clk); - - return ret; - -err_dma: - release_gpi_dma(gi2c); - return ret; } -- 2.34.1
{ "author": "Praveen Talari <praveen.talari@oss.qualcomm.com>", "date": "Mon, 2 Feb 2026 23:39:18 +0530", "thread_id": "20260202180922.1692428-12-praveen.talari@oss.qualcomm.com.mbox.gz" }
lkml
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
The Qualcomm automotive SA8255p SoC relies on firmware to configure platform resources, including clocks, interconnects and TLMM. The driver requests resources operations over SCMI using power and performance protocols. The SCMI power protocol enables or disables resources like clocks, interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs, such as resume/suspend, to control power states(on/off). The SCMI performance protocol manages I2C frequency, with each frequency rate represented by a performance level. The driver uses geni_se_set_perf_opp() API to request the desired frequency rate.. As part of geni_se_set_perf_opp(), the OPP for the requested frequency is obtained using dev_pm_opp_find_freq_floor() and the performance level is set using dev_pm_opp_set_opp(). Praveen Talari (13): soc: qcom: geni-se: Refactor geni_icc_get() and make qup-memory ICC path optional soc: qcom: geni-se: Add geni_icc_set_bw_ab() function soc: qcom: geni-se: Introduce helper API for resource initialization soc: qcom: geni-se: Handle core clk in geni_se_clks_off() and geni_se_clks_on() soc: qcom: geni-se: Add resources activation/deactivation helpers soc: qcom: geni-se: Introduce helper API for attaching power domains soc: qcom: geni-se: Introduce helper APIs for performance control dt-bindings: i2c: Describe SA8255p i2c: qcom-geni: Isolate serial engine setup i2c: qcom-geni: Move resource initialization to separate function i2c: qcom-geni: Use resources helper APIs in runtime PM functions i2c: qcom-geni: Store of_device_id data in driver private struct i2c: qcom-geni: Enable I2C on SA8255p Qualcomm platforms --- v3->v4 - Added a new patch(4/13) to handle core clk as part of geni_se_clks_off/on(). --- .../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 ++++ drivers/i2c/busses/i2c-qcom-geni.c | 303 +++++++++--------- drivers/soc/qcom/qcom-geni-se.c | 265 +++++++++++++-- include/linux/soc/qcom/geni-se.h | 19 ++ 4 files changed, 476 insertions(+), 175 deletions(-) create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml base-commit: 193579fe01389bc21aff0051d13f24e8ea95b47d -- 2.34.1
Refactor the resource initialization in geni_i2c_probe() by introducing a new geni_i2c_resources_init() function and utilizing the common geni_se_resources_init() framework and clock frequency mapping, making the probe function cleaner. Acked-by: Viken Dadhaniya <viken.dadhaniya@oss.qualcomm.com> Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com> --- v3->v4: - Added Acked-by tag. v1->v2: - Updated commit text. --- drivers/i2c/busses/i2c-qcom-geni.c | 53 ++++++++++++------------------ 1 file changed, 21 insertions(+), 32 deletions(-) diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c index 81ed1596ac9f..56eebefda75f 100644 --- a/drivers/i2c/busses/i2c-qcom-geni.c +++ b/drivers/i2c/busses/i2c-qcom-geni.c @@ -1045,6 +1045,23 @@ static int geni_i2c_init(struct geni_i2c_dev *gi2c) return ret; } +static int geni_i2c_resources_init(struct geni_i2c_dev *gi2c) +{ + int ret; + + ret = geni_se_resources_init(&gi2c->se); + if (ret) + return ret; + + ret = geni_i2c_clk_map_idx(gi2c); + if (ret) + return dev_err_probe(gi2c->se.dev, ret, "Invalid clk frequency %d Hz\n", + gi2c->clk_freq_out); + + return geni_icc_set_bw_ab(&gi2c->se, GENI_DEFAULT_BW, GENI_DEFAULT_BW, + Bps_to_icc(gi2c->clk_freq_out)); +} + static int geni_i2c_probe(struct platform_device *pdev) { struct geni_i2c_dev *gi2c; @@ -1064,16 +1081,6 @@ static int geni_i2c_probe(struct platform_device *pdev) desc = device_get_match_data(&pdev->dev); - if (desc && desc->has_core_clk) { - gi2c->core_clk = devm_clk_get(dev, "core"); - if (IS_ERR(gi2c->core_clk)) - return PTR_ERR(gi2c->core_clk); - } - - gi2c->se.clk = devm_clk_get(dev, "se"); - if (IS_ERR(gi2c->se.clk) && !has_acpi_companion(dev)) - return PTR_ERR(gi2c->se.clk); - ret = device_property_read_u32(dev, "clock-frequency", &gi2c->clk_freq_out); if (ret) { @@ -1088,16 +1095,15 @@ static int geni_i2c_probe(struct platform_device *pdev) if (gi2c->irq < 0) return gi2c->irq; - ret = geni_i2c_clk_map_idx(gi2c); - if (ret) - return dev_err_probe(dev, ret, "Invalid clk frequency %d Hz\n", - gi2c->clk_freq_out); - gi2c->adap.algo = &geni_i2c_algo; init_completion(&gi2c->done); spin_lock_init(&gi2c->lock); platform_set_drvdata(pdev, gi2c); + ret = geni_i2c_resources_init(gi2c); + if (ret) + return ret; + /* Keep interrupts disabled initially to allow for low-power modes */ ret = devm_request_irq(dev, gi2c->irq, geni_i2c_irq, IRQF_NO_AUTOEN, dev_name(dev), gi2c); @@ -1110,23 +1116,6 @@ static int geni_i2c_probe(struct platform_device *pdev) gi2c->adap.dev.of_node = dev->of_node; strscpy(gi2c->adap.name, "Geni-I2C", sizeof(gi2c->adap.name)); - ret = geni_icc_get(&gi2c->se, desc ? desc->icc_ddr : "qup-memory"); - if (ret) - return ret; - /* - * Set the bus quota for core and cpu to a reasonable value for - * register access. - * Set quota for DDR based on bus speed. - */ - gi2c->se.icc_paths[GENI_TO_CORE].avg_bw = GENI_DEFAULT_BW; - gi2c->se.icc_paths[CPU_TO_GENI].avg_bw = GENI_DEFAULT_BW; - if (!desc || desc->icc_ddr) - gi2c->se.icc_paths[GENI_TO_DDR].avg_bw = Bps_to_icc(gi2c->clk_freq_out); - - ret = geni_icc_set_bw(&gi2c->se); - if (ret) - return ret; - gi2c->suspended = 1; pm_runtime_set_suspended(gi2c->se.dev); pm_runtime_set_autosuspend_delay(gi2c->se.dev, I2C_AUTO_SUSPEND_DELAY); -- 2.34.1
{ "author": "Praveen Talari <praveen.talari@oss.qualcomm.com>", "date": "Mon, 2 Feb 2026 23:39:19 +0530", "thread_id": "20260202180922.1692428-12-praveen.talari@oss.qualcomm.com.mbox.gz" }
lkml
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
The Qualcomm automotive SA8255p SoC relies on firmware to configure platform resources, including clocks, interconnects and TLMM. The driver requests resources operations over SCMI using power and performance protocols. The SCMI power protocol enables or disables resources like clocks, interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs, such as resume/suspend, to control power states(on/off). The SCMI performance protocol manages I2C frequency, with each frequency rate represented by a performance level. The driver uses geni_se_set_perf_opp() API to request the desired frequency rate.. As part of geni_se_set_perf_opp(), the OPP for the requested frequency is obtained using dev_pm_opp_find_freq_floor() and the performance level is set using dev_pm_opp_set_opp(). Praveen Talari (13): soc: qcom: geni-se: Refactor geni_icc_get() and make qup-memory ICC path optional soc: qcom: geni-se: Add geni_icc_set_bw_ab() function soc: qcom: geni-se: Introduce helper API for resource initialization soc: qcom: geni-se: Handle core clk in geni_se_clks_off() and geni_se_clks_on() soc: qcom: geni-se: Add resources activation/deactivation helpers soc: qcom: geni-se: Introduce helper API for attaching power domains soc: qcom: geni-se: Introduce helper APIs for performance control dt-bindings: i2c: Describe SA8255p i2c: qcom-geni: Isolate serial engine setup i2c: qcom-geni: Move resource initialization to separate function i2c: qcom-geni: Use resources helper APIs in runtime PM functions i2c: qcom-geni: Store of_device_id data in driver private struct i2c: qcom-geni: Enable I2C on SA8255p Qualcomm platforms --- v3->v4 - Added a new patch(4/13) to handle core clk as part of geni_se_clks_off/on(). --- .../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 ++++ drivers/i2c/busses/i2c-qcom-geni.c | 303 +++++++++--------- drivers/soc/qcom/qcom-geni-se.c | 265 +++++++++++++-- include/linux/soc/qcom/geni-se.h | 19 ++ 4 files changed, 476 insertions(+), 175 deletions(-) create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml base-commit: 193579fe01389bc21aff0051d13f24e8ea95b47d -- 2.34.1
To manage GENI serial engine resources during runtime power management, drivers currently need to call functions for ICC, clock, and SE resource operations in both suspend and resume paths, resulting in code duplication across drivers. The new geni_se_resources_activate() and geni_se_resources_deactivate() helper APIs addresses this issue by providing a streamlined method to enable or disable all resources based, thereby eliminating redundancy across drivers. Acked-by: Viken Dadhaniya <viken.dadhaniya@oss.qualcomm.com> Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com> --- v3->v4: - Added Acked-by tag. v1->v2: Bjorn: - Remove geni_se_resources_state() API. - Used geni_se_resources_activate() and geni_se_resources_deactivate() to enable/disable resources. --- drivers/i2c/busses/i2c-qcom-geni.c | 28 +++++----------------------- 1 file changed, 5 insertions(+), 23 deletions(-) diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c index 56eebefda75f..4ff84bb0fff5 100644 --- a/drivers/i2c/busses/i2c-qcom-geni.c +++ b/drivers/i2c/busses/i2c-qcom-geni.c @@ -1163,18 +1163,15 @@ static int __maybe_unused geni_i2c_runtime_suspend(struct device *dev) struct geni_i2c_dev *gi2c = dev_get_drvdata(dev); disable_irq(gi2c->irq); - ret = geni_se_resources_off(&gi2c->se); + + ret = geni_se_resources_deactivate(&gi2c->se); if (ret) { enable_irq(gi2c->irq); return ret; - - } else { - gi2c->suspended = 1; } - clk_disable_unprepare(gi2c->core_clk); - - return geni_icc_disable(&gi2c->se); + gi2c->suspended = 1; + return ret; } static int __maybe_unused geni_i2c_runtime_resume(struct device *dev) @@ -1182,28 +1179,13 @@ static int __maybe_unused geni_i2c_runtime_resume(struct device *dev) int ret; struct geni_i2c_dev *gi2c = dev_get_drvdata(dev); - ret = geni_icc_enable(&gi2c->se); + ret = geni_se_resources_activate(&gi2c->se); if (ret) return ret; - ret = clk_prepare_enable(gi2c->core_clk); - if (ret) - goto out_icc_disable; - - ret = geni_se_resources_on(&gi2c->se); - if (ret) - goto out_clk_disable; - enable_irq(gi2c->irq); gi2c->suspended = 0; - return 0; - -out_clk_disable: - clk_disable_unprepare(gi2c->core_clk); -out_icc_disable: - geni_icc_disable(&gi2c->se); - return ret; } -- 2.34.1
{ "author": "Praveen Talari <praveen.talari@oss.qualcomm.com>", "date": "Mon, 2 Feb 2026 23:39:20 +0530", "thread_id": "20260202180922.1692428-12-praveen.talari@oss.qualcomm.com.mbox.gz" }
lkml
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
The Qualcomm automotive SA8255p SoC relies on firmware to configure platform resources, including clocks, interconnects and TLMM. The driver requests resources operations over SCMI using power and performance protocols. The SCMI power protocol enables or disables resources like clocks, interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs, such as resume/suspend, to control power states(on/off). The SCMI performance protocol manages I2C frequency, with each frequency rate represented by a performance level. The driver uses geni_se_set_perf_opp() API to request the desired frequency rate.. As part of geni_se_set_perf_opp(), the OPP for the requested frequency is obtained using dev_pm_opp_find_freq_floor() and the performance level is set using dev_pm_opp_set_opp(). Praveen Talari (13): soc: qcom: geni-se: Refactor geni_icc_get() and make qup-memory ICC path optional soc: qcom: geni-se: Add geni_icc_set_bw_ab() function soc: qcom: geni-se: Introduce helper API for resource initialization soc: qcom: geni-se: Handle core clk in geni_se_clks_off() and geni_se_clks_on() soc: qcom: geni-se: Add resources activation/deactivation helpers soc: qcom: geni-se: Introduce helper API for attaching power domains soc: qcom: geni-se: Introduce helper APIs for performance control dt-bindings: i2c: Describe SA8255p i2c: qcom-geni: Isolate serial engine setup i2c: qcom-geni: Move resource initialization to separate function i2c: qcom-geni: Use resources helper APIs in runtime PM functions i2c: qcom-geni: Store of_device_id data in driver private struct i2c: qcom-geni: Enable I2C on SA8255p Qualcomm platforms --- v3->v4 - Added a new patch(4/13) to handle core clk as part of geni_se_clks_off/on(). --- .../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 ++++ drivers/i2c/busses/i2c-qcom-geni.c | 303 +++++++++--------- drivers/soc/qcom/qcom-geni-se.c | 265 +++++++++++++-- include/linux/soc/qcom/geni-se.h | 19 ++ 4 files changed, 476 insertions(+), 175 deletions(-) create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml base-commit: 193579fe01389bc21aff0051d13f24e8ea95b47d -- 2.34.1
To avoid repeatedly fetching and checking platform data across various functions, store the struct of_device_id data directly in the i2c private structure. This change enhances code maintainability and reduces redundancy. Acked-by: Viken Dadhaniya <viken.dadhaniya@oss.qualcomm.com> Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com> --- v3->v4 - Added Acked-by tag. Konrad - Removed icc_ddr from platfrom data struct --- drivers/i2c/busses/i2c-qcom-geni.c | 30 ++++++++++++++---------------- 1 file changed, 14 insertions(+), 16 deletions(-) diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c index 4ff84bb0fff5..8fd62d659c2a 100644 --- a/drivers/i2c/busses/i2c-qcom-geni.c +++ b/drivers/i2c/busses/i2c-qcom-geni.c @@ -77,6 +77,12 @@ enum geni_i2c_err_code { #define XFER_TIMEOUT HZ #define RST_TIMEOUT HZ +struct geni_i2c_desc { + bool has_core_clk; + bool no_dma_support; + unsigned int tx_fifo_depth; +}; + #define QCOM_I2C_MIN_NUM_OF_MSGS_MULTI_DESC 2 /** @@ -122,13 +128,7 @@ struct geni_i2c_dev { bool is_tx_multi_desc_xfer; u32 num_msgs; struct geni_i2c_gpi_multi_desc_xfer i2c_multi_desc_config; -}; - -struct geni_i2c_desc { - bool has_core_clk; - char *icc_ddr; - bool no_dma_support; - unsigned int tx_fifo_depth; + const struct geni_i2c_desc *dev_data; }; struct geni_i2c_err_log { @@ -979,7 +979,6 @@ static int setup_gpi_dma(struct geni_i2c_dev *gi2c) static int geni_i2c_init(struct geni_i2c_dev *gi2c) { - const struct geni_i2c_desc *desc = NULL; u32 proto, tx_depth; bool fifo_disable; int ret; @@ -1002,8 +1001,7 @@ static int geni_i2c_init(struct geni_i2c_dev *gi2c) goto err; } - desc = device_get_match_data(gi2c->se.dev); - if (desc && desc->no_dma_support) { + if (gi2c->dev_data->no_dma_support) { fifo_disable = false; gi2c->no_dma = true; } else { @@ -1023,8 +1021,8 @@ static int geni_i2c_init(struct geni_i2c_dev *gi2c) tx_depth = geni_se_get_tx_fifo_depth(&gi2c->se); /* I2C Master Hub Serial Elements doesn't have the HW_PARAM_0 register */ - if (!tx_depth && desc) - tx_depth = desc->tx_fifo_depth; + if (!tx_depth && gi2c->dev_data->has_core_clk) + tx_depth = gi2c->dev_data->tx_fifo_depth; if (!tx_depth) { ret = dev_err_probe(gi2c->se.dev, -EINVAL, @@ -1067,7 +1065,6 @@ static int geni_i2c_probe(struct platform_device *pdev) struct geni_i2c_dev *gi2c; int ret; struct device *dev = &pdev->dev; - const struct geni_i2c_desc *desc = NULL; gi2c = devm_kzalloc(dev, sizeof(*gi2c), GFP_KERNEL); if (!gi2c) @@ -1079,7 +1076,7 @@ static int geni_i2c_probe(struct platform_device *pdev) if (IS_ERR(gi2c->se.base)) return PTR_ERR(gi2c->se.base); - desc = device_get_match_data(&pdev->dev); + gi2c->dev_data = device_get_match_data(&pdev->dev); ret = device_property_read_u32(dev, "clock-frequency", &gi2c->clk_freq_out); @@ -1218,15 +1215,16 @@ static const struct dev_pm_ops geni_i2c_pm_ops = { NULL) }; +static const struct geni_i2c_desc geni_i2c = {}; + static const struct geni_i2c_desc i2c_master_hub = { .has_core_clk = true, - .icc_ddr = NULL, .no_dma_support = true, .tx_fifo_depth = 16, }; static const struct of_device_id geni_i2c_dt_match[] = { - { .compatible = "qcom,geni-i2c" }, + { .compatible = "qcom,geni-i2c", .data = &geni_i2c }, { .compatible = "qcom,geni-i2c-master-hub", .data = &i2c_master_hub }, {} }; -- 2.34.1
{ "author": "Praveen Talari <praveen.talari@oss.qualcomm.com>", "date": "Mon, 2 Feb 2026 23:39:21 +0530", "thread_id": "20260202180922.1692428-12-praveen.talari@oss.qualcomm.com.mbox.gz" }
lkml
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
The Qualcomm automotive SA8255p SoC relies on firmware to configure platform resources, including clocks, interconnects and TLMM. The driver requests resources operations over SCMI using power and performance protocols. The SCMI power protocol enables or disables resources like clocks, interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs, such as resume/suspend, to control power states(on/off). The SCMI performance protocol manages I2C frequency, with each frequency rate represented by a performance level. The driver uses geni_se_set_perf_opp() API to request the desired frequency rate.. As part of geni_se_set_perf_opp(), the OPP for the requested frequency is obtained using dev_pm_opp_find_freq_floor() and the performance level is set using dev_pm_opp_set_opp(). Praveen Talari (13): soc: qcom: geni-se: Refactor geni_icc_get() and make qup-memory ICC path optional soc: qcom: geni-se: Add geni_icc_set_bw_ab() function soc: qcom: geni-se: Introduce helper API for resource initialization soc: qcom: geni-se: Handle core clk in geni_se_clks_off() and geni_se_clks_on() soc: qcom: geni-se: Add resources activation/deactivation helpers soc: qcom: geni-se: Introduce helper API for attaching power domains soc: qcom: geni-se: Introduce helper APIs for performance control dt-bindings: i2c: Describe SA8255p i2c: qcom-geni: Isolate serial engine setup i2c: qcom-geni: Move resource initialization to separate function i2c: qcom-geni: Use resources helper APIs in runtime PM functions i2c: qcom-geni: Store of_device_id data in driver private struct i2c: qcom-geni: Enable I2C on SA8255p Qualcomm platforms --- v3->v4 - Added a new patch(4/13) to handle core clk as part of geni_se_clks_off/on(). --- .../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 ++++ drivers/i2c/busses/i2c-qcom-geni.c | 303 +++++++++--------- drivers/soc/qcom/qcom-geni-se.c | 265 +++++++++++++-- include/linux/soc/qcom/geni-se.h | 19 ++ 4 files changed, 476 insertions(+), 175 deletions(-) create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml base-commit: 193579fe01389bc21aff0051d13f24e8ea95b47d -- 2.34.1
The Qualcomm automotive SA8255p SoC relies on firmware to configure platform resources, including clocks, interconnects and TLMM. The driver requests resources operations over SCMI using power and performance protocols. The SCMI power protocol enables or disables resources like clocks, interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs, such as resume/suspend, to control power on/off. The SCMI performance protocol manages I2C frequency, with each frequency rate represented by a performance level. The driver uses geni_se_set_perf_opp() API to request the desired frequency rate.. As part of geni_se_set_perf_opp(), the OPP for the requested frequency is obtained using dev_pm_opp_find_freq_floor() and the performance level is set using dev_pm_opp_set_opp(). Acked-by: Viken Dadhaniya <viken.dadhaniya@oss.qualcomm.com> Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com> --- v3->v4: - Added Acked-by tag. V1->v2: - Initialized ret to "0" in resume/suspend callbacks. Bjorn: - Used seperate APIs for the resouces enable/disable. --- drivers/i2c/busses/i2c-qcom-geni.c | 56 ++++++++++++++++++++++-------- 1 file changed, 42 insertions(+), 14 deletions(-) diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c index 8fd62d659c2a..2ad31e412b96 100644 --- a/drivers/i2c/busses/i2c-qcom-geni.c +++ b/drivers/i2c/busses/i2c-qcom-geni.c @@ -81,6 +81,10 @@ struct geni_i2c_desc { bool has_core_clk; bool no_dma_support; unsigned int tx_fifo_depth; + int (*resources_init)(struct geni_se *se); + int (*set_rate)(struct geni_se *se, unsigned long freq); + int (*power_on)(struct geni_se *se); + int (*power_off)(struct geni_se *se); }; #define QCOM_I2C_MIN_NUM_OF_MSGS_MULTI_DESC 2 @@ -203,8 +207,9 @@ static int geni_i2c_clk_map_idx(struct geni_i2c_dev *gi2c) return -EINVAL; } -static void qcom_geni_i2c_conf(struct geni_i2c_dev *gi2c) +static int qcom_geni_i2c_conf(struct geni_se *se, unsigned long freq) { + struct geni_i2c_dev *gi2c = dev_get_drvdata(se->dev); const struct geni_i2c_clk_fld *itr = gi2c->clk_fld; u32 val; @@ -217,6 +222,7 @@ static void qcom_geni_i2c_conf(struct geni_i2c_dev *gi2c) val |= itr->t_low_cnt << LOW_COUNTER_SHFT; val |= itr->t_cycle_cnt; writel_relaxed(val, gi2c->se.base + SE_I2C_SCL_COUNTERS); + return 0; } static void geni_i2c_err_misc(struct geni_i2c_dev *gi2c) @@ -908,7 +914,9 @@ static int geni_i2c_xfer(struct i2c_adapter *adap, return ret; } - qcom_geni_i2c_conf(gi2c); + ret = gi2c->dev_data->set_rate(&gi2c->se, gi2c->clk_freq_out); + if (ret) + return ret; if (gi2c->gpi_mode) ret = geni_i2c_gpi_xfer(gi2c, msgs, num); @@ -1043,8 +1051,9 @@ static int geni_i2c_init(struct geni_i2c_dev *gi2c) return ret; } -static int geni_i2c_resources_init(struct geni_i2c_dev *gi2c) +static int geni_i2c_resources_init(struct geni_se *se) { + struct geni_i2c_dev *gi2c = dev_get_drvdata(se->dev); int ret; ret = geni_se_resources_init(&gi2c->se); @@ -1097,7 +1106,7 @@ static int geni_i2c_probe(struct platform_device *pdev) spin_lock_init(&gi2c->lock); platform_set_drvdata(pdev, gi2c); - ret = geni_i2c_resources_init(gi2c); + ret = gi2c->dev_data->resources_init(&gi2c->se); if (ret) return ret; @@ -1156,15 +1165,17 @@ static void geni_i2c_shutdown(struct platform_device *pdev) static int __maybe_unused geni_i2c_runtime_suspend(struct device *dev) { - int ret; + int ret = 0; struct geni_i2c_dev *gi2c = dev_get_drvdata(dev); disable_irq(gi2c->irq); - ret = geni_se_resources_deactivate(&gi2c->se); - if (ret) { - enable_irq(gi2c->irq); - return ret; + if (gi2c->dev_data->power_off) { + ret = gi2c->dev_data->power_off(&gi2c->se); + if (ret) { + enable_irq(gi2c->irq); + return ret; + } } gi2c->suspended = 1; @@ -1173,12 +1184,14 @@ static int __maybe_unused geni_i2c_runtime_suspend(struct device *dev) static int __maybe_unused geni_i2c_runtime_resume(struct device *dev) { - int ret; + int ret = 0; struct geni_i2c_dev *gi2c = dev_get_drvdata(dev); - ret = geni_se_resources_activate(&gi2c->se); - if (ret) - return ret; + if (gi2c->dev_data->power_on) { + ret = gi2c->dev_data->power_on(&gi2c->se); + if (ret) + return ret; + } enable_irq(gi2c->irq); gi2c->suspended = 0; @@ -1215,17 +1228,32 @@ static const struct dev_pm_ops geni_i2c_pm_ops = { NULL) }; -static const struct geni_i2c_desc geni_i2c = {}; +static const struct geni_i2c_desc geni_i2c = { + .resources_init = geni_i2c_resources_init, + .set_rate = qcom_geni_i2c_conf, + .power_on = geni_se_resources_activate, + .power_off = geni_se_resources_deactivate, +}; static const struct geni_i2c_desc i2c_master_hub = { .has_core_clk = true, .no_dma_support = true, .tx_fifo_depth = 16, + .resources_init = geni_i2c_resources_init, + .set_rate = qcom_geni_i2c_conf, + .power_on = geni_se_resources_activate, + .power_off = geni_se_resources_deactivate, +}; + +static const struct geni_i2c_desc sa8255p_geni_i2c = { + .resources_init = geni_se_domain_attach, + .set_rate = geni_se_set_perf_opp, }; static const struct of_device_id geni_i2c_dt_match[] = { { .compatible = "qcom,geni-i2c", .data = &geni_i2c }, { .compatible = "qcom,geni-i2c-master-hub", .data = &i2c_master_hub }, + { .compatible = "qcom,sa8255p-geni-i2c", .data = &sa8255p_geni_i2c }, {} }; MODULE_DEVICE_TABLE(of, geni_i2c_dt_match); -- 2.34.1
{ "author": "Praveen Talari <praveen.talari@oss.qualcomm.com>", "date": "Mon, 2 Feb 2026 23:39:22 +0530", "thread_id": "20260202180922.1692428-12-praveen.talari@oss.qualcomm.com.mbox.gz" }
lkml
[PATCH] staging: sm750fb: rename Bpp to bpp
Rename the Bpp parameter to bpp to avoid CamelCase, as reported by checkpatch.pl. Signed-off-by: yehudis9982 <y0533159982@gmail.com> --- drivers/staging/sm750fb/sm750_accel.c | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/drivers/staging/sm750fb/sm750_accel.c b/drivers/staging/sm750fb/sm750_accel.c index 046b9282b..866b12c2a 100644 --- a/drivers/staging/sm750fb/sm750_accel.c +++ b/drivers/staging/sm750fb/sm750_accel.c @@ -85,7 +85,7 @@ void sm750_hw_set2dformat(struct lynx_accel *accel, int fmt) } int sm750_hw_fillrect(struct lynx_accel *accel, - u32 base, u32 pitch, u32 Bpp, + u32 base, u32 pitch, u32 bpp, u32 x, u32 y, u32 width, u32 height, u32 color, u32 rop) { @@ -102,14 +102,14 @@ int sm750_hw_fillrect(struct lynx_accel *accel, write_dpr(accel, DE_WINDOW_DESTINATION_BASE, base); /* dpr40 */ write_dpr(accel, DE_PITCH, - ((pitch / Bpp << DE_PITCH_DESTINATION_SHIFT) & + ((pitch / bpp << DE_PITCH_DESTINATION_SHIFT) & DE_PITCH_DESTINATION_MASK) | - (pitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */ + (pitch / bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */ write_dpr(accel, DE_WINDOW_WIDTH, - ((pitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) & + ((pitch / bpp << DE_WINDOW_WIDTH_DST_SHIFT) & DE_WINDOW_WIDTH_DST_MASK) | - (pitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */ + (pitch / bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */ write_dpr(accel, DE_FOREGROUND, color); /* DPR14 */ @@ -138,7 +138,7 @@ int sm750_hw_fillrect(struct lynx_accel *accel, * @sy: Starting y coordinate of source surface * @dBase: Address of destination: offset in frame buffer * @dPitch: Pitch value of destination surface in BYTE - * @Bpp: Color depth of destination surface + * @bpp: Color depth of destination surface * @dx: Starting x coordinate of destination surface * @dy: Starting y coordinate of destination surface * @width: width of rectangle in pixel value @@ -149,7 +149,7 @@ int sm750_hw_copyarea(struct lynx_accel *accel, unsigned int sBase, unsigned int sPitch, unsigned int sx, unsigned int sy, unsigned int dBase, unsigned int dPitch, - unsigned int Bpp, unsigned int dx, unsigned int dy, + unsigned int bpp, unsigned int dx, unsigned int dy, unsigned int width, unsigned int height, unsigned int rop2) { @@ -249,9 +249,9 @@ int sm750_hw_copyarea(struct lynx_accel *accel, * pixel values. Need Byte to pixel conversion. */ write_dpr(accel, DE_PITCH, - ((dPitch / Bpp << DE_PITCH_DESTINATION_SHIFT) & + ((dPitch / bpp << DE_PITCH_DESTINATION_SHIFT) & DE_PITCH_DESTINATION_MASK) | - (sPitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */ + (sPitch / bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */ /* * Screen Window width in Pixels. @@ -259,9 +259,9 @@ int sm750_hw_copyarea(struct lynx_accel *accel, * for a given point. */ write_dpr(accel, DE_WINDOW_WIDTH, - ((dPitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) & + ((dPitch / bpp << DE_WINDOW_WIDTH_DST_SHIFT) & DE_WINDOW_WIDTH_DST_MASK) | - (sPitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */ + (sPitch / bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */ if (accel->de_wait() != 0) return -1; -- 2.43.0
On Mon, Feb 02, 2026 at 04:54:13PM +0200, yehudis9982 wrote: What does "bpp" stand for? Perhaps spell it out further? thanks, greg k-h
{ "author": "Greg KH <gregkh@linuxfoundation.org>", "date": "Mon, 2 Feb 2026 16:01:17 +0100", "thread_id": "20260202171243.133935-1-y0533159982@gmail.com.mbox.gz" }
lkml
[PATCH] staging: sm750fb: rename Bpp to bpp
Rename the Bpp parameter to bpp to avoid CamelCase, as reported by checkpatch.pl. Signed-off-by: yehudis9982 <y0533159982@gmail.com> --- drivers/staging/sm750fb/sm750_accel.c | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/drivers/staging/sm750fb/sm750_accel.c b/drivers/staging/sm750fb/sm750_accel.c index 046b9282b..866b12c2a 100644 --- a/drivers/staging/sm750fb/sm750_accel.c +++ b/drivers/staging/sm750fb/sm750_accel.c @@ -85,7 +85,7 @@ void sm750_hw_set2dformat(struct lynx_accel *accel, int fmt) } int sm750_hw_fillrect(struct lynx_accel *accel, - u32 base, u32 pitch, u32 Bpp, + u32 base, u32 pitch, u32 bpp, u32 x, u32 y, u32 width, u32 height, u32 color, u32 rop) { @@ -102,14 +102,14 @@ int sm750_hw_fillrect(struct lynx_accel *accel, write_dpr(accel, DE_WINDOW_DESTINATION_BASE, base); /* dpr40 */ write_dpr(accel, DE_PITCH, - ((pitch / Bpp << DE_PITCH_DESTINATION_SHIFT) & + ((pitch / bpp << DE_PITCH_DESTINATION_SHIFT) & DE_PITCH_DESTINATION_MASK) | - (pitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */ + (pitch / bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */ write_dpr(accel, DE_WINDOW_WIDTH, - ((pitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) & + ((pitch / bpp << DE_WINDOW_WIDTH_DST_SHIFT) & DE_WINDOW_WIDTH_DST_MASK) | - (pitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */ + (pitch / bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */ write_dpr(accel, DE_FOREGROUND, color); /* DPR14 */ @@ -138,7 +138,7 @@ int sm750_hw_fillrect(struct lynx_accel *accel, * @sy: Starting y coordinate of source surface * @dBase: Address of destination: offset in frame buffer * @dPitch: Pitch value of destination surface in BYTE - * @Bpp: Color depth of destination surface + * @bpp: Color depth of destination surface * @dx: Starting x coordinate of destination surface * @dy: Starting y coordinate of destination surface * @width: width of rectangle in pixel value @@ -149,7 +149,7 @@ int sm750_hw_copyarea(struct lynx_accel *accel, unsigned int sBase, unsigned int sPitch, unsigned int sx, unsigned int sy, unsigned int dBase, unsigned int dPitch, - unsigned int Bpp, unsigned int dx, unsigned int dy, + unsigned int bpp, unsigned int dx, unsigned int dy, unsigned int width, unsigned int height, unsigned int rop2) { @@ -249,9 +249,9 @@ int sm750_hw_copyarea(struct lynx_accel *accel, * pixel values. Need Byte to pixel conversion. */ write_dpr(accel, DE_PITCH, - ((dPitch / Bpp << DE_PITCH_DESTINATION_SHIFT) & + ((dPitch / bpp << DE_PITCH_DESTINATION_SHIFT) & DE_PITCH_DESTINATION_MASK) | - (sPitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */ + (sPitch / bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */ /* * Screen Window width in Pixels. @@ -259,9 +259,9 @@ int sm750_hw_copyarea(struct lynx_accel *accel, * for a given point. */ write_dpr(accel, DE_WINDOW_WIDTH, - ((dPitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) & + ((dPitch / bpp << DE_WINDOW_WIDTH_DST_SHIFT) & DE_WINDOW_WIDTH_DST_MASK) | - (sPitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */ + (sPitch / bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */ if (accel->de_wait() != 0) return -1; -- 2.43.0
Rename the Bpp parameter to bytes_per_pixel for clarity and to avoid CamelCase, as reported by checkpatch.pl. Signed-off-by: yehudis9982 <y0533159982@gmail.com> --- drivers/staging/sm750fb/sm750_accel.c | 26 +++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/drivers/staging/sm750fb/sm750_accel.c b/drivers/staging/sm750fb/sm750_accel.c index 046b9282b..3fe9429e1 100644 --- a/drivers/staging/sm750fb/sm750_accel.c +++ b/drivers/staging/sm750fb/sm750_accel.c @@ -48,7 +48,7 @@ void sm750_hw_de_init(struct lynx_accel *accel) DE_STRETCH_FORMAT_ADDRESSING_MASK | DE_STRETCH_FORMAT_SOURCE_HEIGHT_MASK; - /* DE_STRETCH bpp format need be initialized in setMode routine */ + /* DE_STRETCH bytes_per_pixel format need be initialized in setMode routine */ write_dpr(accel, DE_STRETCH_FORMAT, (read_dpr(accel, DE_STRETCH_FORMAT) & ~clr) | reg); @@ -76,7 +76,7 @@ void sm750_hw_set2dformat(struct lynx_accel *accel, int fmt) { u32 reg; - /* fmt=0,1,2 for 8,16,32,bpp on sm718/750/502 */ + /* fmt=0,1,2 for 8,16,32,bytes_per_pixel on sm718/750/502 */ reg = read_dpr(accel, DE_STRETCH_FORMAT); reg &= ~DE_STRETCH_FORMAT_PIXEL_FORMAT_MASK; reg |= ((fmt << DE_STRETCH_FORMAT_PIXEL_FORMAT_SHIFT) & @@ -85,7 +85,7 @@ void sm750_hw_set2dformat(struct lynx_accel *accel, int fmt) } int sm750_hw_fillrect(struct lynx_accel *accel, - u32 base, u32 pitch, u32 Bpp, + u32 base, u32 pitch, u32 bytes_per_pixel, u32 x, u32 y, u32 width, u32 height, u32 color, u32 rop) { @@ -102,14 +102,14 @@ int sm750_hw_fillrect(struct lynx_accel *accel, write_dpr(accel, DE_WINDOW_DESTINATION_BASE, base); /* dpr40 */ write_dpr(accel, DE_PITCH, - ((pitch / Bpp << DE_PITCH_DESTINATION_SHIFT) & + ((pitch / bytes_per_pixel << DE_PITCH_DESTINATION_SHIFT) & DE_PITCH_DESTINATION_MASK) | - (pitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */ + (pitch / bytes_per_pixel & DE_PITCH_SOURCE_MASK)); /* dpr10 */ write_dpr(accel, DE_WINDOW_WIDTH, - ((pitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) & + ((pitch / bytes_per_pixel << DE_WINDOW_WIDTH_DST_SHIFT) & DE_WINDOW_WIDTH_DST_MASK) | - (pitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */ + (pitch / bytes_per_pixel & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */ write_dpr(accel, DE_FOREGROUND, color); /* DPR14 */ @@ -138,7 +138,7 @@ int sm750_hw_fillrect(struct lynx_accel *accel, * @sy: Starting y coordinate of source surface * @dBase: Address of destination: offset in frame buffer * @dPitch: Pitch value of destination surface in BYTE - * @Bpp: Color depth of destination surface + * @bytes_per_pixel: Bytes per pixel (color depth / 8) of destination surface * @dx: Starting x coordinate of destination surface * @dy: Starting y coordinate of destination surface * @width: width of rectangle in pixel value @@ -149,7 +149,7 @@ int sm750_hw_copyarea(struct lynx_accel *accel, unsigned int sBase, unsigned int sPitch, unsigned int sx, unsigned int sy, unsigned int dBase, unsigned int dPitch, - unsigned int Bpp, unsigned int dx, unsigned int dy, + unsigned int bytes_per_pixel, unsigned int dx, unsigned int dy, unsigned int width, unsigned int height, unsigned int rop2) { @@ -249,9 +249,9 @@ int sm750_hw_copyarea(struct lynx_accel *accel, * pixel values. Need Byte to pixel conversion. */ write_dpr(accel, DE_PITCH, - ((dPitch / Bpp << DE_PITCH_DESTINATION_SHIFT) & + ((dPitch / bytes_per_pixel << DE_PITCH_DESTINATION_SHIFT) & DE_PITCH_DESTINATION_MASK) | - (sPitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */ + (sPitch / bytes_per_pixel & DE_PITCH_SOURCE_MASK)); /* dpr10 */ /* * Screen Window width in Pixels. @@ -259,9 +259,9 @@ int sm750_hw_copyarea(struct lynx_accel *accel, * for a given point. */ write_dpr(accel, DE_WINDOW_WIDTH, - ((dPitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) & + ((dPitch / bytes_per_pixel << DE_WINDOW_WIDTH_DST_SHIFT) & DE_WINDOW_WIDTH_DST_MASK) | - (sPitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */ + (sPitch / bytes_per_pixel & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */ if (accel->de_wait() != 0) return -1; -- 2.43.0
{ "author": "yehudis9982 <y0533159982@gmail.com>", "date": "Mon, 2 Feb 2026 18:46:45 +0200", "thread_id": "20260202171243.133935-1-y0533159982@gmail.com.mbox.gz" }
lkml
[PATCH] staging: sm750fb: rename Bpp to bpp
Rename the Bpp parameter to bpp to avoid CamelCase, as reported by checkpatch.pl. Signed-off-by: yehudis9982 <y0533159982@gmail.com> --- drivers/staging/sm750fb/sm750_accel.c | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/drivers/staging/sm750fb/sm750_accel.c b/drivers/staging/sm750fb/sm750_accel.c index 046b9282b..866b12c2a 100644 --- a/drivers/staging/sm750fb/sm750_accel.c +++ b/drivers/staging/sm750fb/sm750_accel.c @@ -85,7 +85,7 @@ void sm750_hw_set2dformat(struct lynx_accel *accel, int fmt) } int sm750_hw_fillrect(struct lynx_accel *accel, - u32 base, u32 pitch, u32 Bpp, + u32 base, u32 pitch, u32 bpp, u32 x, u32 y, u32 width, u32 height, u32 color, u32 rop) { @@ -102,14 +102,14 @@ int sm750_hw_fillrect(struct lynx_accel *accel, write_dpr(accel, DE_WINDOW_DESTINATION_BASE, base); /* dpr40 */ write_dpr(accel, DE_PITCH, - ((pitch / Bpp << DE_PITCH_DESTINATION_SHIFT) & + ((pitch / bpp << DE_PITCH_DESTINATION_SHIFT) & DE_PITCH_DESTINATION_MASK) | - (pitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */ + (pitch / bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */ write_dpr(accel, DE_WINDOW_WIDTH, - ((pitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) & + ((pitch / bpp << DE_WINDOW_WIDTH_DST_SHIFT) & DE_WINDOW_WIDTH_DST_MASK) | - (pitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */ + (pitch / bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */ write_dpr(accel, DE_FOREGROUND, color); /* DPR14 */ @@ -138,7 +138,7 @@ int sm750_hw_fillrect(struct lynx_accel *accel, * @sy: Starting y coordinate of source surface * @dBase: Address of destination: offset in frame buffer * @dPitch: Pitch value of destination surface in BYTE - * @Bpp: Color depth of destination surface + * @bpp: Color depth of destination surface * @dx: Starting x coordinate of destination surface * @dy: Starting y coordinate of destination surface * @width: width of rectangle in pixel value @@ -149,7 +149,7 @@ int sm750_hw_copyarea(struct lynx_accel *accel, unsigned int sBase, unsigned int sPitch, unsigned int sx, unsigned int sy, unsigned int dBase, unsigned int dPitch, - unsigned int Bpp, unsigned int dx, unsigned int dy, + unsigned int bpp, unsigned int dx, unsigned int dy, unsigned int width, unsigned int height, unsigned int rop2) { @@ -249,9 +249,9 @@ int sm750_hw_copyarea(struct lynx_accel *accel, * pixel values. Need Byte to pixel conversion. */ write_dpr(accel, DE_PITCH, - ((dPitch / Bpp << DE_PITCH_DESTINATION_SHIFT) & + ((dPitch / bpp << DE_PITCH_DESTINATION_SHIFT) & DE_PITCH_DESTINATION_MASK) | - (sPitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */ + (sPitch / bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */ /* * Screen Window width in Pixels. @@ -259,9 +259,9 @@ int sm750_hw_copyarea(struct lynx_accel *accel, * for a given point. */ write_dpr(accel, DE_WINDOW_WIDTH, - ((dPitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) & + ((dPitch / bpp << DE_WINDOW_WIDTH_DST_SHIFT) & DE_WINDOW_WIDTH_DST_MASK) | - (sPitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */ + (sPitch / bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */ if (accel->de_wait() != 0) return -1; -- 2.43.0
Rename the Bpp parameter to bytes_per_pixel for clarity and to avoid CamelCase, as reported by checkpatch.pl. Signed-off-by: yehudis9982 <y0533159982@gmail.com> --- drivers/staging/sm750fb/sm750_accel.c | 26 +++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/drivers/staging/sm750fb/sm750_accel.c b/drivers/staging/sm750fb/sm750_accel.c index 046b9282b..3fe9429e1 100644 --- a/drivers/staging/sm750fb/sm750_accel.c +++ b/drivers/staging/sm750fb/sm750_accel.c @@ -48,7 +48,7 @@ void sm750_hw_de_init(struct lynx_accel *accel) DE_STRETCH_FORMAT_ADDRESSING_MASK | DE_STRETCH_FORMAT_SOURCE_HEIGHT_MASK; - /* DE_STRETCH bpp format need be initialized in setMode routine */ + /* DE_STRETCH bytes_per_pixel format need be initialized in setMode routine */ write_dpr(accel, DE_STRETCH_FORMAT, (read_dpr(accel, DE_STRETCH_FORMAT) & ~clr) | reg); @@ -76,7 +76,7 @@ void sm750_hw_set2dformat(struct lynx_accel *accel, int fmt) { u32 reg; - /* fmt=0,1,2 for 8,16,32,bpp on sm718/750/502 */ + /* fmt=0,1,2 for 8,16,32,bytes_per_pixel on sm718/750/502 */ reg = read_dpr(accel, DE_STRETCH_FORMAT); reg &= ~DE_STRETCH_FORMAT_PIXEL_FORMAT_MASK; reg |= ((fmt << DE_STRETCH_FORMAT_PIXEL_FORMAT_SHIFT) & @@ -85,7 +85,7 @@ void sm750_hw_set2dformat(struct lynx_accel *accel, int fmt) } int sm750_hw_fillrect(struct lynx_accel *accel, - u32 base, u32 pitch, u32 Bpp, + u32 base, u32 pitch, u32 bytes_per_pixel, u32 x, u32 y, u32 width, u32 height, u32 color, u32 rop) { @@ -102,14 +102,14 @@ int sm750_hw_fillrect(struct lynx_accel *accel, write_dpr(accel, DE_WINDOW_DESTINATION_BASE, base); /* dpr40 */ write_dpr(accel, DE_PITCH, - ((pitch / Bpp << DE_PITCH_DESTINATION_SHIFT) & + ((pitch / bytes_per_pixel << DE_PITCH_DESTINATION_SHIFT) & DE_PITCH_DESTINATION_MASK) | - (pitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */ + (pitch / bytes_per_pixel & DE_PITCH_SOURCE_MASK)); /* dpr10 */ write_dpr(accel, DE_WINDOW_WIDTH, - ((pitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) & + ((pitch / bytes_per_pixel << DE_WINDOW_WIDTH_DST_SHIFT) & DE_WINDOW_WIDTH_DST_MASK) | - (pitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */ + (pitch / bytes_per_pixel & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */ write_dpr(accel, DE_FOREGROUND, color); /* DPR14 */ @@ -138,7 +138,7 @@ int sm750_hw_fillrect(struct lynx_accel *accel, * @sy: Starting y coordinate of source surface * @dBase: Address of destination: offset in frame buffer * @dPitch: Pitch value of destination surface in BYTE - * @Bpp: Color depth of destination surface + * @bytes_per_pixel: Bytes per pixel (color depth / 8) of destination surface * @dx: Starting x coordinate of destination surface * @dy: Starting y coordinate of destination surface * @width: width of rectangle in pixel value @@ -149,7 +149,7 @@ int sm750_hw_copyarea(struct lynx_accel *accel, unsigned int sBase, unsigned int sPitch, unsigned int sx, unsigned int sy, unsigned int dBase, unsigned int dPitch, - unsigned int Bpp, unsigned int dx, unsigned int dy, + unsigned int bytes_per_pixel, unsigned int dx, unsigned int dy, unsigned int width, unsigned int height, unsigned int rop2) { @@ -249,9 +249,9 @@ int sm750_hw_copyarea(struct lynx_accel *accel, * pixel values. Need Byte to pixel conversion. */ write_dpr(accel, DE_PITCH, - ((dPitch / Bpp << DE_PITCH_DESTINATION_SHIFT) & + ((dPitch / bytes_per_pixel << DE_PITCH_DESTINATION_SHIFT) & DE_PITCH_DESTINATION_MASK) | - (sPitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */ + (sPitch / bytes_per_pixel & DE_PITCH_SOURCE_MASK)); /* dpr10 */ /* * Screen Window width in Pixels. @@ -259,9 +259,9 @@ int sm750_hw_copyarea(struct lynx_accel *accel, * for a given point. */ write_dpr(accel, DE_WINDOW_WIDTH, - ((dPitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) & + ((dPitch / bytes_per_pixel << DE_WINDOW_WIDTH_DST_SHIFT) & DE_WINDOW_WIDTH_DST_MASK) | - (sPitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */ + (sPitch / bytes_per_pixel & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */ if (accel->de_wait() != 0) return -1; -- 2.43.0
{ "author": "yehudis9982 <y0533159982@gmail.com>", "date": "Mon, 2 Feb 2026 18:57:18 +0200", "thread_id": "20260202171243.133935-1-y0533159982@gmail.com.mbox.gz" }
lkml
[PATCH] staging: sm750fb: rename Bpp to bpp
Rename the Bpp parameter to bpp to avoid CamelCase, as reported by checkpatch.pl. Signed-off-by: yehudis9982 <y0533159982@gmail.com> --- drivers/staging/sm750fb/sm750_accel.c | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/drivers/staging/sm750fb/sm750_accel.c b/drivers/staging/sm750fb/sm750_accel.c index 046b9282b..866b12c2a 100644 --- a/drivers/staging/sm750fb/sm750_accel.c +++ b/drivers/staging/sm750fb/sm750_accel.c @@ -85,7 +85,7 @@ void sm750_hw_set2dformat(struct lynx_accel *accel, int fmt) } int sm750_hw_fillrect(struct lynx_accel *accel, - u32 base, u32 pitch, u32 Bpp, + u32 base, u32 pitch, u32 bpp, u32 x, u32 y, u32 width, u32 height, u32 color, u32 rop) { @@ -102,14 +102,14 @@ int sm750_hw_fillrect(struct lynx_accel *accel, write_dpr(accel, DE_WINDOW_DESTINATION_BASE, base); /* dpr40 */ write_dpr(accel, DE_PITCH, - ((pitch / Bpp << DE_PITCH_DESTINATION_SHIFT) & + ((pitch / bpp << DE_PITCH_DESTINATION_SHIFT) & DE_PITCH_DESTINATION_MASK) | - (pitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */ + (pitch / bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */ write_dpr(accel, DE_WINDOW_WIDTH, - ((pitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) & + ((pitch / bpp << DE_WINDOW_WIDTH_DST_SHIFT) & DE_WINDOW_WIDTH_DST_MASK) | - (pitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */ + (pitch / bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */ write_dpr(accel, DE_FOREGROUND, color); /* DPR14 */ @@ -138,7 +138,7 @@ int sm750_hw_fillrect(struct lynx_accel *accel, * @sy: Starting y coordinate of source surface * @dBase: Address of destination: offset in frame buffer * @dPitch: Pitch value of destination surface in BYTE - * @Bpp: Color depth of destination surface + * @bpp: Color depth of destination surface * @dx: Starting x coordinate of destination surface * @dy: Starting y coordinate of destination surface * @width: width of rectangle in pixel value @@ -149,7 +149,7 @@ int sm750_hw_copyarea(struct lynx_accel *accel, unsigned int sBase, unsigned int sPitch, unsigned int sx, unsigned int sy, unsigned int dBase, unsigned int dPitch, - unsigned int Bpp, unsigned int dx, unsigned int dy, + unsigned int bpp, unsigned int dx, unsigned int dy, unsigned int width, unsigned int height, unsigned int rop2) { @@ -249,9 +249,9 @@ int sm750_hw_copyarea(struct lynx_accel *accel, * pixel values. Need Byte to pixel conversion. */ write_dpr(accel, DE_PITCH, - ((dPitch / Bpp << DE_PITCH_DESTINATION_SHIFT) & + ((dPitch / bpp << DE_PITCH_DESTINATION_SHIFT) & DE_PITCH_DESTINATION_MASK) | - (sPitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */ + (sPitch / bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */ /* * Screen Window width in Pixels. @@ -259,9 +259,9 @@ int sm750_hw_copyarea(struct lynx_accel *accel, * for a given point. */ write_dpr(accel, DE_WINDOW_WIDTH, - ((dPitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) & + ((dPitch / bpp << DE_WINDOW_WIDTH_DST_SHIFT) & DE_WINDOW_WIDTH_DST_MASK) | - (sPitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */ + (sPitch / bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */ if (accel->de_wait() != 0) return -1; -- 2.43.0
Rename the Bpp parameter to bytes_per_pixel for clarity and to avoid CamelCase, as reported by checkpatch.pl. Signed-off-by: yehudis9982 <y0533159982@gmail.com> --- drivers/staging/sm750fb/sm750_accel.c | 26 +++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/drivers/staging/sm750fb/sm750_accel.c b/drivers/staging/sm750fb/sm750_accel.c index 046b9282b..3fe9429e1 100644 --- a/drivers/staging/sm750fb/sm750_accel.c +++ b/drivers/staging/sm750fb/sm750_accel.c @@ -48,7 +48,7 @@ void sm750_hw_de_init(struct lynx_accel *accel) DE_STRETCH_FORMAT_ADDRESSING_MASK | DE_STRETCH_FORMAT_SOURCE_HEIGHT_MASK; - /* DE_STRETCH bpp format need be initialized in setMode routine */ + /* DE_STRETCH bytes_per_pixel format need be initialized in setMode routine */ write_dpr(accel, DE_STRETCH_FORMAT, (read_dpr(accel, DE_STRETCH_FORMAT) & ~clr) | reg); @@ -76,7 +76,7 @@ void sm750_hw_set2dformat(struct lynx_accel *accel, int fmt) { u32 reg; - /* fmt=0,1,2 for 8,16,32,bpp on sm718/750/502 */ + /* fmt=0,1,2 for 8,16,32,bytes_per_pixel on sm718/750/502 */ reg = read_dpr(accel, DE_STRETCH_FORMAT); reg &= ~DE_STRETCH_FORMAT_PIXEL_FORMAT_MASK; reg |= ((fmt << DE_STRETCH_FORMAT_PIXEL_FORMAT_SHIFT) & @@ -85,7 +85,7 @@ void sm750_hw_set2dformat(struct lynx_accel *accel, int fmt) } int sm750_hw_fillrect(struct lynx_accel *accel, - u32 base, u32 pitch, u32 Bpp, + u32 base, u32 pitch, u32 bytes_per_pixel, u32 x, u32 y, u32 width, u32 height, u32 color, u32 rop) { @@ -102,14 +102,14 @@ int sm750_hw_fillrect(struct lynx_accel *accel, write_dpr(accel, DE_WINDOW_DESTINATION_BASE, base); /* dpr40 */ write_dpr(accel, DE_PITCH, - ((pitch / Bpp << DE_PITCH_DESTINATION_SHIFT) & + ((pitch / bytes_per_pixel << DE_PITCH_DESTINATION_SHIFT) & DE_PITCH_DESTINATION_MASK) | - (pitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */ + (pitch / bytes_per_pixel & DE_PITCH_SOURCE_MASK)); /* dpr10 */ write_dpr(accel, DE_WINDOW_WIDTH, - ((pitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) & + ((pitch / bytes_per_pixel << DE_WINDOW_WIDTH_DST_SHIFT) & DE_WINDOW_WIDTH_DST_MASK) | - (pitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */ + (pitch / bytes_per_pixel & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */ write_dpr(accel, DE_FOREGROUND, color); /* DPR14 */ @@ -138,7 +138,7 @@ int sm750_hw_fillrect(struct lynx_accel *accel, * @sy: Starting y coordinate of source surface * @dBase: Address of destination: offset in frame buffer * @dPitch: Pitch value of destination surface in BYTE - * @Bpp: Color depth of destination surface + * @bytes_per_pixel: Bytes per pixel (color depth / 8) of destination surface * @dx: Starting x coordinate of destination surface * @dy: Starting y coordinate of destination surface * @width: width of rectangle in pixel value @@ -149,7 +149,7 @@ int sm750_hw_copyarea(struct lynx_accel *accel, unsigned int sBase, unsigned int sPitch, unsigned int sx, unsigned int sy, unsigned int dBase, unsigned int dPitch, - unsigned int Bpp, unsigned int dx, unsigned int dy, + unsigned int bytes_per_pixel, unsigned int dx, unsigned int dy, unsigned int width, unsigned int height, unsigned int rop2) { @@ -249,9 +249,9 @@ int sm750_hw_copyarea(struct lynx_accel *accel, * pixel values. Need Byte to pixel conversion. */ write_dpr(accel, DE_PITCH, - ((dPitch / Bpp << DE_PITCH_DESTINATION_SHIFT) & + ((dPitch / bytes_per_pixel << DE_PITCH_DESTINATION_SHIFT) & DE_PITCH_DESTINATION_MASK) | - (sPitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */ + (sPitch / bytes_per_pixel & DE_PITCH_SOURCE_MASK)); /* dpr10 */ /* * Screen Window width in Pixels. @@ -259,9 +259,9 @@ int sm750_hw_copyarea(struct lynx_accel *accel, * for a given point. */ write_dpr(accel, DE_WINDOW_WIDTH, - ((dPitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) & + ((dPitch / bytes_per_pixel << DE_WINDOW_WIDTH_DST_SHIFT) & DE_WINDOW_WIDTH_DST_MASK) | - (sPitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */ + (sPitch / bytes_per_pixel & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */ if (accel->de_wait() != 0) return -1; -- 2.43.0
{ "author": "yehudis9982 <y0533159982@gmail.com>", "date": "Mon, 2 Feb 2026 19:12:43 +0200", "thread_id": "20260202171243.133935-1-y0533159982@gmail.com.mbox.gz" }
lkml
[PATCH v3 0/3] Convert 64-bit x86/mm/pat to ptdescs
x86/mm/pat should be using ptdescs. One line has already been converted to pagetable_free(), while the allocation sites use get_free_pages(). This causes issues separately allocating ptdescs from struct page. These patches convert the allocation/free sites to use ptdescs. In the short term, this helps enable Matthew's work to allocate frozen pagetables[1]. And in the long term, this will help us cleanly split ptdesc allocations from struct page. The pgd_list should also be using ptdescs (for 32bit in this file). This can be done in a different patchset since there's other users of pgd_list that still need to be converted. [1] https://lore.kernel.org/linux-mm/20251113140448.1814860-1-willy@infradead.org/ [2] https://lore.kernel.org/linux-mm/20251020001652.2116669-1-willy@infradead.org/ ------ I've also tested this on a tree that separately allocates ptdescs. That didn't find any lingering alloc/free issues. Based on current mm-new. v3: - Move comment regarding 32-bit conversions into the cover letter - Correct the handling for the pagetable_alloc() error path Vishal Moola (Oracle) (3): x86/mm/pat: Convert pte code to use ptdescs x86/mm/pat: Convert pmd code to use ptdescs x86/mm/pat: Convert split_large_page() to use ptdescs arch/x86/mm/pat/set_memory.c | 56 +++++++++++++++++++++--------------- 1 file changed, 33 insertions(+), 23 deletions(-) -- 2.52.0
In order to separately allocate ptdescs from pages, we need all allocation and free sites to use the appropriate functions. Convert these pte allocation/free sites to use ptdescs. Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com> --- arch/x86/mm/pat/set_memory.c | 15 +++++++++------ 1 file changed, 9 insertions(+), 6 deletions(-) diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c index 6c6eb486f7a6..f9f9d4ca8e71 100644 --- a/arch/x86/mm/pat/set_memory.c +++ b/arch/x86/mm/pat/set_memory.c @@ -1408,7 +1408,7 @@ static bool try_to_free_pte_page(pte_t *pte) if (!pte_none(pte[i])) return false; - free_page((unsigned long)pte); + pagetable_free(virt_to_ptdesc((void *)pte)); return true; } @@ -1537,12 +1537,15 @@ static void unmap_pud_range(p4d_t *p4d, unsigned long start, unsigned long end) */ } -static int alloc_pte_page(pmd_t *pmd) +static int alloc_pte_ptdesc(pmd_t *pmd) { - pte_t *pte = (pte_t *)get_zeroed_page(GFP_KERNEL); - if (!pte) + pte_t *pte; + struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0); + + if (!ptdesc) return -1; + pte = (pte_t *) ptdesc_address(ptdesc); set_pmd(pmd, __pmd(__pa(pte) | _KERNPG_TABLE)); return 0; } @@ -1600,7 +1603,7 @@ static long populate_pmd(struct cpa_data *cpa, */ pmd = pmd_offset(pud, start); if (pmd_none(*pmd)) - if (alloc_pte_page(pmd)) + if (alloc_pte_ptdesc(pmd)) return -1; populate_pte(cpa, start, pre_end, cur_pages, pmd, pgprot); @@ -1641,7 +1644,7 @@ static long populate_pmd(struct cpa_data *cpa, if (start < end) { pmd = pmd_offset(pud, start); if (pmd_none(*pmd)) - if (alloc_pte_page(pmd)) + if (alloc_pte_ptdesc(pmd)) return -1; populate_pte(cpa, start, end, num_pages - cur_pages, -- 2.52.0
{ "author": "\"Vishal Moola (Oracle)\" <vishal.moola@gmail.com>", "date": "Mon, 2 Feb 2026 09:20:03 -0800", "thread_id": "20260202172005.683870-2-vishal.moola@gmail.com.mbox.gz" }
lkml
[PATCH v3 0/3] Convert 64-bit x86/mm/pat to ptdescs
x86/mm/pat should be using ptdescs. One line has already been converted to pagetable_free(), while the allocation sites use get_free_pages(). This causes issues separately allocating ptdescs from struct page. These patches convert the allocation/free sites to use ptdescs. In the short term, this helps enable Matthew's work to allocate frozen pagetables[1]. And in the long term, this will help us cleanly split ptdesc allocations from struct page. The pgd_list should also be using ptdescs (for 32bit in this file). This can be done in a different patchset since there's other users of pgd_list that still need to be converted. [1] https://lore.kernel.org/linux-mm/20251113140448.1814860-1-willy@infradead.org/ [2] https://lore.kernel.org/linux-mm/20251020001652.2116669-1-willy@infradead.org/ ------ I've also tested this on a tree that separately allocates ptdescs. That didn't find any lingering alloc/free issues. Based on current mm-new. v3: - Move comment regarding 32-bit conversions into the cover letter - Correct the handling for the pagetable_alloc() error path Vishal Moola (Oracle) (3): x86/mm/pat: Convert pte code to use ptdescs x86/mm/pat: Convert pmd code to use ptdescs x86/mm/pat: Convert split_large_page() to use ptdescs arch/x86/mm/pat/set_memory.c | 56 +++++++++++++++++++++--------------- 1 file changed, 33 insertions(+), 23 deletions(-) -- 2.52.0
In order to separately allocate ptdescs from pages, we need all allocation and free sites to use the appropriate functions. split_large_page() allocates a page to be used as a page table. This should be allocating a ptdesc, so convert it. Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com> --- arch/x86/mm/pat/set_memory.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c index 9f531c87531b..52226679d079 100644 --- a/arch/x86/mm/pat/set_memory.c +++ b/arch/x86/mm/pat/set_memory.c @@ -1119,9 +1119,10 @@ static void split_set_pte(struct cpa_data *cpa, pte_t *pte, unsigned long pfn, static int __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address, - struct page *base) + struct ptdesc *ptdesc) { unsigned long lpaddr, lpinc, ref_pfn, pfn, pfninc = 1; + struct page *base = ptdesc_page(ptdesc); pte_t *pbase = (pte_t *)page_address(base); unsigned int i, level; pgprot_t ref_prot; @@ -1226,18 +1227,18 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address, static int split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address) { - struct page *base; + struct ptdesc *ptdesc; if (!debug_pagealloc_enabled()) spin_unlock(&cpa_lock); - base = alloc_pages(GFP_KERNEL, 0); + ptdesc = pagetable_alloc(GFP_KERNEL, 0); if (!debug_pagealloc_enabled()) spin_lock(&cpa_lock); - if (!base) + if (!ptdesc) return -ENOMEM; - if (__split_large_page(cpa, kpte, address, base)) - __free_page(base); + if (__split_large_page(cpa, kpte, address, ptdesc)) + pagetable_free(ptdesc); return 0; } -- 2.52.0
{ "author": "\"Vishal Moola (Oracle)\" <vishal.moola@gmail.com>", "date": "Mon, 2 Feb 2026 09:20:05 -0800", "thread_id": "20260202172005.683870-2-vishal.moola@gmail.com.mbox.gz" }
lkml
[PATCH v3 0/3] Convert 64-bit x86/mm/pat to ptdescs
x86/mm/pat should be using ptdescs. One line has already been converted to pagetable_free(), while the allocation sites use get_free_pages(). This causes issues separately allocating ptdescs from struct page. These patches convert the allocation/free sites to use ptdescs. In the short term, this helps enable Matthew's work to allocate frozen pagetables[1]. And in the long term, this will help us cleanly split ptdesc allocations from struct page. The pgd_list should also be using ptdescs (for 32bit in this file). This can be done in a different patchset since there's other users of pgd_list that still need to be converted. [1] https://lore.kernel.org/linux-mm/20251113140448.1814860-1-willy@infradead.org/ [2] https://lore.kernel.org/linux-mm/20251020001652.2116669-1-willy@infradead.org/ ------ I've also tested this on a tree that separately allocates ptdescs. That didn't find any lingering alloc/free issues. Based on current mm-new. v3: - Move comment regarding 32-bit conversions into the cover letter - Correct the handling for the pagetable_alloc() error path Vishal Moola (Oracle) (3): x86/mm/pat: Convert pte code to use ptdescs x86/mm/pat: Convert pmd code to use ptdescs x86/mm/pat: Convert split_large_page() to use ptdescs arch/x86/mm/pat/set_memory.c | 56 +++++++++++++++++++++--------------- 1 file changed, 33 insertions(+), 23 deletions(-) -- 2.52.0
In order to separately allocate ptdescs from pages, we need all allocation and free sites to use the appropriate functions. Convert these pmd allocation/free sites to use ptdescs. populate_pgd() also allocates pagetables that may later be freed by try_to_free_pmd_page(), so allocate ptdescs there as well. Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com> --- arch/x86/mm/pat/set_memory.c | 28 +++++++++++++++++----------- 1 file changed, 17 insertions(+), 11 deletions(-) diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c index f9f9d4ca8e71..9f531c87531b 100644 --- a/arch/x86/mm/pat/set_memory.c +++ b/arch/x86/mm/pat/set_memory.c @@ -1420,7 +1420,7 @@ static bool try_to_free_pmd_page(pmd_t *pmd) if (!pmd_none(pmd[i])) return false; - free_page((unsigned long)pmd); + pagetable_free(virt_to_ptdesc((void *)pmd)); return true; } @@ -1550,12 +1550,15 @@ static int alloc_pte_ptdesc(pmd_t *pmd) return 0; } -static int alloc_pmd_page(pud_t *pud) +static int alloc_pmd_ptdesc(pud_t *pud) { - pmd_t *pmd = (pmd_t *)get_zeroed_page(GFP_KERNEL); - if (!pmd) + pmd_t *pmd; + struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0); + + if (!ptdesc) return -1; + pmd = (pmd_t *) ptdesc_address(ptdesc); set_pud(pud, __pud(__pa(pmd) | _KERNPG_TABLE)); return 0; } @@ -1625,7 +1628,7 @@ static long populate_pmd(struct cpa_data *cpa, * We cannot use a 1G page so allocate a PMD page if needed. */ if (pud_none(*pud)) - if (alloc_pmd_page(pud)) + if (alloc_pmd_ptdesc(pud)) return -1; pmd = pmd_offset(pud, start); @@ -1681,7 +1684,7 @@ static int populate_pud(struct cpa_data *cpa, unsigned long start, p4d_t *p4d, * Need a PMD page? */ if (pud_none(*pud)) - if (alloc_pmd_page(pud)) + if (alloc_pmd_ptdesc(pud)) return -1; cur_pages = populate_pmd(cpa, start, pre_end, cur_pages, @@ -1718,7 +1721,7 @@ static int populate_pud(struct cpa_data *cpa, unsigned long start, p4d_t *p4d, pud = pud_offset(p4d, start); if (pud_none(*pud)) - if (alloc_pmd_page(pud)) + if (alloc_pmd_ptdesc(pud)) return -1; tmp = populate_pmd(cpa, start, end, cpa->numpages - cur_pages, @@ -1742,14 +1745,16 @@ static int populate_pgd(struct cpa_data *cpa, unsigned long addr) p4d_t *p4d; pgd_t *pgd_entry; long ret; + struct ptdesc *ptdesc; pgd_entry = cpa->pgd + pgd_index(addr); if (pgd_none(*pgd_entry)) { - p4d = (p4d_t *)get_zeroed_page(GFP_KERNEL); - if (!p4d) + ptdesc = pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0); + if (!ptdesc) return -1; + p4d = (p4d_t *) ptdesc_address(ptdesc); set_pgd(pgd_entry, __pgd(__pa(p4d) | _KERNPG_TABLE)); } @@ -1758,10 +1763,11 @@ static int populate_pgd(struct cpa_data *cpa, unsigned long addr) */ p4d = p4d_offset(pgd_entry, addr); if (p4d_none(*p4d)) { - pud = (pud_t *)get_zeroed_page(GFP_KERNEL); - if (!pud) + ptdesc = pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0); + if (!ptdesc) return -1; + pud = (pud_t *) ptdesc_address(ptdesc); set_p4d(p4d, __p4d(__pa(pud) | _KERNPG_TABLE)); } -- 2.52.0
{ "author": "\"Vishal Moola (Oracle)\" <vishal.moola@gmail.com>", "date": "Mon, 2 Feb 2026 09:20:04 -0800", "thread_id": "20260202172005.683870-2-vishal.moola@gmail.com.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
This series introduces Synchronous Ethernet (SyncE) support for the Intel E825-C Ethernet controller. Unlike previous generations where DPLL connections were implicitly assumed, the E825-C architecture relies on the platform firmware (ACPI) to describe the physical connections between the Ethernet controller and external DPLLs (such as the ZL3073x). To accommodate this, the series extends the DPLL subsystem to support firmware node (fwnode) associations, asynchronous discovery via notifiers, and dynamic pin management. Additionally, a significant refactor of the DPLL reference counting logic is included to ensure robustness and debuggability. DPLL Core Extensions: * Firmware Node Association: Pins can now be associated with a struct fwnode_handle after allocation via dpll_pin_fwnode_set(). This allows drivers to link pin objects with their corresponding DT/ACPI nodes. * Asynchronous Notifiers: A raw notifier chain is added to the DPLL core. This allows the Ethernet driver to subscribe to events and react when the platform DPLL driver registers the parent pins, resolving probe ordering dependencies. * Dynamic Indexing: Drivers can now request DPLL_PIN_IDX_UNSPEC to have the core automatically allocate a unique pin index. Reference Counting & Debugging: * Refactor: The reference counting logic in the core is consolidated. Internal list management helpers now automatically handle hold/put operations, removing fragile open-coded logic in the registration paths. * Reference Tracking: A new Kconfig option DPLL_REFCNT_TRACKER is added. This allows developers to instrument and debug reference leaks by recording stack traces for every get/put operation. Driver Updates: * zl3073x: Updated to associate pins with fwnode handles using the new setter and support the 'mux' pin type. * ice: Implements the E825-C specific hardware configuration for SyncE (CGU registers). It utilizes the new notifier and fwnode APIs to dynamically discover and attach to the platform DPLLs. Patch Summary: Patch 1: DPLL Core (fwnode association). Patch 2: Driver zl3073x (Set fwnode). Patch 3-4: DPLL Core (Notifiers and dynamic IDs). Patch 5: Driver zl3073x (Mux type). Patch 6: DPLL Core (Refcount refactor). Patch 7-8: Refcount tracking infrastructure and driver updates. Patch 9: Driver ice (E825-C SyncE logic). Changes in v4: * Fixed documentation and function stub issues found by AI Arkadiusz Kubalewski (1): ice: dpll: Support E825-C SyncE and dynamic pin discovery Ivan Vecera (7): dpll: Allow associating dpll pin with a firmware node dpll: zl3073x: Associate pin with fwnode handle dpll: Support dynamic pin index allocation dpll: zl3073x: Add support for mux pin type dpll: Enhance and consolidate reference counting logic dpll: Add reference count tracking support drivers: Add support for DPLL reference count tracking Petr Oros (1): dpll: Add notifier chain for dpll events drivers/dpll/Kconfig | 15 + drivers/dpll/dpll_core.c | 288 ++++++- drivers/dpll/dpll_core.h | 11 + drivers/dpll/dpll_netlink.c | 6 + drivers/dpll/zl3073x/dpll.c | 15 +- drivers/dpll/zl3073x/dpll.h | 2 + drivers/dpll/zl3073x/prop.c | 2 + drivers/net/ethernet/intel/ice/ice_dpll.c | 755 +++++++++++++++--- drivers/net/ethernet/intel/ice/ice_dpll.h | 30 + drivers/net/ethernet/intel/ice/ice_lib.c | 3 + drivers/net/ethernet/intel/ice/ice_ptp.c | 32 + drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 9 +- drivers/net/ethernet/intel/ice/ice_tspll.c | 217 +++++ drivers/net/ethernet/intel/ice/ice_tspll.h | 13 +- drivers/net/ethernet/intel/ice/ice_type.h | 6 + .../net/ethernet/mellanox/mlx5/core/dpll.c | 16 +- drivers/ptp/ptp_ocp.c | 18 +- include/linux/dpll.h | 59 +- 18 files changed, 1347 insertions(+), 150 deletions(-) -- 2.52.0
Extend the DPLL core to support associating a DPLL pin with a firmware node. This association is required to allow other subsystems (such as network drivers) to locate and request specific DPLL pins defined in the Device Tree or ACPI. * Add a .fwnode field to the struct dpll_pin * Introduce dpll_pin_fwnode_set() helper to allow the provider driver to associate a pin with a fwnode after the pin has been allocated * Introduce fwnode_dpll_pin_find() helper to allow consumers to search for a registered DPLL pin using its associated fwnode handle * Ensure the fwnode reference is properly released in dpll_pin_put() Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev> Signed-off-by: Ivan Vecera <ivecera@redhat.com> --- v4: * fixed fwnode_dpll_pin_find() return value description --- drivers/dpll/dpll_core.c | 49 ++++++++++++++++++++++++++++++++++++++++ drivers/dpll/dpll_core.h | 2 ++ include/linux/dpll.h | 11 +++++++++ 3 files changed, 62 insertions(+) diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c index 8879a72351561..f04ed7195cadd 100644 --- a/drivers/dpll/dpll_core.c +++ b/drivers/dpll/dpll_core.c @@ -10,6 +10,7 @@ #include <linux/device.h> #include <linux/err.h> +#include <linux/property.h> #include <linux/slab.h> #include <linux/string.h> @@ -595,12 +596,60 @@ void dpll_pin_put(struct dpll_pin *pin) xa_destroy(&pin->parent_refs); xa_destroy(&pin->ref_sync_pins); dpll_pin_prop_free(&pin->prop); + fwnode_handle_put(pin->fwnode); kfree_rcu(pin, rcu); } mutex_unlock(&dpll_lock); } EXPORT_SYMBOL_GPL(dpll_pin_put); +/** + * dpll_pin_fwnode_set - set dpll pin firmware node reference + * @pin: pointer to a dpll pin + * @fwnode: firmware node handle + * + * Set firmware node handle for the given dpll pin. + */ +void dpll_pin_fwnode_set(struct dpll_pin *pin, struct fwnode_handle *fwnode) +{ + mutex_lock(&dpll_lock); + fwnode_handle_put(pin->fwnode); /* Drop fwnode previously set */ + pin->fwnode = fwnode_handle_get(fwnode); + mutex_unlock(&dpll_lock); +} +EXPORT_SYMBOL_GPL(dpll_pin_fwnode_set); + +/** + * fwnode_dpll_pin_find - find dpll pin by firmware node reference + * @fwnode: reference to firmware node + * + * Get existing object of a pin that is associated with given firmware node + * reference. + * + * Context: Acquires a lock (dpll_lock) + * Return: + * * valid dpll_pin pointer on success + * * NULL when no such pin exists + */ +struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode) +{ + struct dpll_pin *pin, *ret = NULL; + unsigned long index; + + mutex_lock(&dpll_lock); + xa_for_each(&dpll_pin_xa, index, pin) { + if (pin->fwnode == fwnode) { + ret = pin; + refcount_inc(&ret->refcount); + break; + } + } + mutex_unlock(&dpll_lock); + + return ret; +} +EXPORT_SYMBOL_GPL(fwnode_dpll_pin_find); + static int __dpll_pin_register(struct dpll_device *dpll, struct dpll_pin *pin, const struct dpll_pin_ops *ops, void *priv, void *cookie) diff --git a/drivers/dpll/dpll_core.h b/drivers/dpll/dpll_core.h index 8ce969bbeb64e..d3e17ff0ecef0 100644 --- a/drivers/dpll/dpll_core.h +++ b/drivers/dpll/dpll_core.h @@ -42,6 +42,7 @@ struct dpll_device { * @pin_idx: index of a pin given by dev driver * @clock_id: clock_id of creator * @module: module of creator + * @fwnode: optional reference to firmware node * @dpll_refs: hold referencees to dplls pin was registered with * @parent_refs: hold references to parent pins pin was registered with * @ref_sync_pins: hold references to pins for Reference SYNC feature @@ -54,6 +55,7 @@ struct dpll_pin { u32 pin_idx; u64 clock_id; struct module *module; + struct fwnode_handle *fwnode; struct xarray dpll_refs; struct xarray parent_refs; struct xarray ref_sync_pins; diff --git a/include/linux/dpll.h b/include/linux/dpll.h index c6d0248fa5273..f2e8660e90cdf 100644 --- a/include/linux/dpll.h +++ b/include/linux/dpll.h @@ -16,6 +16,7 @@ struct dpll_device; struct dpll_pin; struct dpll_pin_esync; +struct fwnode_handle; struct dpll_device_ops { int (*mode_get)(const struct dpll_device *dpll, void *dpll_priv, @@ -178,6 +179,8 @@ void dpll_netdev_pin_clear(struct net_device *dev); size_t dpll_netdev_pin_handle_size(const struct net_device *dev); int dpll_netdev_add_pin_handle(struct sk_buff *msg, const struct net_device *dev); + +struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode); #else static inline void dpll_netdev_pin_set(struct net_device *dev, struct dpll_pin *dpll_pin) { } @@ -193,6 +196,12 @@ dpll_netdev_add_pin_handle(struct sk_buff *msg, const struct net_device *dev) { return 0; } + +static inline struct dpll_pin * +fwnode_dpll_pin_find(struct fwnode_handle *fwnode) +{ + return NULL; +} #endif struct dpll_device * @@ -218,6 +227,8 @@ void dpll_pin_unregister(struct dpll_device *dpll, struct dpll_pin *pin, void dpll_pin_put(struct dpll_pin *pin); +void dpll_pin_fwnode_set(struct dpll_pin *pin, struct fwnode_handle *fwnode); + int dpll_pin_on_pin_register(struct dpll_pin *parent, struct dpll_pin *pin, const struct dpll_pin_ops *ops, void *priv); -- 2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:30 +0100", "thread_id": "20260202171638.17427-5-ivecera@redhat.com.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
This series introduces Synchronous Ethernet (SyncE) support for the Intel E825-C Ethernet controller. Unlike previous generations where DPLL connections were implicitly assumed, the E825-C architecture relies on the platform firmware (ACPI) to describe the physical connections between the Ethernet controller and external DPLLs (such as the ZL3073x). To accommodate this, the series extends the DPLL subsystem to support firmware node (fwnode) associations, asynchronous discovery via notifiers, and dynamic pin management. Additionally, a significant refactor of the DPLL reference counting logic is included to ensure robustness and debuggability. DPLL Core Extensions: * Firmware Node Association: Pins can now be associated with a struct fwnode_handle after allocation via dpll_pin_fwnode_set(). This allows drivers to link pin objects with their corresponding DT/ACPI nodes. * Asynchronous Notifiers: A raw notifier chain is added to the DPLL core. This allows the Ethernet driver to subscribe to events and react when the platform DPLL driver registers the parent pins, resolving probe ordering dependencies. * Dynamic Indexing: Drivers can now request DPLL_PIN_IDX_UNSPEC to have the core automatically allocate a unique pin index. Reference Counting & Debugging: * Refactor: The reference counting logic in the core is consolidated. Internal list management helpers now automatically handle hold/put operations, removing fragile open-coded logic in the registration paths. * Reference Tracking: A new Kconfig option DPLL_REFCNT_TRACKER is added. This allows developers to instrument and debug reference leaks by recording stack traces for every get/put operation. Driver Updates: * zl3073x: Updated to associate pins with fwnode handles using the new setter and support the 'mux' pin type. * ice: Implements the E825-C specific hardware configuration for SyncE (CGU registers). It utilizes the new notifier and fwnode APIs to dynamically discover and attach to the platform DPLLs. Patch Summary: Patch 1: DPLL Core (fwnode association). Patch 2: Driver zl3073x (Set fwnode). Patch 3-4: DPLL Core (Notifiers and dynamic IDs). Patch 5: Driver zl3073x (Mux type). Patch 6: DPLL Core (Refcount refactor). Patch 7-8: Refcount tracking infrastructure and driver updates. Patch 9: Driver ice (E825-C SyncE logic). Changes in v4: * Fixed documentation and function stub issues found by AI Arkadiusz Kubalewski (1): ice: dpll: Support E825-C SyncE and dynamic pin discovery Ivan Vecera (7): dpll: Allow associating dpll pin with a firmware node dpll: zl3073x: Associate pin with fwnode handle dpll: Support dynamic pin index allocation dpll: zl3073x: Add support for mux pin type dpll: Enhance and consolidate reference counting logic dpll: Add reference count tracking support drivers: Add support for DPLL reference count tracking Petr Oros (1): dpll: Add notifier chain for dpll events drivers/dpll/Kconfig | 15 + drivers/dpll/dpll_core.c | 288 ++++++- drivers/dpll/dpll_core.h | 11 + drivers/dpll/dpll_netlink.c | 6 + drivers/dpll/zl3073x/dpll.c | 15 +- drivers/dpll/zl3073x/dpll.h | 2 + drivers/dpll/zl3073x/prop.c | 2 + drivers/net/ethernet/intel/ice/ice_dpll.c | 755 +++++++++++++++--- drivers/net/ethernet/intel/ice/ice_dpll.h | 30 + drivers/net/ethernet/intel/ice/ice_lib.c | 3 + drivers/net/ethernet/intel/ice/ice_ptp.c | 32 + drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 9 +- drivers/net/ethernet/intel/ice/ice_tspll.c | 217 +++++ drivers/net/ethernet/intel/ice/ice_tspll.h | 13 +- drivers/net/ethernet/intel/ice/ice_type.h | 6 + .../net/ethernet/mellanox/mlx5/core/dpll.c | 16 +- drivers/ptp/ptp_ocp.c | 18 +- include/linux/dpll.h | 59 +- 18 files changed, 1347 insertions(+), 150 deletions(-) -- 2.52.0
Associate the registered DPLL pin with its firmware node by calling dpll_pin_fwnode_set(). This links the created pin object to its corresponding DT/ACPI node in the DPLL core. Consequently, this enables consumer drivers (such as network drivers) to locate and request this specific pin using the fwnode_dpll_pin_find() helper. Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Signed-off-by: Ivan Vecera <ivecera@redhat.com> --- drivers/dpll/zl3073x/dpll.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/dpll/zl3073x/dpll.c b/drivers/dpll/zl3073x/dpll.c index 7d8ed948b9706..9eed21088adac 100644 --- a/drivers/dpll/zl3073x/dpll.c +++ b/drivers/dpll/zl3073x/dpll.c @@ -1485,6 +1485,7 @@ zl3073x_dpll_pin_register(struct zl3073x_dpll_pin *pin, u32 index) rc = PTR_ERR(pin->dpll_pin); goto err_pin_get; } + dpll_pin_fwnode_set(pin->dpll_pin, props->fwnode); if (zl3073x_dpll_is_input_pin(pin)) ops = &zl3073x_dpll_input_pin_ops; -- 2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:31 +0100", "thread_id": "20260202171638.17427-5-ivecera@redhat.com.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
This series introduces Synchronous Ethernet (SyncE) support for the Intel E825-C Ethernet controller. Unlike previous generations where DPLL connections were implicitly assumed, the E825-C architecture relies on the platform firmware (ACPI) to describe the physical connections between the Ethernet controller and external DPLLs (such as the ZL3073x). To accommodate this, the series extends the DPLL subsystem to support firmware node (fwnode) associations, asynchronous discovery via notifiers, and dynamic pin management. Additionally, a significant refactor of the DPLL reference counting logic is included to ensure robustness and debuggability. DPLL Core Extensions: * Firmware Node Association: Pins can now be associated with a struct fwnode_handle after allocation via dpll_pin_fwnode_set(). This allows drivers to link pin objects with their corresponding DT/ACPI nodes. * Asynchronous Notifiers: A raw notifier chain is added to the DPLL core. This allows the Ethernet driver to subscribe to events and react when the platform DPLL driver registers the parent pins, resolving probe ordering dependencies. * Dynamic Indexing: Drivers can now request DPLL_PIN_IDX_UNSPEC to have the core automatically allocate a unique pin index. Reference Counting & Debugging: * Refactor: The reference counting logic in the core is consolidated. Internal list management helpers now automatically handle hold/put operations, removing fragile open-coded logic in the registration paths. * Reference Tracking: A new Kconfig option DPLL_REFCNT_TRACKER is added. This allows developers to instrument and debug reference leaks by recording stack traces for every get/put operation. Driver Updates: * zl3073x: Updated to associate pins with fwnode handles using the new setter and support the 'mux' pin type. * ice: Implements the E825-C specific hardware configuration for SyncE (CGU registers). It utilizes the new notifier and fwnode APIs to dynamically discover and attach to the platform DPLLs. Patch Summary: Patch 1: DPLL Core (fwnode association). Patch 2: Driver zl3073x (Set fwnode). Patch 3-4: DPLL Core (Notifiers and dynamic IDs). Patch 5: Driver zl3073x (Mux type). Patch 6: DPLL Core (Refcount refactor). Patch 7-8: Refcount tracking infrastructure and driver updates. Patch 9: Driver ice (E825-C SyncE logic). Changes in v4: * Fixed documentation and function stub issues found by AI Arkadiusz Kubalewski (1): ice: dpll: Support E825-C SyncE and dynamic pin discovery Ivan Vecera (7): dpll: Allow associating dpll pin with a firmware node dpll: zl3073x: Associate pin with fwnode handle dpll: Support dynamic pin index allocation dpll: zl3073x: Add support for mux pin type dpll: Enhance and consolidate reference counting logic dpll: Add reference count tracking support drivers: Add support for DPLL reference count tracking Petr Oros (1): dpll: Add notifier chain for dpll events drivers/dpll/Kconfig | 15 + drivers/dpll/dpll_core.c | 288 ++++++- drivers/dpll/dpll_core.h | 11 + drivers/dpll/dpll_netlink.c | 6 + drivers/dpll/zl3073x/dpll.c | 15 +- drivers/dpll/zl3073x/dpll.h | 2 + drivers/dpll/zl3073x/prop.c | 2 + drivers/net/ethernet/intel/ice/ice_dpll.c | 755 +++++++++++++++--- drivers/net/ethernet/intel/ice/ice_dpll.h | 30 + drivers/net/ethernet/intel/ice/ice_lib.c | 3 + drivers/net/ethernet/intel/ice/ice_ptp.c | 32 + drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 9 +- drivers/net/ethernet/intel/ice/ice_tspll.c | 217 +++++ drivers/net/ethernet/intel/ice/ice_tspll.h | 13 +- drivers/net/ethernet/intel/ice/ice_type.h | 6 + .../net/ethernet/mellanox/mlx5/core/dpll.c | 16 +- drivers/ptp/ptp_ocp.c | 18 +- include/linux/dpll.h | 59 +- 18 files changed, 1347 insertions(+), 150 deletions(-) -- 2.52.0
From: Petr Oros <poros@redhat.com> Currently, the DPLL subsystem reports events (creation, deletion, changes) to userspace via Netlink. However, there is no mechanism for other kernel components to be notified of these events directly. Add a raw notifier chain to the DPLL core protected by dpll_lock. This allows other kernel subsystems or drivers to register callbacks and receive notifications when DPLL devices or pins are created, deleted, or modified. Define the following: - Registration helpers: {,un}register_dpll_notifier() - Event types: DPLL_DEVICE_CREATED, DPLL_PIN_CREATED, etc. - Context structures: dpll_{device,pin}_notifier_info to pass relevant data to the listeners. The notification chain is invoked alongside the existing Netlink event generation to ensure in-kernel listeners are kept in sync with the subsystem state. Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev> Co-developed-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: Petr Oros <poros@redhat.com> --- drivers/dpll/dpll_core.c | 57 +++++++++++++++++++++++++++++++++++++ drivers/dpll/dpll_core.h | 4 +++ drivers/dpll/dpll_netlink.c | 6 ++++ include/linux/dpll.h | 29 +++++++++++++++++++ 4 files changed, 96 insertions(+) diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c index f04ed7195cadd..b05fe2ba46d91 100644 --- a/drivers/dpll/dpll_core.c +++ b/drivers/dpll/dpll_core.c @@ -23,6 +23,8 @@ DEFINE_MUTEX(dpll_lock); DEFINE_XARRAY_FLAGS(dpll_device_xa, XA_FLAGS_ALLOC); DEFINE_XARRAY_FLAGS(dpll_pin_xa, XA_FLAGS_ALLOC); +static RAW_NOTIFIER_HEAD(dpll_notifier_chain); + static u32 dpll_device_xa_id; static u32 dpll_pin_xa_id; @@ -46,6 +48,39 @@ struct dpll_pin_registration { void *cookie; }; +static int call_dpll_notifiers(unsigned long action, void *info) +{ + lockdep_assert_held(&dpll_lock); + return raw_notifier_call_chain(&dpll_notifier_chain, action, info); +} + +void dpll_device_notify(struct dpll_device *dpll, unsigned long action) +{ + struct dpll_device_notifier_info info = { + .dpll = dpll, + .id = dpll->id, + .idx = dpll->device_idx, + .clock_id = dpll->clock_id, + .type = dpll->type, + }; + + call_dpll_notifiers(action, &info); +} + +void dpll_pin_notify(struct dpll_pin *pin, unsigned long action) +{ + struct dpll_pin_notifier_info info = { + .pin = pin, + .id = pin->id, + .idx = pin->pin_idx, + .clock_id = pin->clock_id, + .fwnode = pin->fwnode, + .prop = &pin->prop, + }; + + call_dpll_notifiers(action, &info); +} + struct dpll_device *dpll_device_get_by_id(int id) { if (xa_get_mark(&dpll_device_xa, id, DPLL_REGISTERED)) @@ -539,6 +574,28 @@ void dpll_netdev_pin_clear(struct net_device *dev) } EXPORT_SYMBOL(dpll_netdev_pin_clear); +int register_dpll_notifier(struct notifier_block *nb) +{ + int ret; + + mutex_lock(&dpll_lock); + ret = raw_notifier_chain_register(&dpll_notifier_chain, nb); + mutex_unlock(&dpll_lock); + return ret; +} +EXPORT_SYMBOL_GPL(register_dpll_notifier); + +int unregister_dpll_notifier(struct notifier_block *nb) +{ + int ret; + + mutex_lock(&dpll_lock); + ret = raw_notifier_chain_unregister(&dpll_notifier_chain, nb); + mutex_unlock(&dpll_lock); + return ret; +} +EXPORT_SYMBOL_GPL(unregister_dpll_notifier); + /** * dpll_pin_get - find existing or create new dpll pin * @clock_id: clock_id of creator diff --git a/drivers/dpll/dpll_core.h b/drivers/dpll/dpll_core.h index d3e17ff0ecef0..b7b4bb251f739 100644 --- a/drivers/dpll/dpll_core.h +++ b/drivers/dpll/dpll_core.h @@ -91,4 +91,8 @@ struct dpll_pin_ref *dpll_xa_ref_dpll_first(struct xarray *xa_refs); extern struct xarray dpll_device_xa; extern struct xarray dpll_pin_xa; extern struct mutex dpll_lock; + +void dpll_device_notify(struct dpll_device *dpll, unsigned long action); +void dpll_pin_notify(struct dpll_pin *pin, unsigned long action); + #endif diff --git a/drivers/dpll/dpll_netlink.c b/drivers/dpll/dpll_netlink.c index 904199ddd1781..83cbd64abf5a4 100644 --- a/drivers/dpll/dpll_netlink.c +++ b/drivers/dpll/dpll_netlink.c @@ -761,17 +761,20 @@ dpll_device_event_send(enum dpll_cmd event, struct dpll_device *dpll) int dpll_device_create_ntf(struct dpll_device *dpll) { + dpll_device_notify(dpll, DPLL_DEVICE_CREATED); return dpll_device_event_send(DPLL_CMD_DEVICE_CREATE_NTF, dpll); } int dpll_device_delete_ntf(struct dpll_device *dpll) { + dpll_device_notify(dpll, DPLL_DEVICE_DELETED); return dpll_device_event_send(DPLL_CMD_DEVICE_DELETE_NTF, dpll); } static int __dpll_device_change_ntf(struct dpll_device *dpll) { + dpll_device_notify(dpll, DPLL_DEVICE_CHANGED); return dpll_device_event_send(DPLL_CMD_DEVICE_CHANGE_NTF, dpll); } @@ -829,16 +832,19 @@ dpll_pin_event_send(enum dpll_cmd event, struct dpll_pin *pin) int dpll_pin_create_ntf(struct dpll_pin *pin) { + dpll_pin_notify(pin, DPLL_PIN_CREATED); return dpll_pin_event_send(DPLL_CMD_PIN_CREATE_NTF, pin); } int dpll_pin_delete_ntf(struct dpll_pin *pin) { + dpll_pin_notify(pin, DPLL_PIN_DELETED); return dpll_pin_event_send(DPLL_CMD_PIN_DELETE_NTF, pin); } int __dpll_pin_change_ntf(struct dpll_pin *pin) { + dpll_pin_notify(pin, DPLL_PIN_CHANGED); return dpll_pin_event_send(DPLL_CMD_PIN_CHANGE_NTF, pin); } diff --git a/include/linux/dpll.h b/include/linux/dpll.h index f2e8660e90cdf..8ed90dfc65f05 100644 --- a/include/linux/dpll.h +++ b/include/linux/dpll.h @@ -11,6 +11,7 @@ #include <linux/device.h> #include <linux/netlink.h> #include <linux/netdevice.h> +#include <linux/notifier.h> #include <linux/rtnetlink.h> struct dpll_device; @@ -172,6 +173,30 @@ struct dpll_pin_properties { u32 phase_gran; }; +#define DPLL_DEVICE_CREATED 1 +#define DPLL_DEVICE_DELETED 2 +#define DPLL_DEVICE_CHANGED 3 +#define DPLL_PIN_CREATED 4 +#define DPLL_PIN_DELETED 5 +#define DPLL_PIN_CHANGED 6 + +struct dpll_device_notifier_info { + struct dpll_device *dpll; + u32 id; + u32 idx; + u64 clock_id; + enum dpll_type type; +}; + +struct dpll_pin_notifier_info { + struct dpll_pin *pin; + u32 id; + u32 idx; + u64 clock_id; + const struct fwnode_handle *fwnode; + const struct dpll_pin_properties *prop; +}; + #if IS_ENABLED(CONFIG_DPLL) void dpll_netdev_pin_set(struct net_device *dev, struct dpll_pin *dpll_pin); void dpll_netdev_pin_clear(struct net_device *dev); @@ -242,4 +267,8 @@ int dpll_device_change_ntf(struct dpll_device *dpll); int dpll_pin_change_ntf(struct dpll_pin *pin); +int register_dpll_notifier(struct notifier_block *nb); + +int unregister_dpll_notifier(struct notifier_block *nb); + #endif -- 2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:32 +0100", "thread_id": "20260202171638.17427-5-ivecera@redhat.com.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
This series introduces Synchronous Ethernet (SyncE) support for the Intel E825-C Ethernet controller. Unlike previous generations where DPLL connections were implicitly assumed, the E825-C architecture relies on the platform firmware (ACPI) to describe the physical connections between the Ethernet controller and external DPLLs (such as the ZL3073x). To accommodate this, the series extends the DPLL subsystem to support firmware node (fwnode) associations, asynchronous discovery via notifiers, and dynamic pin management. Additionally, a significant refactor of the DPLL reference counting logic is included to ensure robustness and debuggability. DPLL Core Extensions: * Firmware Node Association: Pins can now be associated with a struct fwnode_handle after allocation via dpll_pin_fwnode_set(). This allows drivers to link pin objects with their corresponding DT/ACPI nodes. * Asynchronous Notifiers: A raw notifier chain is added to the DPLL core. This allows the Ethernet driver to subscribe to events and react when the platform DPLL driver registers the parent pins, resolving probe ordering dependencies. * Dynamic Indexing: Drivers can now request DPLL_PIN_IDX_UNSPEC to have the core automatically allocate a unique pin index. Reference Counting & Debugging: * Refactor: The reference counting logic in the core is consolidated. Internal list management helpers now automatically handle hold/put operations, removing fragile open-coded logic in the registration paths. * Reference Tracking: A new Kconfig option DPLL_REFCNT_TRACKER is added. This allows developers to instrument and debug reference leaks by recording stack traces for every get/put operation. Driver Updates: * zl3073x: Updated to associate pins with fwnode handles using the new setter and support the 'mux' pin type. * ice: Implements the E825-C specific hardware configuration for SyncE (CGU registers). It utilizes the new notifier and fwnode APIs to dynamically discover and attach to the platform DPLLs. Patch Summary: Patch 1: DPLL Core (fwnode association). Patch 2: Driver zl3073x (Set fwnode). Patch 3-4: DPLL Core (Notifiers and dynamic IDs). Patch 5: Driver zl3073x (Mux type). Patch 6: DPLL Core (Refcount refactor). Patch 7-8: Refcount tracking infrastructure and driver updates. Patch 9: Driver ice (E825-C SyncE logic). Changes in v4: * Fixed documentation and function stub issues found by AI Arkadiusz Kubalewski (1): ice: dpll: Support E825-C SyncE and dynamic pin discovery Ivan Vecera (7): dpll: Allow associating dpll pin with a firmware node dpll: zl3073x: Associate pin with fwnode handle dpll: Support dynamic pin index allocation dpll: zl3073x: Add support for mux pin type dpll: Enhance and consolidate reference counting logic dpll: Add reference count tracking support drivers: Add support for DPLL reference count tracking Petr Oros (1): dpll: Add notifier chain for dpll events drivers/dpll/Kconfig | 15 + drivers/dpll/dpll_core.c | 288 ++++++- drivers/dpll/dpll_core.h | 11 + drivers/dpll/dpll_netlink.c | 6 + drivers/dpll/zl3073x/dpll.c | 15 +- drivers/dpll/zl3073x/dpll.h | 2 + drivers/dpll/zl3073x/prop.c | 2 + drivers/net/ethernet/intel/ice/ice_dpll.c | 755 +++++++++++++++--- drivers/net/ethernet/intel/ice/ice_dpll.h | 30 + drivers/net/ethernet/intel/ice/ice_lib.c | 3 + drivers/net/ethernet/intel/ice/ice_ptp.c | 32 + drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 9 +- drivers/net/ethernet/intel/ice/ice_tspll.c | 217 +++++ drivers/net/ethernet/intel/ice/ice_tspll.h | 13 +- drivers/net/ethernet/intel/ice/ice_type.h | 6 + .../net/ethernet/mellanox/mlx5/core/dpll.c | 16 +- drivers/ptp/ptp_ocp.c | 18 +- include/linux/dpll.h | 59 +- 18 files changed, 1347 insertions(+), 150 deletions(-) -- 2.52.0
Allow drivers to register DPLL pins without manually specifying a pin index. Currently, drivers must provide a unique pin index when calling dpll_pin_get(). This works well for hardware-mapped pins but creates friction for drivers handling virtual pins or those without a strict hardware indexing scheme. Introduce DPLL_PIN_IDX_UNSPEC (U32_MAX). When a driver passes this value as the pin index: 1. The core allocates a unique index using an IDA 2. The allocated index is mapped to a range starting above `INT_MAX` This separation ensures that dynamically allocated indices never collide with standard driver-provided hardware indices, which are assumed to be within the `0` to `INT_MAX` range. The index is automatically freed when the pin is released in dpll_pin_put(). Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Signed-off-by: Ivan Vecera <ivecera@redhat.com> --- v2: * fixed integer overflow in dpll_pin_idx_free() --- drivers/dpll/dpll_core.c | 48 ++++++++++++++++++++++++++++++++++++++-- include/linux/dpll.h | 2 ++ 2 files changed, 48 insertions(+), 2 deletions(-) diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c index b05fe2ba46d91..59081cf2c73ae 100644 --- a/drivers/dpll/dpll_core.c +++ b/drivers/dpll/dpll_core.c @@ -10,6 +10,7 @@ #include <linux/device.h> #include <linux/err.h> +#include <linux/idr.h> #include <linux/property.h> #include <linux/slab.h> #include <linux/string.h> @@ -24,6 +25,7 @@ DEFINE_XARRAY_FLAGS(dpll_device_xa, XA_FLAGS_ALLOC); DEFINE_XARRAY_FLAGS(dpll_pin_xa, XA_FLAGS_ALLOC); static RAW_NOTIFIER_HEAD(dpll_notifier_chain); +static DEFINE_IDA(dpll_pin_idx_ida); static u32 dpll_device_xa_id; static u32 dpll_pin_xa_id; @@ -464,6 +466,36 @@ void dpll_device_unregister(struct dpll_device *dpll, } EXPORT_SYMBOL_GPL(dpll_device_unregister); +static int dpll_pin_idx_alloc(u32 *pin_idx) +{ + int ret; + + if (!pin_idx) + return -EINVAL; + + /* Alloc unique number from IDA. Number belongs to <0, INT_MAX> range */ + ret = ida_alloc(&dpll_pin_idx_ida, GFP_KERNEL); + if (ret < 0) + return ret; + + /* Map the value to dynamic pin index range <INT_MAX+1, U32_MAX> */ + *pin_idx = (u32)ret + INT_MAX + 1; + + return 0; +} + +static void dpll_pin_idx_free(u32 pin_idx) +{ + if (pin_idx <= INT_MAX) + return; /* Not a dynamic pin index */ + + /* Map the index value from dynamic pin index range to IDA range and + * free it. + */ + pin_idx -= (u32)INT_MAX + 1; + ida_free(&dpll_pin_idx_ida, pin_idx); +} + static void dpll_pin_prop_free(struct dpll_pin_properties *prop) { kfree(prop->package_label); @@ -521,9 +553,18 @@ dpll_pin_alloc(u64 clock_id, u32 pin_idx, struct module *module, struct dpll_pin *pin; int ret; + if (pin_idx == DPLL_PIN_IDX_UNSPEC) { + ret = dpll_pin_idx_alloc(&pin_idx); + if (ret) + return ERR_PTR(ret); + } else if (pin_idx > INT_MAX) { + return ERR_PTR(-EINVAL); + } pin = kzalloc(sizeof(*pin), GFP_KERNEL); - if (!pin) - return ERR_PTR(-ENOMEM); + if (!pin) { + ret = -ENOMEM; + goto err_pin_alloc; + } pin->pin_idx = pin_idx; pin->clock_id = clock_id; pin->module = module; @@ -551,6 +592,8 @@ dpll_pin_alloc(u64 clock_id, u32 pin_idx, struct module *module, dpll_pin_prop_free(&pin->prop); err_pin_prop: kfree(pin); +err_pin_alloc: + dpll_pin_idx_free(pin_idx); return ERR_PTR(ret); } @@ -654,6 +697,7 @@ void dpll_pin_put(struct dpll_pin *pin) xa_destroy(&pin->ref_sync_pins); dpll_pin_prop_free(&pin->prop); fwnode_handle_put(pin->fwnode); + dpll_pin_idx_free(pin->pin_idx); kfree_rcu(pin, rcu); } mutex_unlock(&dpll_lock); diff --git a/include/linux/dpll.h b/include/linux/dpll.h index 8ed90dfc65f05..8fff048131f1d 100644 --- a/include/linux/dpll.h +++ b/include/linux/dpll.h @@ -240,6 +240,8 @@ int dpll_device_register(struct dpll_device *dpll, enum dpll_type type, void dpll_device_unregister(struct dpll_device *dpll, const struct dpll_device_ops *ops, void *priv); +#define DPLL_PIN_IDX_UNSPEC U32_MAX + struct dpll_pin * dpll_pin_get(u64 clock_id, u32 dev_driver_id, struct module *module, const struct dpll_pin_properties *prop); -- 2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:33 +0100", "thread_id": "20260202171638.17427-5-ivecera@redhat.com.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
This series introduces Synchronous Ethernet (SyncE) support for the Intel E825-C Ethernet controller. Unlike previous generations where DPLL connections were implicitly assumed, the E825-C architecture relies on the platform firmware (ACPI) to describe the physical connections between the Ethernet controller and external DPLLs (such as the ZL3073x). To accommodate this, the series extends the DPLL subsystem to support firmware node (fwnode) associations, asynchronous discovery via notifiers, and dynamic pin management. Additionally, a significant refactor of the DPLL reference counting logic is included to ensure robustness and debuggability. DPLL Core Extensions: * Firmware Node Association: Pins can now be associated with a struct fwnode_handle after allocation via dpll_pin_fwnode_set(). This allows drivers to link pin objects with their corresponding DT/ACPI nodes. * Asynchronous Notifiers: A raw notifier chain is added to the DPLL core. This allows the Ethernet driver to subscribe to events and react when the platform DPLL driver registers the parent pins, resolving probe ordering dependencies. * Dynamic Indexing: Drivers can now request DPLL_PIN_IDX_UNSPEC to have the core automatically allocate a unique pin index. Reference Counting & Debugging: * Refactor: The reference counting logic in the core is consolidated. Internal list management helpers now automatically handle hold/put operations, removing fragile open-coded logic in the registration paths. * Reference Tracking: A new Kconfig option DPLL_REFCNT_TRACKER is added. This allows developers to instrument and debug reference leaks by recording stack traces for every get/put operation. Driver Updates: * zl3073x: Updated to associate pins with fwnode handles using the new setter and support the 'mux' pin type. * ice: Implements the E825-C specific hardware configuration for SyncE (CGU registers). It utilizes the new notifier and fwnode APIs to dynamically discover and attach to the platform DPLLs. Patch Summary: Patch 1: DPLL Core (fwnode association). Patch 2: Driver zl3073x (Set fwnode). Patch 3-4: DPLL Core (Notifiers and dynamic IDs). Patch 5: Driver zl3073x (Mux type). Patch 6: DPLL Core (Refcount refactor). Patch 7-8: Refcount tracking infrastructure and driver updates. Patch 9: Driver ice (E825-C SyncE logic). Changes in v4: * Fixed documentation and function stub issues found by AI Arkadiusz Kubalewski (1): ice: dpll: Support E825-C SyncE and dynamic pin discovery Ivan Vecera (7): dpll: Allow associating dpll pin with a firmware node dpll: zl3073x: Associate pin with fwnode handle dpll: Support dynamic pin index allocation dpll: zl3073x: Add support for mux pin type dpll: Enhance and consolidate reference counting logic dpll: Add reference count tracking support drivers: Add support for DPLL reference count tracking Petr Oros (1): dpll: Add notifier chain for dpll events drivers/dpll/Kconfig | 15 + drivers/dpll/dpll_core.c | 288 ++++++- drivers/dpll/dpll_core.h | 11 + drivers/dpll/dpll_netlink.c | 6 + drivers/dpll/zl3073x/dpll.c | 15 +- drivers/dpll/zl3073x/dpll.h | 2 + drivers/dpll/zl3073x/prop.c | 2 + drivers/net/ethernet/intel/ice/ice_dpll.c | 755 +++++++++++++++--- drivers/net/ethernet/intel/ice/ice_dpll.h | 30 + drivers/net/ethernet/intel/ice/ice_lib.c | 3 + drivers/net/ethernet/intel/ice/ice_ptp.c | 32 + drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 9 +- drivers/net/ethernet/intel/ice/ice_tspll.c | 217 +++++ drivers/net/ethernet/intel/ice/ice_tspll.h | 13 +- drivers/net/ethernet/intel/ice/ice_type.h | 6 + .../net/ethernet/mellanox/mlx5/core/dpll.c | 16 +- drivers/ptp/ptp_ocp.c | 18 +- include/linux/dpll.h | 59 +- 18 files changed, 1347 insertions(+), 150 deletions(-) -- 2.52.0
Add parsing for the "mux" string in the 'connection-type' pin property mapping it to DPLL_PIN_TYPE_MUX. Recognizing this type in the driver allows these pins to be taken as parent pins for pin-on-pin pins coming from different modules (e.g. network drivers). Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Signed-off-by: Ivan Vecera <ivecera@redhat.com> --- drivers/dpll/zl3073x/prop.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/dpll/zl3073x/prop.c b/drivers/dpll/zl3073x/prop.c index 4ed153087570b..ad1f099cbe2b5 100644 --- a/drivers/dpll/zl3073x/prop.c +++ b/drivers/dpll/zl3073x/prop.c @@ -249,6 +249,8 @@ struct zl3073x_pin_props *zl3073x_pin_props_get(struct zl3073x_dev *zldev, props->dpll_props.type = DPLL_PIN_TYPE_INT_OSCILLATOR; else if (!strcmp(type, "synce")) props->dpll_props.type = DPLL_PIN_TYPE_SYNCE_ETH_PORT; + else if (!strcmp(type, "mux")) + props->dpll_props.type = DPLL_PIN_TYPE_MUX; else dev_warn(zldev->dev, "Unknown or unsupported pin type '%s'\n", -- 2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:34 +0100", "thread_id": "20260202171638.17427-5-ivecera@redhat.com.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
This series introduces Synchronous Ethernet (SyncE) support for the Intel E825-C Ethernet controller. Unlike previous generations where DPLL connections were implicitly assumed, the E825-C architecture relies on the platform firmware (ACPI) to describe the physical connections between the Ethernet controller and external DPLLs (such as the ZL3073x). To accommodate this, the series extends the DPLL subsystem to support firmware node (fwnode) associations, asynchronous discovery via notifiers, and dynamic pin management. Additionally, a significant refactor of the DPLL reference counting logic is included to ensure robustness and debuggability. DPLL Core Extensions: * Firmware Node Association: Pins can now be associated with a struct fwnode_handle after allocation via dpll_pin_fwnode_set(). This allows drivers to link pin objects with their corresponding DT/ACPI nodes. * Asynchronous Notifiers: A raw notifier chain is added to the DPLL core. This allows the Ethernet driver to subscribe to events and react when the platform DPLL driver registers the parent pins, resolving probe ordering dependencies. * Dynamic Indexing: Drivers can now request DPLL_PIN_IDX_UNSPEC to have the core automatically allocate a unique pin index. Reference Counting & Debugging: * Refactor: The reference counting logic in the core is consolidated. Internal list management helpers now automatically handle hold/put operations, removing fragile open-coded logic in the registration paths. * Reference Tracking: A new Kconfig option DPLL_REFCNT_TRACKER is added. This allows developers to instrument and debug reference leaks by recording stack traces for every get/put operation. Driver Updates: * zl3073x: Updated to associate pins with fwnode handles using the new setter and support the 'mux' pin type. * ice: Implements the E825-C specific hardware configuration for SyncE (CGU registers). It utilizes the new notifier and fwnode APIs to dynamically discover and attach to the platform DPLLs. Patch Summary: Patch 1: DPLL Core (fwnode association). Patch 2: Driver zl3073x (Set fwnode). Patch 3-4: DPLL Core (Notifiers and dynamic IDs). Patch 5: Driver zl3073x (Mux type). Patch 6: DPLL Core (Refcount refactor). Patch 7-8: Refcount tracking infrastructure and driver updates. Patch 9: Driver ice (E825-C SyncE logic). Changes in v4: * Fixed documentation and function stub issues found by AI Arkadiusz Kubalewski (1): ice: dpll: Support E825-C SyncE and dynamic pin discovery Ivan Vecera (7): dpll: Allow associating dpll pin with a firmware node dpll: zl3073x: Associate pin with fwnode handle dpll: Support dynamic pin index allocation dpll: zl3073x: Add support for mux pin type dpll: Enhance and consolidate reference counting logic dpll: Add reference count tracking support drivers: Add support for DPLL reference count tracking Petr Oros (1): dpll: Add notifier chain for dpll events drivers/dpll/Kconfig | 15 + drivers/dpll/dpll_core.c | 288 ++++++- drivers/dpll/dpll_core.h | 11 + drivers/dpll/dpll_netlink.c | 6 + drivers/dpll/zl3073x/dpll.c | 15 +- drivers/dpll/zl3073x/dpll.h | 2 + drivers/dpll/zl3073x/prop.c | 2 + drivers/net/ethernet/intel/ice/ice_dpll.c | 755 +++++++++++++++--- drivers/net/ethernet/intel/ice/ice_dpll.h | 30 + drivers/net/ethernet/intel/ice/ice_lib.c | 3 + drivers/net/ethernet/intel/ice/ice_ptp.c | 32 + drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 9 +- drivers/net/ethernet/intel/ice/ice_tspll.c | 217 +++++ drivers/net/ethernet/intel/ice/ice_tspll.h | 13 +- drivers/net/ethernet/intel/ice/ice_type.h | 6 + .../net/ethernet/mellanox/mlx5/core/dpll.c | 16 +- drivers/ptp/ptp_ocp.c | 18 +- include/linux/dpll.h | 59 +- 18 files changed, 1347 insertions(+), 150 deletions(-) -- 2.52.0
Refactor the reference counting mechanism for DPLL devices and pins to improve consistency and prevent potential lifetime issues. Introduce internal helpers __dpll_{device,pin}_{hold,put}() to centralize reference management. Update the internal XArray reference helpers (dpll_xa_ref_*) to automatically grab a reference to the target object when it is added to a list, and release it when removed. This ensures that objects linked internally (e.g., pins referenced by parent pins) are properly kept alive without relying on the caller to manually manage the count. Consequently, remove the now redundant manual `refcount_inc/dec` calls in dpll_pin_on_pin_{,un}register()`, as ownership is now correctly handled by the dpll_xa_ref_* functions. Additionally, ensure that dpll_device_{,un}register()` takes/releases a reference to the device, ensuring the device object remains valid for the duration of its registration. Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Signed-off-by: Ivan Vecera <ivecera@redhat.com> --- drivers/dpll/dpll_core.c | 74 +++++++++++++++++++++++++++------------- 1 file changed, 50 insertions(+), 24 deletions(-) diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c index 59081cf2c73ae..f6ab4f0cad84d 100644 --- a/drivers/dpll/dpll_core.c +++ b/drivers/dpll/dpll_core.c @@ -83,6 +83,45 @@ void dpll_pin_notify(struct dpll_pin *pin, unsigned long action) call_dpll_notifiers(action, &info); } +static void __dpll_device_hold(struct dpll_device *dpll) +{ + refcount_inc(&dpll->refcount); +} + +static void __dpll_device_put(struct dpll_device *dpll) +{ + if (refcount_dec_and_test(&dpll->refcount)) { + ASSERT_DPLL_NOT_REGISTERED(dpll); + WARN_ON_ONCE(!xa_empty(&dpll->pin_refs)); + xa_destroy(&dpll->pin_refs); + xa_erase(&dpll_device_xa, dpll->id); + WARN_ON(!list_empty(&dpll->registration_list)); + kfree(dpll); + } +} + +static void __dpll_pin_hold(struct dpll_pin *pin) +{ + refcount_inc(&pin->refcount); +} + +static void dpll_pin_idx_free(u32 pin_idx); +static void dpll_pin_prop_free(struct dpll_pin_properties *prop); + +static void __dpll_pin_put(struct dpll_pin *pin) +{ + if (refcount_dec_and_test(&pin->refcount)) { + xa_erase(&dpll_pin_xa, pin->id); + xa_destroy(&pin->dpll_refs); + xa_destroy(&pin->parent_refs); + xa_destroy(&pin->ref_sync_pins); + dpll_pin_prop_free(&pin->prop); + fwnode_handle_put(pin->fwnode); + dpll_pin_idx_free(pin->pin_idx); + kfree_rcu(pin, rcu); + } +} + struct dpll_device *dpll_device_get_by_id(int id) { if (xa_get_mark(&dpll_device_xa, id, DPLL_REGISTERED)) @@ -152,6 +191,7 @@ dpll_xa_ref_pin_add(struct xarray *xa_pins, struct dpll_pin *pin, reg->ops = ops; reg->priv = priv; reg->cookie = cookie; + __dpll_pin_hold(pin); if (ref_exists) refcount_inc(&ref->refcount); list_add_tail(&reg->list, &ref->registration_list); @@ -174,6 +214,7 @@ static int dpll_xa_ref_pin_del(struct xarray *xa_pins, struct dpll_pin *pin, if (WARN_ON(!reg)) return -EINVAL; list_del(&reg->list); + __dpll_pin_put(pin); kfree(reg); if (refcount_dec_and_test(&ref->refcount)) { xa_erase(xa_pins, i); @@ -231,6 +272,7 @@ dpll_xa_ref_dpll_add(struct xarray *xa_dplls, struct dpll_device *dpll, reg->ops = ops; reg->priv = priv; reg->cookie = cookie; + __dpll_device_hold(dpll); if (ref_exists) refcount_inc(&ref->refcount); list_add_tail(&reg->list, &ref->registration_list); @@ -253,6 +295,7 @@ dpll_xa_ref_dpll_del(struct xarray *xa_dplls, struct dpll_device *dpll, if (WARN_ON(!reg)) return; list_del(&reg->list); + __dpll_device_put(dpll); kfree(reg); if (refcount_dec_and_test(&ref->refcount)) { xa_erase(xa_dplls, i); @@ -323,8 +366,8 @@ dpll_device_get(u64 clock_id, u32 device_idx, struct module *module) if (dpll->clock_id == clock_id && dpll->device_idx == device_idx && dpll->module == module) { + __dpll_device_hold(dpll); ret = dpll; - refcount_inc(&ret->refcount); break; } } @@ -347,14 +390,7 @@ EXPORT_SYMBOL_GPL(dpll_device_get); void dpll_device_put(struct dpll_device *dpll) { mutex_lock(&dpll_lock); - if (refcount_dec_and_test(&dpll->refcount)) { - ASSERT_DPLL_NOT_REGISTERED(dpll); - WARN_ON_ONCE(!xa_empty(&dpll->pin_refs)); - xa_destroy(&dpll->pin_refs); - xa_erase(&dpll_device_xa, dpll->id); - WARN_ON(!list_empty(&dpll->registration_list)); - kfree(dpll); - } + __dpll_device_put(dpll); mutex_unlock(&dpll_lock); } EXPORT_SYMBOL_GPL(dpll_device_put); @@ -416,6 +452,7 @@ int dpll_device_register(struct dpll_device *dpll, enum dpll_type type, reg->ops = ops; reg->priv = priv; dpll->type = type; + __dpll_device_hold(dpll); first_registration = list_empty(&dpll->registration_list); list_add_tail(&reg->list, &dpll->registration_list); if (!first_registration) { @@ -455,6 +492,7 @@ void dpll_device_unregister(struct dpll_device *dpll, return; } list_del(&reg->list); + __dpll_device_put(dpll); kfree(reg); if (!list_empty(&dpll->registration_list)) { @@ -666,8 +704,8 @@ dpll_pin_get(u64 clock_id, u32 pin_idx, struct module *module, if (pos->clock_id == clock_id && pos->pin_idx == pin_idx && pos->module == module) { + __dpll_pin_hold(pos); ret = pos; - refcount_inc(&ret->refcount); break; } } @@ -690,16 +728,7 @@ EXPORT_SYMBOL_GPL(dpll_pin_get); void dpll_pin_put(struct dpll_pin *pin) { mutex_lock(&dpll_lock); - if (refcount_dec_and_test(&pin->refcount)) { - xa_erase(&dpll_pin_xa, pin->id); - xa_destroy(&pin->dpll_refs); - xa_destroy(&pin->parent_refs); - xa_destroy(&pin->ref_sync_pins); - dpll_pin_prop_free(&pin->prop); - fwnode_handle_put(pin->fwnode); - dpll_pin_idx_free(pin->pin_idx); - kfree_rcu(pin, rcu); - } + __dpll_pin_put(pin); mutex_unlock(&dpll_lock); } EXPORT_SYMBOL_GPL(dpll_pin_put); @@ -740,8 +769,8 @@ struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode) mutex_lock(&dpll_lock); xa_for_each(&dpll_pin_xa, index, pin) { if (pin->fwnode == fwnode) { + __dpll_pin_hold(pin); ret = pin; - refcount_inc(&ret->refcount); break; } } @@ -893,7 +922,6 @@ int dpll_pin_on_pin_register(struct dpll_pin *parent, struct dpll_pin *pin, ret = dpll_xa_ref_pin_add(&pin->parent_refs, parent, ops, priv, pin); if (ret) goto unlock; - refcount_inc(&pin->refcount); xa_for_each(&parent->dpll_refs, i, ref) { ret = __dpll_pin_register(ref->dpll, pin, ops, priv, parent); if (ret) { @@ -913,7 +941,6 @@ int dpll_pin_on_pin_register(struct dpll_pin *parent, struct dpll_pin *pin, parent); dpll_pin_delete_ntf(pin); } - refcount_dec(&pin->refcount); dpll_xa_ref_pin_del(&pin->parent_refs, parent, ops, priv, pin); unlock: mutex_unlock(&dpll_lock); @@ -940,7 +967,6 @@ void dpll_pin_on_pin_unregister(struct dpll_pin *parent, struct dpll_pin *pin, mutex_lock(&dpll_lock); dpll_pin_delete_ntf(pin); dpll_xa_ref_pin_del(&pin->parent_refs, parent, ops, priv, pin); - refcount_dec(&pin->refcount); xa_for_each(&pin->dpll_refs, i, ref) __dpll_pin_unregister(ref->dpll, pin, ops, priv, parent); mutex_unlock(&dpll_lock); -- 2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:35 +0100", "thread_id": "20260202171638.17427-5-ivecera@redhat.com.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
This series introduces Synchronous Ethernet (SyncE) support for the Intel E825-C Ethernet controller. Unlike previous generations where DPLL connections were implicitly assumed, the E825-C architecture relies on the platform firmware (ACPI) to describe the physical connections between the Ethernet controller and external DPLLs (such as the ZL3073x). To accommodate this, the series extends the DPLL subsystem to support firmware node (fwnode) associations, asynchronous discovery via notifiers, and dynamic pin management. Additionally, a significant refactor of the DPLL reference counting logic is included to ensure robustness and debuggability. DPLL Core Extensions: * Firmware Node Association: Pins can now be associated with a struct fwnode_handle after allocation via dpll_pin_fwnode_set(). This allows drivers to link pin objects with their corresponding DT/ACPI nodes. * Asynchronous Notifiers: A raw notifier chain is added to the DPLL core. This allows the Ethernet driver to subscribe to events and react when the platform DPLL driver registers the parent pins, resolving probe ordering dependencies. * Dynamic Indexing: Drivers can now request DPLL_PIN_IDX_UNSPEC to have the core automatically allocate a unique pin index. Reference Counting & Debugging: * Refactor: The reference counting logic in the core is consolidated. Internal list management helpers now automatically handle hold/put operations, removing fragile open-coded logic in the registration paths. * Reference Tracking: A new Kconfig option DPLL_REFCNT_TRACKER is added. This allows developers to instrument and debug reference leaks by recording stack traces for every get/put operation. Driver Updates: * zl3073x: Updated to associate pins with fwnode handles using the new setter and support the 'mux' pin type. * ice: Implements the E825-C specific hardware configuration for SyncE (CGU registers). It utilizes the new notifier and fwnode APIs to dynamically discover and attach to the platform DPLLs. Patch Summary: Patch 1: DPLL Core (fwnode association). Patch 2: Driver zl3073x (Set fwnode). Patch 3-4: DPLL Core (Notifiers and dynamic IDs). Patch 5: Driver zl3073x (Mux type). Patch 6: DPLL Core (Refcount refactor). Patch 7-8: Refcount tracking infrastructure and driver updates. Patch 9: Driver ice (E825-C SyncE logic). Changes in v4: * Fixed documentation and function stub issues found by AI Arkadiusz Kubalewski (1): ice: dpll: Support E825-C SyncE and dynamic pin discovery Ivan Vecera (7): dpll: Allow associating dpll pin with a firmware node dpll: zl3073x: Associate pin with fwnode handle dpll: Support dynamic pin index allocation dpll: zl3073x: Add support for mux pin type dpll: Enhance and consolidate reference counting logic dpll: Add reference count tracking support drivers: Add support for DPLL reference count tracking Petr Oros (1): dpll: Add notifier chain for dpll events drivers/dpll/Kconfig | 15 + drivers/dpll/dpll_core.c | 288 ++++++- drivers/dpll/dpll_core.h | 11 + drivers/dpll/dpll_netlink.c | 6 + drivers/dpll/zl3073x/dpll.c | 15 +- drivers/dpll/zl3073x/dpll.h | 2 + drivers/dpll/zl3073x/prop.c | 2 + drivers/net/ethernet/intel/ice/ice_dpll.c | 755 +++++++++++++++--- drivers/net/ethernet/intel/ice/ice_dpll.h | 30 + drivers/net/ethernet/intel/ice/ice_lib.c | 3 + drivers/net/ethernet/intel/ice/ice_ptp.c | 32 + drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 9 +- drivers/net/ethernet/intel/ice/ice_tspll.c | 217 +++++ drivers/net/ethernet/intel/ice/ice_tspll.h | 13 +- drivers/net/ethernet/intel/ice/ice_type.h | 6 + .../net/ethernet/mellanox/mlx5/core/dpll.c | 16 +- drivers/ptp/ptp_ocp.c | 18 +- include/linux/dpll.h | 59 +- 18 files changed, 1347 insertions(+), 150 deletions(-) -- 2.52.0
Add support for the REF_TRACKER infrastructure to the DPLL subsystem. When enabled, this allows developers to track and debug reference counting leaks or imbalances for dpll_device and dpll_pin objects. It records stack traces for every get/put operation and exposes this information via debugfs at: /sys/kernel/debug/ref_tracker/dpll_device_* /sys/kernel/debug/ref_tracker/dpll_pin_* The following API changes are made to support this: 1. dpll_device_get() / dpll_device_put() now accept a 'dpll_tracker *' (which is a typedef to 'struct ref_tracker *' when enabled, or an empty struct otherwise). 2. dpll_pin_get() / dpll_pin_put() and fwnode_dpll_pin_find() similarly accept the tracker argument. 3. Internal registration structures now hold a tracker to associate the reference held by the registration with the specific owner. All existing in-tree drivers (ice, mlx5, ptp_ocp, zl3073x) are updated to pass NULL for the new tracker argument, maintaining current behavior while enabling future debugging capabilities. Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Co-developed-by: Petr Oros <poros@redhat.com> Signed-off-by: Petr Oros <poros@redhat.com> Signed-off-by: Ivan Vecera <ivecera@redhat.com> --- v4: * added missing tracker parameter to fwnode_dpll_pin_find() stub v3: * added Kconfig dependency on STACKTRACE_SUPPORT and DEBUG_KERNEL --- drivers/dpll/Kconfig | 15 +++ drivers/dpll/dpll_core.c | 98 ++++++++++++++----- drivers/dpll/dpll_core.h | 5 + drivers/dpll/zl3073x/dpll.c | 12 +-- drivers/net/ethernet/intel/ice/ice_dpll.c | 14 +-- .../net/ethernet/mellanox/mlx5/core/dpll.c | 13 +-- drivers/ptp/ptp_ocp.c | 15 +-- include/linux/dpll.h | 21 ++-- 8 files changed, 139 insertions(+), 54 deletions(-) diff --git a/drivers/dpll/Kconfig b/drivers/dpll/Kconfig index ade872c915ac6..be98969f040ab 100644 --- a/drivers/dpll/Kconfig +++ b/drivers/dpll/Kconfig @@ -8,6 +8,21 @@ menu "DPLL device support" config DPLL bool +config DPLL_REFCNT_TRACKER + bool "DPLL reference count tracking" + depends on DEBUG_KERNEL && STACKTRACE_SUPPORT && DPLL + select REF_TRACKER + help + Enable reference count tracking for DPLL devices and pins. + This helps debugging reference leaks and use-after-free bugs + by recording stack traces for each get/put operation. + + The tracking information is exposed via debugfs at: + /sys/kernel/debug/ref_tracker/dpll_device_* + /sys/kernel/debug/ref_tracker/dpll_pin_* + + If unsure, say N. + source "drivers/dpll/zl3073x/Kconfig" endmenu diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c index f6ab4f0cad84d..627a5b39a0efd 100644 --- a/drivers/dpll/dpll_core.c +++ b/drivers/dpll/dpll_core.c @@ -41,6 +41,7 @@ struct dpll_device_registration { struct list_head list; const struct dpll_device_ops *ops; void *priv; + dpll_tracker tracker; }; struct dpll_pin_registration { @@ -48,6 +49,7 @@ struct dpll_pin_registration { const struct dpll_pin_ops *ops; void *priv; void *cookie; + dpll_tracker tracker; }; static int call_dpll_notifiers(unsigned long action, void *info) @@ -83,33 +85,68 @@ void dpll_pin_notify(struct dpll_pin *pin, unsigned long action) call_dpll_notifiers(action, &info); } -static void __dpll_device_hold(struct dpll_device *dpll) +static void dpll_device_tracker_alloc(struct dpll_device *dpll, + dpll_tracker *tracker) { +#ifdef CONFIG_DPLL_REFCNT_TRACKER + ref_tracker_alloc(&dpll->refcnt_tracker, tracker, GFP_KERNEL); +#endif +} + +static void dpll_device_tracker_free(struct dpll_device *dpll, + dpll_tracker *tracker) +{ +#ifdef CONFIG_DPLL_REFCNT_TRACKER + ref_tracker_free(&dpll->refcnt_tracker, tracker); +#endif +} + +static void __dpll_device_hold(struct dpll_device *dpll, dpll_tracker *tracker) +{ + dpll_device_tracker_alloc(dpll, tracker); refcount_inc(&dpll->refcount); } -static void __dpll_device_put(struct dpll_device *dpll) +static void __dpll_device_put(struct dpll_device *dpll, dpll_tracker *tracker) { + dpll_device_tracker_free(dpll, tracker); if (refcount_dec_and_test(&dpll->refcount)) { ASSERT_DPLL_NOT_REGISTERED(dpll); WARN_ON_ONCE(!xa_empty(&dpll->pin_refs)); xa_destroy(&dpll->pin_refs); xa_erase(&dpll_device_xa, dpll->id); WARN_ON(!list_empty(&dpll->registration_list)); + ref_tracker_dir_exit(&dpll->refcnt_tracker); kfree(dpll); } } -static void __dpll_pin_hold(struct dpll_pin *pin) +static void dpll_pin_tracker_alloc(struct dpll_pin *pin, dpll_tracker *tracker) { +#ifdef CONFIG_DPLL_REFCNT_TRACKER + ref_tracker_alloc(&pin->refcnt_tracker, tracker, GFP_KERNEL); +#endif +} + +static void dpll_pin_tracker_free(struct dpll_pin *pin, dpll_tracker *tracker) +{ +#ifdef CONFIG_DPLL_REFCNT_TRACKER + ref_tracker_free(&pin->refcnt_tracker, tracker); +#endif +} + +static void __dpll_pin_hold(struct dpll_pin *pin, dpll_tracker *tracker) +{ + dpll_pin_tracker_alloc(pin, tracker); refcount_inc(&pin->refcount); } static void dpll_pin_idx_free(u32 pin_idx); static void dpll_pin_prop_free(struct dpll_pin_properties *prop); -static void __dpll_pin_put(struct dpll_pin *pin) +static void __dpll_pin_put(struct dpll_pin *pin, dpll_tracker *tracker) { + dpll_pin_tracker_free(pin, tracker); if (refcount_dec_and_test(&pin->refcount)) { xa_erase(&dpll_pin_xa, pin->id); xa_destroy(&pin->dpll_refs); @@ -118,6 +155,7 @@ static void __dpll_pin_put(struct dpll_pin *pin) dpll_pin_prop_free(&pin->prop); fwnode_handle_put(pin->fwnode); dpll_pin_idx_free(pin->pin_idx); + ref_tracker_dir_exit(&pin->refcnt_tracker); kfree_rcu(pin, rcu); } } @@ -191,7 +229,7 @@ dpll_xa_ref_pin_add(struct xarray *xa_pins, struct dpll_pin *pin, reg->ops = ops; reg->priv = priv; reg->cookie = cookie; - __dpll_pin_hold(pin); + __dpll_pin_hold(pin, &reg->tracker); if (ref_exists) refcount_inc(&ref->refcount); list_add_tail(&reg->list, &ref->registration_list); @@ -214,7 +252,7 @@ static int dpll_xa_ref_pin_del(struct xarray *xa_pins, struct dpll_pin *pin, if (WARN_ON(!reg)) return -EINVAL; list_del(&reg->list); - __dpll_pin_put(pin); + __dpll_pin_put(pin, &reg->tracker); kfree(reg); if (refcount_dec_and_test(&ref->refcount)) { xa_erase(xa_pins, i); @@ -272,7 +310,7 @@ dpll_xa_ref_dpll_add(struct xarray *xa_dplls, struct dpll_device *dpll, reg->ops = ops; reg->priv = priv; reg->cookie = cookie; - __dpll_device_hold(dpll); + __dpll_device_hold(dpll, &reg->tracker); if (ref_exists) refcount_inc(&ref->refcount); list_add_tail(&reg->list, &ref->registration_list); @@ -295,7 +333,7 @@ dpll_xa_ref_dpll_del(struct xarray *xa_dplls, struct dpll_device *dpll, if (WARN_ON(!reg)) return; list_del(&reg->list); - __dpll_device_put(dpll); + __dpll_device_put(dpll, &reg->tracker); kfree(reg); if (refcount_dec_and_test(&ref->refcount)) { xa_erase(xa_dplls, i); @@ -337,6 +375,7 @@ dpll_device_alloc(const u64 clock_id, u32 device_idx, struct module *module) return ERR_PTR(ret); } xa_init_flags(&dpll->pin_refs, XA_FLAGS_ALLOC); + ref_tracker_dir_init(&dpll->refcnt_tracker, 128, "dpll_device"); return dpll; } @@ -346,6 +385,7 @@ dpll_device_alloc(const u64 clock_id, u32 device_idx, struct module *module) * @clock_id: clock_id of creator * @device_idx: idx given by device driver * @module: reference to registering module + * @tracker: tracking object for the acquired reference * * Get existing object of a dpll device, unique for given arguments. * Create new if doesn't exist yet. @@ -356,7 +396,8 @@ dpll_device_alloc(const u64 clock_id, u32 device_idx, struct module *module) * * ERR_PTR(X) - error */ struct dpll_device * -dpll_device_get(u64 clock_id, u32 device_idx, struct module *module) +dpll_device_get(u64 clock_id, u32 device_idx, struct module *module, + dpll_tracker *tracker) { struct dpll_device *dpll, *ret = NULL; unsigned long index; @@ -366,13 +407,17 @@ dpll_device_get(u64 clock_id, u32 device_idx, struct module *module) if (dpll->clock_id == clock_id && dpll->device_idx == device_idx && dpll->module == module) { - __dpll_device_hold(dpll); + __dpll_device_hold(dpll, tracker); ret = dpll; break; } } - if (!ret) + if (!ret) { ret = dpll_device_alloc(clock_id, device_idx, module); + if (!IS_ERR(ret)) + dpll_device_tracker_alloc(ret, tracker); + } + mutex_unlock(&dpll_lock); return ret; @@ -382,15 +427,16 @@ EXPORT_SYMBOL_GPL(dpll_device_get); /** * dpll_device_put - decrease the refcount and free memory if possible * @dpll: dpll_device struct pointer + * @tracker: tracking object for the acquired reference * * Context: Acquires a lock (dpll_lock) * Drop reference for a dpll device, if all references are gone, delete * dpll device object. */ -void dpll_device_put(struct dpll_device *dpll) +void dpll_device_put(struct dpll_device *dpll, dpll_tracker *tracker) { mutex_lock(&dpll_lock); - __dpll_device_put(dpll); + __dpll_device_put(dpll, tracker); mutex_unlock(&dpll_lock); } EXPORT_SYMBOL_GPL(dpll_device_put); @@ -452,7 +498,7 @@ int dpll_device_register(struct dpll_device *dpll, enum dpll_type type, reg->ops = ops; reg->priv = priv; dpll->type = type; - __dpll_device_hold(dpll); + __dpll_device_hold(dpll, &reg->tracker); first_registration = list_empty(&dpll->registration_list); list_add_tail(&reg->list, &dpll->registration_list); if (!first_registration) { @@ -492,7 +538,7 @@ void dpll_device_unregister(struct dpll_device *dpll, return; } list_del(&reg->list); - __dpll_device_put(dpll); + __dpll_device_put(dpll, &reg->tracker); kfree(reg); if (!list_empty(&dpll->registration_list)) { @@ -622,6 +668,7 @@ dpll_pin_alloc(u64 clock_id, u32 pin_idx, struct module *module, &dpll_pin_xa_id, GFP_KERNEL); if (ret < 0) goto err_xa_alloc; + ref_tracker_dir_init(&pin->refcnt_tracker, 128, "dpll_pin"); return pin; err_xa_alloc: xa_destroy(&pin->dpll_refs); @@ -683,6 +730,7 @@ EXPORT_SYMBOL_GPL(unregister_dpll_notifier); * @pin_idx: idx given by dev driver * @module: reference to registering module * @prop: dpll pin properties + * @tracker: tracking object for the acquired reference * * Get existing object of a pin (unique for given arguments) or create new * if doesn't exist yet. @@ -694,7 +742,7 @@ EXPORT_SYMBOL_GPL(unregister_dpll_notifier); */ struct dpll_pin * dpll_pin_get(u64 clock_id, u32 pin_idx, struct module *module, - const struct dpll_pin_properties *prop) + const struct dpll_pin_properties *prop, dpll_tracker *tracker) { struct dpll_pin *pos, *ret = NULL; unsigned long i; @@ -704,13 +752,16 @@ dpll_pin_get(u64 clock_id, u32 pin_idx, struct module *module, if (pos->clock_id == clock_id && pos->pin_idx == pin_idx && pos->module == module) { - __dpll_pin_hold(pos); + __dpll_pin_hold(pos, tracker); ret = pos; break; } } - if (!ret) + if (!ret) { ret = dpll_pin_alloc(clock_id, pin_idx, module, prop); + if (!IS_ERR(ret)) + dpll_pin_tracker_alloc(ret, tracker); + } mutex_unlock(&dpll_lock); return ret; @@ -720,15 +771,16 @@ EXPORT_SYMBOL_GPL(dpll_pin_get); /** * dpll_pin_put - decrease the refcount and free memory if possible * @pin: pointer to a pin to be put + * @tracker: tracking object for the acquired reference * * Drop reference for a pin, if all references are gone, delete pin object. * * Context: Acquires a lock (dpll_lock) */ -void dpll_pin_put(struct dpll_pin *pin) +void dpll_pin_put(struct dpll_pin *pin, dpll_tracker *tracker) { mutex_lock(&dpll_lock); - __dpll_pin_put(pin); + __dpll_pin_put(pin, tracker); mutex_unlock(&dpll_lock); } EXPORT_SYMBOL_GPL(dpll_pin_put); @@ -752,6 +804,7 @@ EXPORT_SYMBOL_GPL(dpll_pin_fwnode_set); /** * fwnode_dpll_pin_find - find dpll pin by firmware node reference * @fwnode: reference to firmware node + * @tracker: tracking object for the acquired reference * * Get existing object of a pin that is associated with given firmware node * reference. @@ -761,7 +814,8 @@ EXPORT_SYMBOL_GPL(dpll_pin_fwnode_set); * * valid dpll_pin pointer on success * * NULL when no such pin exists */ -struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode) +struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode, + dpll_tracker *tracker) { struct dpll_pin *pin, *ret = NULL; unsigned long index; @@ -769,7 +823,7 @@ struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode) mutex_lock(&dpll_lock); xa_for_each(&dpll_pin_xa, index, pin) { if (pin->fwnode == fwnode) { - __dpll_pin_hold(pin); + __dpll_pin_hold(pin, tracker); ret = pin; break; } diff --git a/drivers/dpll/dpll_core.h b/drivers/dpll/dpll_core.h index b7b4bb251f739..71ac88ef20172 100644 --- a/drivers/dpll/dpll_core.h +++ b/drivers/dpll/dpll_core.h @@ -10,6 +10,7 @@ #include <linux/dpll.h> #include <linux/list.h> #include <linux/refcount.h> +#include <linux/ref_tracker.h> #include "dpll_nl.h" #define DPLL_REGISTERED XA_MARK_1 @@ -23,6 +24,7 @@ * @type: type of a dpll * @pin_refs: stores pins registered within a dpll * @refcount: refcount + * @refcnt_tracker: ref_tracker directory for debugging reference leaks * @registration_list: list of registered ops and priv data of dpll owners **/ struct dpll_device { @@ -33,6 +35,7 @@ struct dpll_device { enum dpll_type type; struct xarray pin_refs; refcount_t refcount; + struct ref_tracker_dir refcnt_tracker; struct list_head registration_list; }; @@ -48,6 +51,7 @@ struct dpll_device { * @ref_sync_pins: hold references to pins for Reference SYNC feature * @prop: pin properties copied from the registerer * @refcount: refcount + * @refcnt_tracker: ref_tracker directory for debugging reference leaks * @rcu: rcu_head for kfree_rcu() **/ struct dpll_pin { @@ -61,6 +65,7 @@ struct dpll_pin { struct xarray ref_sync_pins; struct dpll_pin_properties prop; refcount_t refcount; + struct ref_tracker_dir refcnt_tracker; struct rcu_head rcu; }; diff --git a/drivers/dpll/zl3073x/dpll.c b/drivers/dpll/zl3073x/dpll.c index 9eed21088adac..8788bcab7ec53 100644 --- a/drivers/dpll/zl3073x/dpll.c +++ b/drivers/dpll/zl3073x/dpll.c @@ -1480,7 +1480,7 @@ zl3073x_dpll_pin_register(struct zl3073x_dpll_pin *pin, u32 index) /* Create or get existing DPLL pin */ pin->dpll_pin = dpll_pin_get(zldpll->dev->clock_id, index, THIS_MODULE, - &props->dpll_props); + &props->dpll_props, NULL); if (IS_ERR(pin->dpll_pin)) { rc = PTR_ERR(pin->dpll_pin); goto err_pin_get; @@ -1503,7 +1503,7 @@ zl3073x_dpll_pin_register(struct zl3073x_dpll_pin *pin, u32 index) return 0; err_register: - dpll_pin_put(pin->dpll_pin); + dpll_pin_put(pin->dpll_pin, NULL); err_prio_get: pin->dpll_pin = NULL; err_pin_get: @@ -1534,7 +1534,7 @@ zl3073x_dpll_pin_unregister(struct zl3073x_dpll_pin *pin) /* Unregister the pin */ dpll_pin_unregister(zldpll->dpll_dev, pin->dpll_pin, ops, pin); - dpll_pin_put(pin->dpll_pin); + dpll_pin_put(pin->dpll_pin, NULL); pin->dpll_pin = NULL; } @@ -1708,7 +1708,7 @@ zl3073x_dpll_device_register(struct zl3073x_dpll *zldpll) dpll_mode_refsel); zldpll->dpll_dev = dpll_device_get(zldev->clock_id, zldpll->id, - THIS_MODULE); + THIS_MODULE, NULL); if (IS_ERR(zldpll->dpll_dev)) { rc = PTR_ERR(zldpll->dpll_dev); zldpll->dpll_dev = NULL; @@ -1720,7 +1720,7 @@ zl3073x_dpll_device_register(struct zl3073x_dpll *zldpll) zl3073x_prop_dpll_type_get(zldev, zldpll->id), &zl3073x_dpll_device_ops, zldpll); if (rc) { - dpll_device_put(zldpll->dpll_dev); + dpll_device_put(zldpll->dpll_dev, NULL); zldpll->dpll_dev = NULL; } @@ -1743,7 +1743,7 @@ zl3073x_dpll_device_unregister(struct zl3073x_dpll *zldpll) dpll_device_unregister(zldpll->dpll_dev, &zl3073x_dpll_device_ops, zldpll); - dpll_device_put(zldpll->dpll_dev); + dpll_device_put(zldpll->dpll_dev, NULL); zldpll->dpll_dev = NULL; } diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.c b/drivers/net/ethernet/intel/ice/ice_dpll.c index 53b54e395a2ed..64b7b045ecd58 100644 --- a/drivers/net/ethernet/intel/ice/ice_dpll.c +++ b/drivers/net/ethernet/intel/ice/ice_dpll.c @@ -2814,7 +2814,7 @@ static void ice_dpll_release_pins(struct ice_dpll_pin *pins, int count) int i; for (i = 0; i < count; i++) - dpll_pin_put(pins[i].pin); + dpll_pin_put(pins[i].pin, NULL); } /** @@ -2840,7 +2840,7 @@ ice_dpll_get_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, for (i = 0; i < count; i++) { pins[i].pin = dpll_pin_get(clock_id, i + start_idx, THIS_MODULE, - &pins[i].prop); + &pins[i].prop, NULL); if (IS_ERR(pins[i].pin)) { ret = PTR_ERR(pins[i].pin); goto release_pins; @@ -2851,7 +2851,7 @@ ice_dpll_get_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, release_pins: while (--i >= 0) - dpll_pin_put(pins[i].pin); + dpll_pin_put(pins[i].pin, NULL); return ret; } @@ -3037,7 +3037,7 @@ static void ice_dpll_deinit_rclk_pin(struct ice_pf *pf) if (WARN_ON_ONCE(!vsi || !vsi->netdev)) return; dpll_netdev_pin_clear(vsi->netdev); - dpll_pin_put(rclk->pin); + dpll_pin_put(rclk->pin, NULL); } /** @@ -3247,7 +3247,7 @@ ice_dpll_deinit_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu) { if (cgu) dpll_device_unregister(d->dpll, d->ops, d); - dpll_device_put(d->dpll); + dpll_device_put(d->dpll, NULL); } /** @@ -3271,7 +3271,7 @@ ice_dpll_init_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu, u64 clock_id = pf->dplls.clock_id; int ret; - d->dpll = dpll_device_get(clock_id, d->dpll_idx, THIS_MODULE); + d->dpll = dpll_device_get(clock_id, d->dpll_idx, THIS_MODULE, NULL); if (IS_ERR(d->dpll)) { ret = PTR_ERR(d->dpll); dev_err(ice_pf_to_dev(pf), @@ -3287,7 +3287,7 @@ ice_dpll_init_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu, ice_dpll_update_state(pf, d, true); ret = dpll_device_register(d->dpll, type, ops, d); if (ret) { - dpll_device_put(d->dpll); + dpll_device_put(d->dpll, NULL); return ret; } d->ops = ops; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dpll.c b/drivers/net/ethernet/mellanox/mlx5/core/dpll.c index 3ea8a1766ae28..541d83e5d7183 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/dpll.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/dpll.c @@ -438,7 +438,7 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev, auxiliary_set_drvdata(adev, mdpll); /* Multiple mdev instances might share one DPLL device. */ - mdpll->dpll = dpll_device_get(clock_id, 0, THIS_MODULE); + mdpll->dpll = dpll_device_get(clock_id, 0, THIS_MODULE, NULL); if (IS_ERR(mdpll->dpll)) { err = PTR_ERR(mdpll->dpll); goto err_free_mdpll; @@ -451,7 +451,8 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev, /* Multiple mdev instances might share one DPLL pin. */ mdpll->dpll_pin = dpll_pin_get(clock_id, mlx5_get_dev_index(mdev), - THIS_MODULE, &mlx5_dpll_pin_properties); + THIS_MODULE, &mlx5_dpll_pin_properties, + NULL); if (IS_ERR(mdpll->dpll_pin)) { err = PTR_ERR(mdpll->dpll_pin); goto err_unregister_dpll_device; @@ -479,11 +480,11 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev, dpll_pin_unregister(mdpll->dpll, mdpll->dpll_pin, &mlx5_dpll_pins_ops, mdpll); err_put_dpll_pin: - dpll_pin_put(mdpll->dpll_pin); + dpll_pin_put(mdpll->dpll_pin, NULL); err_unregister_dpll_device: dpll_device_unregister(mdpll->dpll, &mlx5_dpll_device_ops, mdpll); err_put_dpll_device: - dpll_device_put(mdpll->dpll); + dpll_device_put(mdpll->dpll, NULL); err_free_mdpll: kfree(mdpll); return err; @@ -499,9 +500,9 @@ static void mlx5_dpll_remove(struct auxiliary_device *adev) destroy_workqueue(mdpll->wq); dpll_pin_unregister(mdpll->dpll, mdpll->dpll_pin, &mlx5_dpll_pins_ops, mdpll); - dpll_pin_put(mdpll->dpll_pin); + dpll_pin_put(mdpll->dpll_pin, NULL); dpll_device_unregister(mdpll->dpll, &mlx5_dpll_device_ops, mdpll); - dpll_device_put(mdpll->dpll); + dpll_device_put(mdpll->dpll, NULL); kfree(mdpll); mlx5_dpll_synce_status_set(mdev, diff --git a/drivers/ptp/ptp_ocp.c b/drivers/ptp/ptp_ocp.c index 65fe05cac8c42..f39b3966b3e8c 100644 --- a/drivers/ptp/ptp_ocp.c +++ b/drivers/ptp/ptp_ocp.c @@ -4788,7 +4788,7 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) devlink_register(devlink); clkid = pci_get_dsn(pdev); - bp->dpll = dpll_device_get(clkid, 0, THIS_MODULE); + bp->dpll = dpll_device_get(clkid, 0, THIS_MODULE, NULL); if (IS_ERR(bp->dpll)) { err = PTR_ERR(bp->dpll); dev_err(&pdev->dev, "dpll_device_alloc failed\n"); @@ -4800,7 +4800,8 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) goto out; for (i = 0; i < OCP_SMA_NUM; i++) { - bp->sma[i].dpll_pin = dpll_pin_get(clkid, i, THIS_MODULE, &bp->sma[i].dpll_prop); + bp->sma[i].dpll_pin = dpll_pin_get(clkid, i, THIS_MODULE, + &bp->sma[i].dpll_prop, NULL); if (IS_ERR(bp->sma[i].dpll_pin)) { err = PTR_ERR(bp->sma[i].dpll_pin); goto out_dpll; @@ -4809,7 +4810,7 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) err = dpll_pin_register(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]); if (err) { - dpll_pin_put(bp->sma[i].dpll_pin); + dpll_pin_put(bp->sma[i].dpll_pin, NULL); goto out_dpll; } } @@ -4819,9 +4820,9 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) out_dpll: while (i--) { dpll_pin_unregister(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]); - dpll_pin_put(bp->sma[i].dpll_pin); + dpll_pin_put(bp->sma[i].dpll_pin, NULL); } - dpll_device_put(bp->dpll); + dpll_device_put(bp->dpll, NULL); out: ptp_ocp_detach(bp); out_disable: @@ -4842,11 +4843,11 @@ ptp_ocp_remove(struct pci_dev *pdev) for (i = 0; i < OCP_SMA_NUM; i++) { if (bp->sma[i].dpll_pin) { dpll_pin_unregister(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]); - dpll_pin_put(bp->sma[i].dpll_pin); + dpll_pin_put(bp->sma[i].dpll_pin, NULL); } } dpll_device_unregister(bp->dpll, &dpll_ops, bp); - dpll_device_put(bp->dpll); + dpll_device_put(bp->dpll, NULL); devlink_unregister(devlink); ptp_ocp_detach(bp); pci_disable_device(pdev); diff --git a/include/linux/dpll.h b/include/linux/dpll.h index 8fff048131f1d..5c80cdab0c180 100644 --- a/include/linux/dpll.h +++ b/include/linux/dpll.h @@ -18,6 +18,7 @@ struct dpll_device; struct dpll_pin; struct dpll_pin_esync; struct fwnode_handle; +struct ref_tracker; struct dpll_device_ops { int (*mode_get)(const struct dpll_device *dpll, void *dpll_priv, @@ -173,6 +174,12 @@ struct dpll_pin_properties { u32 phase_gran; }; +#ifdef CONFIG_DPLL_REFCNT_TRACKER +typedef struct ref_tracker *dpll_tracker; +#else +typedef struct {} dpll_tracker; +#endif + #define DPLL_DEVICE_CREATED 1 #define DPLL_DEVICE_DELETED 2 #define DPLL_DEVICE_CHANGED 3 @@ -205,7 +212,8 @@ size_t dpll_netdev_pin_handle_size(const struct net_device *dev); int dpll_netdev_add_pin_handle(struct sk_buff *msg, const struct net_device *dev); -struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode); +struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode, + dpll_tracker *tracker); #else static inline void dpll_netdev_pin_set(struct net_device *dev, struct dpll_pin *dpll_pin) { } @@ -223,16 +231,17 @@ dpll_netdev_add_pin_handle(struct sk_buff *msg, const struct net_device *dev) } static inline struct dpll_pin * -fwnode_dpll_pin_find(struct fwnode_handle *fwnode) +fwnode_dpll_pin_find(struct fwnode_handle *fwnode, dpll_tracker *tracker); { return NULL; } #endif struct dpll_device * -dpll_device_get(u64 clock_id, u32 dev_driver_id, struct module *module); +dpll_device_get(u64 clock_id, u32 dev_driver_id, struct module *module, + dpll_tracker *tracker); -void dpll_device_put(struct dpll_device *dpll); +void dpll_device_put(struct dpll_device *dpll, dpll_tracker *tracker); int dpll_device_register(struct dpll_device *dpll, enum dpll_type type, const struct dpll_device_ops *ops, void *priv); @@ -244,7 +253,7 @@ void dpll_device_unregister(struct dpll_device *dpll, struct dpll_pin * dpll_pin_get(u64 clock_id, u32 dev_driver_id, struct module *module, - const struct dpll_pin_properties *prop); + const struct dpll_pin_properties *prop, dpll_tracker *tracker); int dpll_pin_register(struct dpll_device *dpll, struct dpll_pin *pin, const struct dpll_pin_ops *ops, void *priv); @@ -252,7 +261,7 @@ int dpll_pin_register(struct dpll_device *dpll, struct dpll_pin *pin, void dpll_pin_unregister(struct dpll_device *dpll, struct dpll_pin *pin, const struct dpll_pin_ops *ops, void *priv); -void dpll_pin_put(struct dpll_pin *pin); +void dpll_pin_put(struct dpll_pin *pin, dpll_tracker *tracker); void dpll_pin_fwnode_set(struct dpll_pin *pin, struct fwnode_handle *fwnode); -- 2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:36 +0100", "thread_id": "20260202171638.17427-5-ivecera@redhat.com.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
This series introduces Synchronous Ethernet (SyncE) support for the Intel E825-C Ethernet controller. Unlike previous generations where DPLL connections were implicitly assumed, the E825-C architecture relies on the platform firmware (ACPI) to describe the physical connections between the Ethernet controller and external DPLLs (such as the ZL3073x). To accommodate this, the series extends the DPLL subsystem to support firmware node (fwnode) associations, asynchronous discovery via notifiers, and dynamic pin management. Additionally, a significant refactor of the DPLL reference counting logic is included to ensure robustness and debuggability. DPLL Core Extensions: * Firmware Node Association: Pins can now be associated with a struct fwnode_handle after allocation via dpll_pin_fwnode_set(). This allows drivers to link pin objects with their corresponding DT/ACPI nodes. * Asynchronous Notifiers: A raw notifier chain is added to the DPLL core. This allows the Ethernet driver to subscribe to events and react when the platform DPLL driver registers the parent pins, resolving probe ordering dependencies. * Dynamic Indexing: Drivers can now request DPLL_PIN_IDX_UNSPEC to have the core automatically allocate a unique pin index. Reference Counting & Debugging: * Refactor: The reference counting logic in the core is consolidated. Internal list management helpers now automatically handle hold/put operations, removing fragile open-coded logic in the registration paths. * Reference Tracking: A new Kconfig option DPLL_REFCNT_TRACKER is added. This allows developers to instrument and debug reference leaks by recording stack traces for every get/put operation. Driver Updates: * zl3073x: Updated to associate pins with fwnode handles using the new setter and support the 'mux' pin type. * ice: Implements the E825-C specific hardware configuration for SyncE (CGU registers). It utilizes the new notifier and fwnode APIs to dynamically discover and attach to the platform DPLLs. Patch Summary: Patch 1: DPLL Core (fwnode association). Patch 2: Driver zl3073x (Set fwnode). Patch 3-4: DPLL Core (Notifiers and dynamic IDs). Patch 5: Driver zl3073x (Mux type). Patch 6: DPLL Core (Refcount refactor). Patch 7-8: Refcount tracking infrastructure and driver updates. Patch 9: Driver ice (E825-C SyncE logic). Changes in v4: * Fixed documentation and function stub issues found by AI Arkadiusz Kubalewski (1): ice: dpll: Support E825-C SyncE and dynamic pin discovery Ivan Vecera (7): dpll: Allow associating dpll pin with a firmware node dpll: zl3073x: Associate pin with fwnode handle dpll: Support dynamic pin index allocation dpll: zl3073x: Add support for mux pin type dpll: Enhance and consolidate reference counting logic dpll: Add reference count tracking support drivers: Add support for DPLL reference count tracking Petr Oros (1): dpll: Add notifier chain for dpll events drivers/dpll/Kconfig | 15 + drivers/dpll/dpll_core.c | 288 ++++++- drivers/dpll/dpll_core.h | 11 + drivers/dpll/dpll_netlink.c | 6 + drivers/dpll/zl3073x/dpll.c | 15 +- drivers/dpll/zl3073x/dpll.h | 2 + drivers/dpll/zl3073x/prop.c | 2 + drivers/net/ethernet/intel/ice/ice_dpll.c | 755 +++++++++++++++--- drivers/net/ethernet/intel/ice/ice_dpll.h | 30 + drivers/net/ethernet/intel/ice/ice_lib.c | 3 + drivers/net/ethernet/intel/ice/ice_ptp.c | 32 + drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 9 +- drivers/net/ethernet/intel/ice/ice_tspll.c | 217 +++++ drivers/net/ethernet/intel/ice/ice_tspll.h | 13 +- drivers/net/ethernet/intel/ice/ice_type.h | 6 + .../net/ethernet/mellanox/mlx5/core/dpll.c | 16 +- drivers/ptp/ptp_ocp.c | 18 +- include/linux/dpll.h | 59 +- 18 files changed, 1347 insertions(+), 150 deletions(-) -- 2.52.0
Update existing DPLL drivers to utilize the DPLL reference count tracking infrastructure. Add dpll_tracker fields to the drivers' internal device and pin structures. Pass pointers to these trackers when calling dpll_device_get/put() and dpll_pin_get/put(). This allows developers to inspect the specific references held by this driver via debugfs when CONFIG_DPLL_REFCNT_TRACKER is enabled, aiding in the debugging of resource leaks. Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Signed-off-by: Ivan Vecera <ivecera@redhat.com> --- drivers/dpll/zl3073x/dpll.c | 14 ++++++++------ drivers/dpll/zl3073x/dpll.h | 2 ++ drivers/net/ethernet/intel/ice/ice_dpll.c | 15 ++++++++------- drivers/net/ethernet/intel/ice/ice_dpll.h | 4 ++++ drivers/net/ethernet/mellanox/mlx5/core/dpll.c | 15 +++++++++------ drivers/ptp/ptp_ocp.c | 17 ++++++++++------- 6 files changed, 41 insertions(+), 26 deletions(-) diff --git a/drivers/dpll/zl3073x/dpll.c b/drivers/dpll/zl3073x/dpll.c index 8788bcab7ec53..a99d143a7acde 100644 --- a/drivers/dpll/zl3073x/dpll.c +++ b/drivers/dpll/zl3073x/dpll.c @@ -29,6 +29,7 @@ * @list: this DPLL pin list entry * @dpll: DPLL the pin is registered to * @dpll_pin: pointer to registered dpll_pin + * @tracker: tracking object for the acquired reference * @label: package label * @dir: pin direction * @id: pin id @@ -44,6 +45,7 @@ struct zl3073x_dpll_pin { struct list_head list; struct zl3073x_dpll *dpll; struct dpll_pin *dpll_pin; + dpll_tracker tracker; char label[8]; enum dpll_pin_direction dir; u8 id; @@ -1480,7 +1482,7 @@ zl3073x_dpll_pin_register(struct zl3073x_dpll_pin *pin, u32 index) /* Create or get existing DPLL pin */ pin->dpll_pin = dpll_pin_get(zldpll->dev->clock_id, index, THIS_MODULE, - &props->dpll_props, NULL); + &props->dpll_props, &pin->tracker); if (IS_ERR(pin->dpll_pin)) { rc = PTR_ERR(pin->dpll_pin); goto err_pin_get; @@ -1503,7 +1505,7 @@ zl3073x_dpll_pin_register(struct zl3073x_dpll_pin *pin, u32 index) return 0; err_register: - dpll_pin_put(pin->dpll_pin, NULL); + dpll_pin_put(pin->dpll_pin, &pin->tracker); err_prio_get: pin->dpll_pin = NULL; err_pin_get: @@ -1534,7 +1536,7 @@ zl3073x_dpll_pin_unregister(struct zl3073x_dpll_pin *pin) /* Unregister the pin */ dpll_pin_unregister(zldpll->dpll_dev, pin->dpll_pin, ops, pin); - dpll_pin_put(pin->dpll_pin, NULL); + dpll_pin_put(pin->dpll_pin, &pin->tracker); pin->dpll_pin = NULL; } @@ -1708,7 +1710,7 @@ zl3073x_dpll_device_register(struct zl3073x_dpll *zldpll) dpll_mode_refsel); zldpll->dpll_dev = dpll_device_get(zldev->clock_id, zldpll->id, - THIS_MODULE, NULL); + THIS_MODULE, &zldpll->tracker); if (IS_ERR(zldpll->dpll_dev)) { rc = PTR_ERR(zldpll->dpll_dev); zldpll->dpll_dev = NULL; @@ -1720,7 +1722,7 @@ zl3073x_dpll_device_register(struct zl3073x_dpll *zldpll) zl3073x_prop_dpll_type_get(zldev, zldpll->id), &zl3073x_dpll_device_ops, zldpll); if (rc) { - dpll_device_put(zldpll->dpll_dev, NULL); + dpll_device_put(zldpll->dpll_dev, &zldpll->tracker); zldpll->dpll_dev = NULL; } @@ -1743,7 +1745,7 @@ zl3073x_dpll_device_unregister(struct zl3073x_dpll *zldpll) dpll_device_unregister(zldpll->dpll_dev, &zl3073x_dpll_device_ops, zldpll); - dpll_device_put(zldpll->dpll_dev, NULL); + dpll_device_put(zldpll->dpll_dev, &zldpll->tracker); zldpll->dpll_dev = NULL; } diff --git a/drivers/dpll/zl3073x/dpll.h b/drivers/dpll/zl3073x/dpll.h index e8c39b44b356c..c65c798c37927 100644 --- a/drivers/dpll/zl3073x/dpll.h +++ b/drivers/dpll/zl3073x/dpll.h @@ -18,6 +18,7 @@ * @check_count: periodic check counter * @phase_monitor: is phase offset monitor enabled * @dpll_dev: pointer to registered DPLL device + * @tracker: tracking object for the acquired reference * @lock_status: last saved DPLL lock status * @pins: list of pins * @change_work: device change notification work @@ -31,6 +32,7 @@ struct zl3073x_dpll { u8 check_count; bool phase_monitor; struct dpll_device *dpll_dev; + dpll_tracker tracker; enum dpll_lock_status lock_status; struct list_head pins; struct work_struct change_work; diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.c b/drivers/net/ethernet/intel/ice/ice_dpll.c index 64b7b045ecd58..4eca62688d834 100644 --- a/drivers/net/ethernet/intel/ice/ice_dpll.c +++ b/drivers/net/ethernet/intel/ice/ice_dpll.c @@ -2814,7 +2814,7 @@ static void ice_dpll_release_pins(struct ice_dpll_pin *pins, int count) int i; for (i = 0; i < count; i++) - dpll_pin_put(pins[i].pin, NULL); + dpll_pin_put(pins[i].pin, &pins[i].tracker); } /** @@ -2840,7 +2840,7 @@ ice_dpll_get_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, for (i = 0; i < count; i++) { pins[i].pin = dpll_pin_get(clock_id, i + start_idx, THIS_MODULE, - &pins[i].prop, NULL); + &pins[i].prop, &pins[i].tracker); if (IS_ERR(pins[i].pin)) { ret = PTR_ERR(pins[i].pin); goto release_pins; @@ -2851,7 +2851,7 @@ ice_dpll_get_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, release_pins: while (--i >= 0) - dpll_pin_put(pins[i].pin, NULL); + dpll_pin_put(pins[i].pin, &pins[i].tracker); return ret; } @@ -3037,7 +3037,7 @@ static void ice_dpll_deinit_rclk_pin(struct ice_pf *pf) if (WARN_ON_ONCE(!vsi || !vsi->netdev)) return; dpll_netdev_pin_clear(vsi->netdev); - dpll_pin_put(rclk->pin, NULL); + dpll_pin_put(rclk->pin, &rclk->tracker); } /** @@ -3247,7 +3247,7 @@ ice_dpll_deinit_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu) { if (cgu) dpll_device_unregister(d->dpll, d->ops, d); - dpll_device_put(d->dpll, NULL); + dpll_device_put(d->dpll, &d->tracker); } /** @@ -3271,7 +3271,8 @@ ice_dpll_init_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu, u64 clock_id = pf->dplls.clock_id; int ret; - d->dpll = dpll_device_get(clock_id, d->dpll_idx, THIS_MODULE, NULL); + d->dpll = dpll_device_get(clock_id, d->dpll_idx, THIS_MODULE, + &d->tracker); if (IS_ERR(d->dpll)) { ret = PTR_ERR(d->dpll); dev_err(ice_pf_to_dev(pf), @@ -3287,7 +3288,7 @@ ice_dpll_init_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu, ice_dpll_update_state(pf, d, true); ret = dpll_device_register(d->dpll, type, ops, d); if (ret) { - dpll_device_put(d->dpll, NULL); + dpll_device_put(d->dpll, &d->tracker); return ret; } d->ops = ops; diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.h b/drivers/net/ethernet/intel/ice/ice_dpll.h index c0da03384ce91..63fac6510df6e 100644 --- a/drivers/net/ethernet/intel/ice/ice_dpll.h +++ b/drivers/net/ethernet/intel/ice/ice_dpll.h @@ -23,6 +23,7 @@ enum ice_dpll_pin_sw { /** ice_dpll_pin - store info about pins * @pin: dpll pin structure * @pf: pointer to pf, which has registered the dpll_pin + * @tracker: reference count tracker * @idx: ice pin private idx * @num_parents: hols number of parent pins * @parent_idx: hold indexes of parent pins @@ -37,6 +38,7 @@ enum ice_dpll_pin_sw { struct ice_dpll_pin { struct dpll_pin *pin; struct ice_pf *pf; + dpll_tracker tracker; u8 idx; u8 num_parents; u8 parent_idx[ICE_DPLL_RCLK_NUM_MAX]; @@ -58,6 +60,7 @@ struct ice_dpll_pin { /** ice_dpll - store info required for DPLL control * @dpll: pointer to dpll dev * @pf: pointer to pf, which has registered the dpll_device + * @tracker: reference count tracker * @dpll_idx: index of dpll on the NIC * @input_idx: currently selected input index * @prev_input_idx: previously selected input index @@ -76,6 +79,7 @@ struct ice_dpll_pin { struct ice_dpll { struct dpll_device *dpll; struct ice_pf *pf; + dpll_tracker tracker; u8 dpll_idx; u8 input_idx; u8 prev_input_idx; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dpll.c b/drivers/net/ethernet/mellanox/mlx5/core/dpll.c index 541d83e5d7183..3981dd81d4c17 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/dpll.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/dpll.c @@ -9,7 +9,9 @@ */ struct mlx5_dpll { struct dpll_device *dpll; + dpll_tracker dpll_tracker; struct dpll_pin *dpll_pin; + dpll_tracker pin_tracker; struct mlx5_core_dev *mdev; struct workqueue_struct *wq; struct delayed_work work; @@ -438,7 +440,8 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev, auxiliary_set_drvdata(adev, mdpll); /* Multiple mdev instances might share one DPLL device. */ - mdpll->dpll = dpll_device_get(clock_id, 0, THIS_MODULE, NULL); + mdpll->dpll = dpll_device_get(clock_id, 0, THIS_MODULE, + &mdpll->dpll_tracker); if (IS_ERR(mdpll->dpll)) { err = PTR_ERR(mdpll->dpll); goto err_free_mdpll; @@ -452,7 +455,7 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev, /* Multiple mdev instances might share one DPLL pin. */ mdpll->dpll_pin = dpll_pin_get(clock_id, mlx5_get_dev_index(mdev), THIS_MODULE, &mlx5_dpll_pin_properties, - NULL); + &mdpll->pin_tracker); if (IS_ERR(mdpll->dpll_pin)) { err = PTR_ERR(mdpll->dpll_pin); goto err_unregister_dpll_device; @@ -480,11 +483,11 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev, dpll_pin_unregister(mdpll->dpll, mdpll->dpll_pin, &mlx5_dpll_pins_ops, mdpll); err_put_dpll_pin: - dpll_pin_put(mdpll->dpll_pin, NULL); + dpll_pin_put(mdpll->dpll_pin, &mdpll->pin_tracker); err_unregister_dpll_device: dpll_device_unregister(mdpll->dpll, &mlx5_dpll_device_ops, mdpll); err_put_dpll_device: - dpll_device_put(mdpll->dpll, NULL); + dpll_device_put(mdpll->dpll, &mdpll->dpll_tracker); err_free_mdpll: kfree(mdpll); return err; @@ -500,9 +503,9 @@ static void mlx5_dpll_remove(struct auxiliary_device *adev) destroy_workqueue(mdpll->wq); dpll_pin_unregister(mdpll->dpll, mdpll->dpll_pin, &mlx5_dpll_pins_ops, mdpll); - dpll_pin_put(mdpll->dpll_pin, NULL); + dpll_pin_put(mdpll->dpll_pin, &mdpll->pin_tracker); dpll_device_unregister(mdpll->dpll, &mlx5_dpll_device_ops, mdpll); - dpll_device_put(mdpll->dpll, NULL); + dpll_device_put(mdpll->dpll, &mdpll->dpll_tracker); kfree(mdpll); mlx5_dpll_synce_status_set(mdev, diff --git a/drivers/ptp/ptp_ocp.c b/drivers/ptp/ptp_ocp.c index f39b3966b3e8c..1b16a9c3d7fdc 100644 --- a/drivers/ptp/ptp_ocp.c +++ b/drivers/ptp/ptp_ocp.c @@ -285,6 +285,7 @@ struct ptp_ocp_sma_connector { u8 default_fcn; struct dpll_pin *dpll_pin; struct dpll_pin_properties dpll_prop; + dpll_tracker tracker; }; struct ocp_attr_group { @@ -383,6 +384,7 @@ struct ptp_ocp { struct ptp_ocp_sma_connector sma[OCP_SMA_NUM]; const struct ocp_sma_op *sma_op; struct dpll_device *dpll; + dpll_tracker tracker; int signals_nr; int freq_in_nr; }; @@ -4788,7 +4790,7 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) devlink_register(devlink); clkid = pci_get_dsn(pdev); - bp->dpll = dpll_device_get(clkid, 0, THIS_MODULE, NULL); + bp->dpll = dpll_device_get(clkid, 0, THIS_MODULE, &bp->tracker); if (IS_ERR(bp->dpll)) { err = PTR_ERR(bp->dpll); dev_err(&pdev->dev, "dpll_device_alloc failed\n"); @@ -4801,7 +4803,8 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) for (i = 0; i < OCP_SMA_NUM; i++) { bp->sma[i].dpll_pin = dpll_pin_get(clkid, i, THIS_MODULE, - &bp->sma[i].dpll_prop, NULL); + &bp->sma[i].dpll_prop, + &bp->sma[i].tracker); if (IS_ERR(bp->sma[i].dpll_pin)) { err = PTR_ERR(bp->sma[i].dpll_pin); goto out_dpll; @@ -4810,7 +4813,7 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) err = dpll_pin_register(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]); if (err) { - dpll_pin_put(bp->sma[i].dpll_pin, NULL); + dpll_pin_put(bp->sma[i].dpll_pin, &bp->sma[i].tracker); goto out_dpll; } } @@ -4820,9 +4823,9 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) out_dpll: while (i--) { dpll_pin_unregister(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]); - dpll_pin_put(bp->sma[i].dpll_pin, NULL); + dpll_pin_put(bp->sma[i].dpll_pin, &bp->sma[i].tracker); } - dpll_device_put(bp->dpll, NULL); + dpll_device_put(bp->dpll, &bp->tracker); out: ptp_ocp_detach(bp); out_disable: @@ -4843,11 +4846,11 @@ ptp_ocp_remove(struct pci_dev *pdev) for (i = 0; i < OCP_SMA_NUM; i++) { if (bp->sma[i].dpll_pin) { dpll_pin_unregister(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]); - dpll_pin_put(bp->sma[i].dpll_pin, NULL); + dpll_pin_put(bp->sma[i].dpll_pin, &bp->sma[i].tracker); } } dpll_device_unregister(bp->dpll, &dpll_ops, bp); - dpll_device_put(bp->dpll, NULL); + dpll_device_put(bp->dpll, &bp->tracker); devlink_unregister(devlink); ptp_ocp_detach(bp); pci_disable_device(pdev); -- 2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:37 +0100", "thread_id": "20260202171638.17427-5-ivecera@redhat.com.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
This series introduces Synchronous Ethernet (SyncE) support for the Intel E825-C Ethernet controller. Unlike previous generations where DPLL connections were implicitly assumed, the E825-C architecture relies on the platform firmware (ACPI) to describe the physical connections between the Ethernet controller and external DPLLs (such as the ZL3073x). To accommodate this, the series extends the DPLL subsystem to support firmware node (fwnode) associations, asynchronous discovery via notifiers, and dynamic pin management. Additionally, a significant refactor of the DPLL reference counting logic is included to ensure robustness and debuggability. DPLL Core Extensions: * Firmware Node Association: Pins can now be associated with a struct fwnode_handle after allocation via dpll_pin_fwnode_set(). This allows drivers to link pin objects with their corresponding DT/ACPI nodes. * Asynchronous Notifiers: A raw notifier chain is added to the DPLL core. This allows the Ethernet driver to subscribe to events and react when the platform DPLL driver registers the parent pins, resolving probe ordering dependencies. * Dynamic Indexing: Drivers can now request DPLL_PIN_IDX_UNSPEC to have the core automatically allocate a unique pin index. Reference Counting & Debugging: * Refactor: The reference counting logic in the core is consolidated. Internal list management helpers now automatically handle hold/put operations, removing fragile open-coded logic in the registration paths. * Reference Tracking: A new Kconfig option DPLL_REFCNT_TRACKER is added. This allows developers to instrument and debug reference leaks by recording stack traces for every get/put operation. Driver Updates: * zl3073x: Updated to associate pins with fwnode handles using the new setter and support the 'mux' pin type. * ice: Implements the E825-C specific hardware configuration for SyncE (CGU registers). It utilizes the new notifier and fwnode APIs to dynamically discover and attach to the platform DPLLs. Patch Summary: Patch 1: DPLL Core (fwnode association). Patch 2: Driver zl3073x (Set fwnode). Patch 3-4: DPLL Core (Notifiers and dynamic IDs). Patch 5: Driver zl3073x (Mux type). Patch 6: DPLL Core (Refcount refactor). Patch 7-8: Refcount tracking infrastructure and driver updates. Patch 9: Driver ice (E825-C SyncE logic). Changes in v4: * Fixed documentation and function stub issues found by AI Arkadiusz Kubalewski (1): ice: dpll: Support E825-C SyncE and dynamic pin discovery Ivan Vecera (7): dpll: Allow associating dpll pin with a firmware node dpll: zl3073x: Associate pin with fwnode handle dpll: Support dynamic pin index allocation dpll: zl3073x: Add support for mux pin type dpll: Enhance and consolidate reference counting logic dpll: Add reference count tracking support drivers: Add support for DPLL reference count tracking Petr Oros (1): dpll: Add notifier chain for dpll events drivers/dpll/Kconfig | 15 + drivers/dpll/dpll_core.c | 288 ++++++- drivers/dpll/dpll_core.h | 11 + drivers/dpll/dpll_netlink.c | 6 + drivers/dpll/zl3073x/dpll.c | 15 +- drivers/dpll/zl3073x/dpll.h | 2 + drivers/dpll/zl3073x/prop.c | 2 + drivers/net/ethernet/intel/ice/ice_dpll.c | 755 +++++++++++++++--- drivers/net/ethernet/intel/ice/ice_dpll.h | 30 + drivers/net/ethernet/intel/ice/ice_lib.c | 3 + drivers/net/ethernet/intel/ice/ice_ptp.c | 32 + drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 9 +- drivers/net/ethernet/intel/ice/ice_tspll.c | 217 +++++ drivers/net/ethernet/intel/ice/ice_tspll.h | 13 +- drivers/net/ethernet/intel/ice/ice_type.h | 6 + .../net/ethernet/mellanox/mlx5/core/dpll.c | 16 +- drivers/ptp/ptp_ocp.c | 18 +- include/linux/dpll.h | 59 +- 18 files changed, 1347 insertions(+), 150 deletions(-) -- 2.52.0
From: Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com> Implement SyncE support for the E825-C Ethernet controller using the DPLL subsystem. Unlike E810, the E825-C architecture relies on platform firmware (ACPI) to describe connections between the NIC's recovered clock outputs and external DPLL inputs. Implement the following mechanisms to support this architecture: 1. Discovery Mechanism: The driver parses the 'dpll-pins' and 'dpll-pin names' firmware properties to identify the external DPLL pins (parents) corresponding to its RCLK outputs ("rclk0", "rclk1"). It uses fwnode_dpll_pin_find() to locate these parent pins in the DPLL core. 2. Asynchronous Registration: Since the platform DPLL driver (e.g. zl3073x) may probe independently of the network driver, utilize the DPLL notifier chain The driver listens for DPLL_PIN_CREATED events to detect when the parent MUX pins become available, then registers its own Recovered Clock (RCLK) pins as children of those parents. 3. Hardware Configuration: Implement the specific register access logic for E825-C CGU (Clock Generation Unit) registers (R10, R11). This includes configuring the bypass MUXes and clock dividers required to drive SyncE signals. 4. Split Initialization: Refactor `ice_dpll_init()` to separate the static initialization path of E810 from the dynamic, firmware-driven path required for E825-C. Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Co-developed-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: Ivan Vecera <ivecera@redhat.com> Co-developed-by: Grzegorz Nitka <grzegorz.nitka@intel.com> Signed-off-by: Grzegorz Nitka <grzegorz.nitka@intel.com> Signed-off-by: Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com> --- v3: * DPLL init check in ice_ptp_link_change() * using completion for dpll initization to avoid races with DPLL notifier scheduled works * added parsing of dpll-pin-names and dpll-pins properties v2: * fixed error path in ice_dpll_init_pins_e825() * fixed misleading comment referring 'device tree' --- drivers/net/ethernet/intel/ice/ice_dpll.c | 742 +++++++++++++++++--- drivers/net/ethernet/intel/ice/ice_dpll.h | 26 + drivers/net/ethernet/intel/ice/ice_lib.c | 3 + drivers/net/ethernet/intel/ice/ice_ptp.c | 32 + drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 9 +- drivers/net/ethernet/intel/ice/ice_tspll.c | 217 ++++++ drivers/net/ethernet/intel/ice/ice_tspll.h | 13 +- drivers/net/ethernet/intel/ice/ice_type.h | 6 + 8 files changed, 956 insertions(+), 92 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.c b/drivers/net/ethernet/intel/ice/ice_dpll.c index 4eca62688d834..a8c99e49bfae6 100644 --- a/drivers/net/ethernet/intel/ice/ice_dpll.c +++ b/drivers/net/ethernet/intel/ice/ice_dpll.c @@ -5,6 +5,7 @@ #include "ice_lib.h" #include "ice_trace.h" #include <linux/dpll.h> +#include <linux/property.h> #define ICE_CGU_STATE_ACQ_ERR_THRESHOLD 50 #define ICE_DPLL_PIN_IDX_INVALID 0xff @@ -528,6 +529,92 @@ ice_dpll_pin_disable(struct ice_hw *hw, struct ice_dpll_pin *pin, return ret; } +/** + * ice_dpll_pin_store_state - updates the state of pin in SW bookkeeping + * @pin: pointer to a pin + * @parent: parent pin index + * @state: pin state (connected or disconnected) + */ +static void +ice_dpll_pin_store_state(struct ice_dpll_pin *pin, int parent, bool state) +{ + pin->state[parent] = state ? DPLL_PIN_STATE_CONNECTED : + DPLL_PIN_STATE_DISCONNECTED; +} + +/** + * ice_dpll_rclk_update_e825c - updates the state of rclk pin on e825c device + * @pf: private board struct + * @pin: pointer to a pin + * + * Update struct holding pin states info, states are separate for each parent + * + * Context: Called under pf->dplls.lock + * Return: + * * 0 - OK + * * negative - error + */ +static int ice_dpll_rclk_update_e825c(struct ice_pf *pf, + struct ice_dpll_pin *pin) +{ + u8 rclk_bits; + int err; + u32 reg; + + if (pf->dplls.rclk.num_parents > ICE_SYNCE_CLK_NUM) + return -EINVAL; + + err = ice_read_cgu_reg(&pf->hw, ICE_CGU_R10, &reg); + if (err) + return err; + + rclk_bits = FIELD_GET(ICE_CGU_R10_SYNCE_S_REF_CLK, reg); + ice_dpll_pin_store_state(pin, ICE_SYNCE_CLK0, rclk_bits == + (pf->ptp.port.port_num + ICE_CGU_BYPASS_MUX_OFFSET_E825C)); + + err = ice_read_cgu_reg(&pf->hw, ICE_CGU_R11, &reg); + if (err) + return err; + + rclk_bits = FIELD_GET(ICE_CGU_R11_SYNCE_S_BYP_CLK, reg); + ice_dpll_pin_store_state(pin, ICE_SYNCE_CLK1, rclk_bits == + (pf->ptp.port.port_num + ICE_CGU_BYPASS_MUX_OFFSET_E825C)); + + return 0; +} + +/** + * ice_dpll_rclk_update - updates the state of rclk pin on a device + * @pf: private board struct + * @pin: pointer to a pin + * @port_num: port number + * + * Update struct holding pin states info, states are separate for each parent + * + * Context: Called under pf->dplls.lock + * Return: + * * 0 - OK + * * negative - error + */ +static int ice_dpll_rclk_update(struct ice_pf *pf, struct ice_dpll_pin *pin, + u8 port_num) +{ + int ret; + + for (u8 parent = 0; parent < pf->dplls.rclk.num_parents; parent++) { + ret = ice_aq_get_phy_rec_clk_out(&pf->hw, &parent, &port_num, + &pin->flags[parent], NULL); + if (ret) + return ret; + + ice_dpll_pin_store_state(pin, parent, + ICE_AQC_GET_PHY_REC_CLK_OUT_OUT_EN & + pin->flags[parent]); + } + + return 0; +} + /** * ice_dpll_sw_pins_update - update status of all SW pins * @pf: private board struct @@ -668,22 +755,14 @@ ice_dpll_pin_state_update(struct ice_pf *pf, struct ice_dpll_pin *pin, } break; case ICE_DPLL_PIN_TYPE_RCLK_INPUT: - for (parent = 0; parent < pf->dplls.rclk.num_parents; - parent++) { - u8 p = parent; - - ret = ice_aq_get_phy_rec_clk_out(&pf->hw, &p, - &port_num, - &pin->flags[parent], - NULL); + if (pf->hw.mac_type == ICE_MAC_GENERIC_3K_E825) { + ret = ice_dpll_rclk_update_e825c(pf, pin); + if (ret) + goto err; + } else { + ret = ice_dpll_rclk_update(pf, pin, port_num); if (ret) goto err; - if (ICE_AQC_GET_PHY_REC_CLK_OUT_OUT_EN & - pin->flags[parent]) - pin->state[parent] = DPLL_PIN_STATE_CONNECTED; - else - pin->state[parent] = - DPLL_PIN_STATE_DISCONNECTED; } break; case ICE_DPLL_PIN_TYPE_SOFTWARE: @@ -1842,6 +1921,40 @@ ice_dpll_phase_offset_get(const struct dpll_pin *pin, void *pin_priv, return 0; } +/** + * ice_dpll_synce_update_e825c - setting PHY recovered clock pins on e825c + * @hw: Pointer to the HW struct + * @ena: true if enable, false in disable + * @port_num: port number + * @output: output pin, we have two in E825C + * + * DPLL subsystem callback. Set proper signals to recover clock from port. + * + * Context: Called under pf->dplls.lock + * Return: + * * 0 - success + * * negative - error + */ +static int ice_dpll_synce_update_e825c(struct ice_hw *hw, bool ena, + u32 port_num, enum ice_synce_clk output) +{ + int err; + + /* configure the mux to deliver proper signal to DPLL from the MUX */ + err = ice_tspll_cfg_bypass_mux_e825c(hw, ena, port_num, output); + if (err) + return err; + + err = ice_tspll_cfg_synce_ethdiv_e825c(hw, output); + if (err) + return err; + + dev_dbg(ice_hw_to_dev(hw), "CLK_SYNCE%u recovered clock: pin %s\n", + output, str_enabled_disabled(ena)); + + return 0; +} + /** * ice_dpll_output_esync_set - callback for setting embedded sync * @pin: pointer to a pin @@ -2263,6 +2376,28 @@ ice_dpll_sw_input_ref_sync_get(const struct dpll_pin *pin, void *pin_priv, state, extack); } +static int +ice_dpll_pin_get_parent_num(struct ice_dpll_pin *pin, + const struct dpll_pin *parent) +{ + int i; + + for (i = 0; i < pin->num_parents; i++) + if (pin->pf->dplls.inputs[pin->parent_idx[i]].pin == parent) + return i; + + return -ENOENT; +} + +static int +ice_dpll_pin_get_parent_idx(struct ice_dpll_pin *pin, + const struct dpll_pin *parent) +{ + int num = ice_dpll_pin_get_parent_num(pin, parent); + + return num < 0 ? num : pin->parent_idx[num]; +} + /** * ice_dpll_rclk_state_on_pin_set - set a state on rclk pin * @pin: pointer to a pin @@ -2286,35 +2421,44 @@ ice_dpll_rclk_state_on_pin_set(const struct dpll_pin *pin, void *pin_priv, enum dpll_pin_state state, struct netlink_ext_ack *extack) { - struct ice_dpll_pin *p = pin_priv, *parent = parent_pin_priv; bool enable = state == DPLL_PIN_STATE_CONNECTED; + struct ice_dpll_pin *p = pin_priv; struct ice_pf *pf = p->pf; + struct ice_hw *hw; int ret = -EINVAL; - u32 hw_idx; + int hw_idx; + + hw = &pf->hw; if (ice_dpll_is_reset(pf, extack)) return -EBUSY; mutex_lock(&pf->dplls.lock); - hw_idx = parent->idx - pf->dplls.base_rclk_idx; - if (hw_idx >= pf->dplls.num_inputs) + hw_idx = ice_dpll_pin_get_parent_idx(p, parent_pin); + if (hw_idx < 0) goto unlock; if ((enable && p->state[hw_idx] == DPLL_PIN_STATE_CONNECTED) || (!enable && p->state[hw_idx] == DPLL_PIN_STATE_DISCONNECTED)) { NL_SET_ERR_MSG_FMT(extack, "pin:%u state:%u on parent:%u already set", - p->idx, state, parent->idx); + p->idx, state, + ice_dpll_pin_get_parent_num(p, parent_pin)); goto unlock; } - ret = ice_aq_set_phy_rec_clk_out(&pf->hw, hw_idx, enable, - &p->freq); + + ret = hw->mac_type == ICE_MAC_GENERIC_3K_E825 ? + ice_dpll_synce_update_e825c(hw, enable, + pf->ptp.port.port_num, + (enum ice_synce_clk)hw_idx) : + ice_aq_set_phy_rec_clk_out(hw, hw_idx, enable, &p->freq); if (ret) NL_SET_ERR_MSG_FMT(extack, "err:%d %s failed to set pin state:%u for pin:%u on parent:%u", ret, - libie_aq_str(pf->hw.adminq.sq_last_status), - state, p->idx, parent->idx); + libie_aq_str(hw->adminq.sq_last_status), + state, p->idx, + ice_dpll_pin_get_parent_num(p, parent_pin)); unlock: mutex_unlock(&pf->dplls.lock); @@ -2344,17 +2488,17 @@ ice_dpll_rclk_state_on_pin_get(const struct dpll_pin *pin, void *pin_priv, enum dpll_pin_state *state, struct netlink_ext_ack *extack) { - struct ice_dpll_pin *p = pin_priv, *parent = parent_pin_priv; + struct ice_dpll_pin *p = pin_priv; struct ice_pf *pf = p->pf; int ret = -EINVAL; - u32 hw_idx; + int hw_idx; if (ice_dpll_is_reset(pf, extack)) return -EBUSY; mutex_lock(&pf->dplls.lock); - hw_idx = parent->idx - pf->dplls.base_rclk_idx; - if (hw_idx >= pf->dplls.num_inputs) + hw_idx = ice_dpll_pin_get_parent_idx(p, parent_pin); + if (hw_idx < 0) goto unlock; ret = ice_dpll_pin_state_update(pf, p, ICE_DPLL_PIN_TYPE_RCLK_INPUT, @@ -2814,7 +2958,8 @@ static void ice_dpll_release_pins(struct ice_dpll_pin *pins, int count) int i; for (i = 0; i < count; i++) - dpll_pin_put(pins[i].pin, &pins[i].tracker); + if (!IS_ERR_OR_NULL(pins[i].pin)) + dpll_pin_put(pins[i].pin, &pins[i].tracker); } /** @@ -2836,10 +2981,14 @@ static int ice_dpll_get_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, int start_idx, int count, u64 clock_id) { + u32 pin_index; int i, ret; for (i = 0; i < count; i++) { - pins[i].pin = dpll_pin_get(clock_id, i + start_idx, THIS_MODULE, + pin_index = start_idx; + if (start_idx != DPLL_PIN_IDX_UNSPEC) + pin_index += i; + pins[i].pin = dpll_pin_get(clock_id, pin_index, THIS_MODULE, &pins[i].prop, &pins[i].tracker); if (IS_ERR(pins[i].pin)) { ret = PTR_ERR(pins[i].pin); @@ -2944,6 +3093,7 @@ ice_dpll_register_pins(struct dpll_device *dpll, struct ice_dpll_pin *pins, /** * ice_dpll_deinit_direct_pins - deinitialize direct pins + * @pf: board private structure * @cgu: if cgu is present and controlled by this NIC * @pins: pointer to pins array * @count: number of pins @@ -2955,7 +3105,8 @@ ice_dpll_register_pins(struct dpll_device *dpll, struct ice_dpll_pin *pins, * Release pins resources to the dpll subsystem. */ static void -ice_dpll_deinit_direct_pins(bool cgu, struct ice_dpll_pin *pins, int count, +ice_dpll_deinit_direct_pins(struct ice_pf *pf, bool cgu, + struct ice_dpll_pin *pins, int count, const struct dpll_pin_ops *ops, struct dpll_device *first, struct dpll_device *second) @@ -3024,14 +3175,14 @@ static void ice_dpll_deinit_rclk_pin(struct ice_pf *pf) { struct ice_dpll_pin *rclk = &pf->dplls.rclk; struct ice_vsi *vsi = ice_get_main_vsi(pf); - struct dpll_pin *parent; + struct ice_dpll_pin *parent; int i; for (i = 0; i < rclk->num_parents; i++) { - parent = pf->dplls.inputs[rclk->parent_idx[i]].pin; - if (!parent) + parent = &pf->dplls.inputs[rclk->parent_idx[i]]; + if (IS_ERR_OR_NULL(parent->pin)) continue; - dpll_pin_on_pin_unregister(parent, rclk->pin, + dpll_pin_on_pin_unregister(parent->pin, rclk->pin, &ice_dpll_rclk_ops, rclk); } if (WARN_ON_ONCE(!vsi || !vsi->netdev)) @@ -3040,60 +3191,213 @@ static void ice_dpll_deinit_rclk_pin(struct ice_pf *pf) dpll_pin_put(rclk->pin, &rclk->tracker); } +static bool ice_dpll_is_fwnode_pin(struct ice_dpll_pin *pin) +{ + return !IS_ERR_OR_NULL(pin->fwnode); +} + +static void ice_dpll_pin_notify_work(struct work_struct *work) +{ + struct ice_dpll_pin_work *w = container_of(work, + struct ice_dpll_pin_work, + work); + struct ice_dpll_pin *pin, *parent = w->pin; + struct ice_pf *pf = parent->pf; + int ret; + + wait_for_completion(&pf->dplls.dpll_init); + if (!test_bit(ICE_FLAG_DPLL, pf->flags)) + return; /* DPLL initialization failed */ + + switch (w->action) { + case DPLL_PIN_CREATED: + if (!IS_ERR_OR_NULL(parent->pin)) { + /* We have already our pin registered */ + goto out; + } + + /* Grab reference on fwnode pin */ + parent->pin = fwnode_dpll_pin_find(parent->fwnode, + &parent->tracker); + if (IS_ERR_OR_NULL(parent->pin)) { + dev_err(ice_pf_to_dev(pf), + "Cannot get fwnode pin reference\n"); + goto out; + } + + /* Register rclk pin */ + pin = &pf->dplls.rclk; + ret = dpll_pin_on_pin_register(parent->pin, pin->pin, + &ice_dpll_rclk_ops, pin); + if (ret) { + dev_err(ice_pf_to_dev(pf), + "Failed to register pin: %pe\n", ERR_PTR(ret)); + dpll_pin_put(parent->pin, &parent->tracker); + parent->pin = NULL; + goto out; + } + break; + case DPLL_PIN_DELETED: + if (IS_ERR_OR_NULL(parent->pin)) { + /* We have already our pin unregistered */ + goto out; + } + + /* Unregister rclk pin */ + pin = &pf->dplls.rclk; + dpll_pin_on_pin_unregister(parent->pin, pin->pin, + &ice_dpll_rclk_ops, pin); + + /* Drop fwnode pin reference */ + dpll_pin_put(parent->pin, &parent->tracker); + parent->pin = NULL; + break; + default: + break; + } +out: + kfree(w); +} + +static int ice_dpll_pin_notify(struct notifier_block *nb, unsigned long action, + void *data) +{ + struct ice_dpll_pin *pin = container_of(nb, struct ice_dpll_pin, nb); + struct dpll_pin_notifier_info *info = data; + struct ice_dpll_pin_work *work; + + if (action != DPLL_PIN_CREATED && action != DPLL_PIN_DELETED) + return NOTIFY_DONE; + + /* Check if the reported pin is this one */ + if (pin->fwnode != info->fwnode) + return NOTIFY_DONE; /* Not this pin */ + + work = kzalloc(sizeof(*work), GFP_KERNEL); + if (!work) + return NOTIFY_DONE; + + INIT_WORK(&work->work, ice_dpll_pin_notify_work); + work->action = action; + work->pin = pin; + + queue_work(pin->pf->dplls.wq, &work->work); + + return NOTIFY_OK; +} + /** - * ice_dpll_init_rclk_pins - initialize recovered clock pin + * ice_dpll_init_pin_common - initialize pin * @pf: board private structure * @pin: pin to register * @start_idx: on which index shall allocation start in dpll subsystem * @ops: callback ops registered with the pins * - * Allocate resource for recovered clock pin in dpll subsystem. Register the - * pin with the parents it has in the info. Register pin with the pf's main vsi - * netdev. + * Allocate resource for given pin in dpll subsystem. Register the pin with + * the parents it has in the info. * * Return: * * 0 - success * * negative - registration failure reason */ static int -ice_dpll_init_rclk_pins(struct ice_pf *pf, struct ice_dpll_pin *pin, - int start_idx, const struct dpll_pin_ops *ops) +ice_dpll_init_pin_common(struct ice_pf *pf, struct ice_dpll_pin *pin, + int start_idx, const struct dpll_pin_ops *ops) { - struct ice_vsi *vsi = ice_get_main_vsi(pf); - struct dpll_pin *parent; + struct ice_dpll_pin *parent; int ret, i; - if (WARN_ON((!vsi || !vsi->netdev))) - return -EINVAL; - ret = ice_dpll_get_pins(pf, pin, start_idx, ICE_DPLL_RCLK_NUM_PER_PF, - pf->dplls.clock_id); + ret = ice_dpll_get_pins(pf, pin, start_idx, 1, pf->dplls.clock_id); if (ret) return ret; - for (i = 0; i < pf->dplls.rclk.num_parents; i++) { - parent = pf->dplls.inputs[pf->dplls.rclk.parent_idx[i]].pin; - if (!parent) { - ret = -ENODEV; - goto unregister_pins; + + for (i = 0; i < pin->num_parents; i++) { + parent = &pf->dplls.inputs[pin->parent_idx[i]]; + if (IS_ERR_OR_NULL(parent->pin)) { + if (!ice_dpll_is_fwnode_pin(parent)) { + ret = -ENODEV; + goto unregister_pins; + } + parent->pin = fwnode_dpll_pin_find(parent->fwnode, + &parent->tracker); + if (IS_ERR_OR_NULL(parent->pin)) { + dev_info(ice_pf_to_dev(pf), + "Mux pin not registered yet\n"); + continue; + } } - ret = dpll_pin_on_pin_register(parent, pf->dplls.rclk.pin, - ops, &pf->dplls.rclk); + ret = dpll_pin_on_pin_register(parent->pin, pin->pin, ops, pin); if (ret) goto unregister_pins; } - dpll_netdev_pin_set(vsi->netdev, pf->dplls.rclk.pin); return 0; unregister_pins: while (i) { - parent = pf->dplls.inputs[pf->dplls.rclk.parent_idx[--i]].pin; - dpll_pin_on_pin_unregister(parent, pf->dplls.rclk.pin, - &ice_dpll_rclk_ops, &pf->dplls.rclk); + parent = &pf->dplls.inputs[pin->parent_idx[--i]]; + if (IS_ERR_OR_NULL(parent->pin)) + continue; + dpll_pin_on_pin_unregister(parent->pin, pin->pin, ops, pin); } - ice_dpll_release_pins(pin, ICE_DPLL_RCLK_NUM_PER_PF); + ice_dpll_release_pins(pin, 1); + return ret; } +/** + * ice_dpll_init_rclk_pin - initialize recovered clock pin + * @pf: board private structure + * @start_idx: on which index shall allocation start in dpll subsystem + * @ops: callback ops registered with the pins + * + * Allocate resource for recovered clock pin in dpll subsystem. Register the + * pin with the parents it has in the info. + * + * Return: + * * 0 - success + * * negative - registration failure reason + */ +static int +ice_dpll_init_rclk_pin(struct ice_pf *pf, int start_idx, + const struct dpll_pin_ops *ops) +{ + struct ice_vsi *vsi = ice_get_main_vsi(pf); + int ret; + + ret = ice_dpll_init_pin_common(pf, &pf->dplls.rclk, start_idx, ops); + if (ret) + return ret; + + dpll_netdev_pin_set(vsi->netdev, pf->dplls.rclk.pin); + + return 0; +} + +static void +ice_dpll_deinit_fwnode_pin(struct ice_dpll_pin *pin) +{ + unregister_dpll_notifier(&pin->nb); + flush_workqueue(pin->pf->dplls.wq); + if (!IS_ERR_OR_NULL(pin->pin)) { + dpll_pin_put(pin->pin, &pin->tracker); + pin->pin = NULL; + } + fwnode_handle_put(pin->fwnode); + pin->fwnode = NULL; +} + +static void +ice_dpll_deinit_fwnode_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, + int start_idx) +{ + int i; + + for (i = 0; i < pf->dplls.rclk.num_parents; i++) + ice_dpll_deinit_fwnode_pin(&pins[start_idx + i]); + destroy_workqueue(pf->dplls.wq); +} + /** * ice_dpll_deinit_pins - deinitialize direct pins * @pf: board private structure @@ -3113,6 +3417,8 @@ static void ice_dpll_deinit_pins(struct ice_pf *pf, bool cgu) struct ice_dpll *dp = &d->pps; ice_dpll_deinit_rclk_pin(pf); + if (pf->hw.mac_type == ICE_MAC_GENERIC_3K_E825) + ice_dpll_deinit_fwnode_pins(pf, pf->dplls.inputs, 0); if (cgu) { ice_dpll_unregister_pins(dp->dpll, inputs, &ice_dpll_input_ops, num_inputs); @@ -3127,12 +3433,12 @@ static void ice_dpll_deinit_pins(struct ice_pf *pf, bool cgu) &ice_dpll_output_ops, num_outputs); ice_dpll_release_pins(outputs, num_outputs); if (!pf->dplls.generic) { - ice_dpll_deinit_direct_pins(cgu, pf->dplls.ufl, + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.ufl, ICE_DPLL_PIN_SW_NUM, &ice_dpll_pin_ufl_ops, pf->dplls.pps.dpll, pf->dplls.eec.dpll); - ice_dpll_deinit_direct_pins(cgu, pf->dplls.sma, + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.sma, ICE_DPLL_PIN_SW_NUM, &ice_dpll_pin_sma_ops, pf->dplls.pps.dpll, @@ -3141,6 +3447,141 @@ static void ice_dpll_deinit_pins(struct ice_pf *pf, bool cgu) } } +static struct fwnode_handle * +ice_dpll_pin_node_get(struct ice_pf *pf, const char *name) +{ + struct fwnode_handle *fwnode = dev_fwnode(ice_pf_to_dev(pf)); + int index; + + index = fwnode_property_match_string(fwnode, "dpll-pin-names", name); + if (index < 0) + return ERR_PTR(-ENOENT); + + return fwnode_find_reference(fwnode, "dpll-pins", index); +} + +static int +ice_dpll_init_fwnode_pin(struct ice_dpll_pin *pin, const char *name) +{ + struct ice_pf *pf = pin->pf; + int ret; + + pin->fwnode = ice_dpll_pin_node_get(pf, name); + if (IS_ERR(pin->fwnode)) { + dev_err(ice_pf_to_dev(pf), + "Failed to find %s firmware node: %pe\n", name, + pin->fwnode); + pin->fwnode = NULL; + return -ENODEV; + } + + dev_dbg(ice_pf_to_dev(pf), "Found fwnode node for %s\n", name); + + pin->pin = fwnode_dpll_pin_find(pin->fwnode, &pin->tracker); + if (IS_ERR_OR_NULL(pin->pin)) { + dev_info(ice_pf_to_dev(pf), + "DPLL pin for %pfwp not registered yet\n", + pin->fwnode); + pin->pin = NULL; + } + + pin->nb.notifier_call = ice_dpll_pin_notify; + ret = register_dpll_notifier(&pin->nb); + if (ret) { + dev_err(ice_pf_to_dev(pf), + "Failed to subscribe for DPLL notifications\n"); + + if (!IS_ERR_OR_NULL(pin->pin)) { + dpll_pin_put(pin->pin, &pin->tracker); + pin->pin = NULL; + } + fwnode_handle_put(pin->fwnode); + pin->fwnode = NULL; + + return ret; + } + + return ret; +} + +/** + * ice_dpll_init_fwnode_pins - initialize pins from device tree + * @pf: board private structure + * @pins: pointer to pins array + * @start_idx: starting index for pins + * @count: number of pins to initialize + * + * Initialize input pins for E825 RCLK support. The parent pins (rclk0, rclk1) + * are expected to be defined by the system firmware (ACPI). This function + * allocates them in the dpll subsystem and stores their indices for later + * registration with the rclk pin. + * + * Return: + * * 0 - success + * * negative - initialization failure reason + */ +static int +ice_dpll_init_fwnode_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, + int start_idx) +{ + char pin_name[8]; + int i, ret; + + pf->dplls.wq = create_singlethread_workqueue("ice_dpll_wq"); + if (!pf->dplls.wq) + return -ENOMEM; + + for (i = 0; i < pf->dplls.rclk.num_parents; i++) { + pins[start_idx + i].pf = pf; + snprintf(pin_name, sizeof(pin_name), "rclk%u", i); + ret = ice_dpll_init_fwnode_pin(&pins[start_idx + i], pin_name); + if (ret) + goto error; + } + + return 0; +error: + while (i--) + ice_dpll_deinit_fwnode_pin(&pins[start_idx + i]); + + destroy_workqueue(pf->dplls.wq); + + return ret; +} + +/** + * ice_dpll_init_pins_e825 - init pins and register pins with a dplls + * @pf: board private structure + * @cgu: if cgu is present and controlled by this NIC + * + * Initialize directly connected pf's pins within pf's dplls in a Linux dpll + * subsystem. + * + * Return: + * * 0 - success + * * negative - initialization failure reason + */ +static int ice_dpll_init_pins_e825(struct ice_pf *pf) +{ + int ret; + + ret = ice_dpll_init_fwnode_pins(pf, pf->dplls.inputs, 0); + if (ret) + return ret; + + ret = ice_dpll_init_rclk_pin(pf, DPLL_PIN_IDX_UNSPEC, + &ice_dpll_rclk_ops); + if (ret) { + /* Inform DPLL notifier works that DPLL init was finished + * unsuccessfully (ICE_DPLL_FLAG not set). + */ + complete_all(&pf->dplls.dpll_init); + ice_dpll_deinit_fwnode_pins(pf, pf->dplls.inputs, 0); + } + + return ret; +} + /** * ice_dpll_init_pins - init pins and register pins with a dplls * @pf: board private structure @@ -3155,21 +3596,24 @@ static void ice_dpll_deinit_pins(struct ice_pf *pf, bool cgu) */ static int ice_dpll_init_pins(struct ice_pf *pf, bool cgu) { + const struct dpll_pin_ops *output_ops; + const struct dpll_pin_ops *input_ops; int ret, count; + input_ops = &ice_dpll_input_ops; + output_ops = &ice_dpll_output_ops; + ret = ice_dpll_init_direct_pins(pf, cgu, pf->dplls.inputs, 0, - pf->dplls.num_inputs, - &ice_dpll_input_ops, - pf->dplls.eec.dpll, pf->dplls.pps.dpll); + pf->dplls.num_inputs, input_ops, + pf->dplls.eec.dpll, + pf->dplls.pps.dpll); if (ret) return ret; count = pf->dplls.num_inputs; if (cgu) { ret = ice_dpll_init_direct_pins(pf, cgu, pf->dplls.outputs, - count, - pf->dplls.num_outputs, - &ice_dpll_output_ops, - pf->dplls.eec.dpll, + count, pf->dplls.num_outputs, + output_ops, pf->dplls.eec.dpll, pf->dplls.pps.dpll); if (ret) goto deinit_inputs; @@ -3205,30 +3649,30 @@ static int ice_dpll_init_pins(struct ice_pf *pf, bool cgu) } else { count += pf->dplls.num_outputs + 2 * ICE_DPLL_PIN_SW_NUM; } - ret = ice_dpll_init_rclk_pins(pf, &pf->dplls.rclk, count + pf->hw.pf_id, - &ice_dpll_rclk_ops); + + ret = ice_dpll_init_rclk_pin(pf, count + pf->ptp.port.port_num, + &ice_dpll_rclk_ops); if (ret) goto deinit_ufl; return 0; deinit_ufl: - ice_dpll_deinit_direct_pins(cgu, pf->dplls.ufl, - ICE_DPLL_PIN_SW_NUM, - &ice_dpll_pin_ufl_ops, - pf->dplls.pps.dpll, pf->dplls.eec.dpll); + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.ufl, ICE_DPLL_PIN_SW_NUM, + &ice_dpll_pin_ufl_ops, pf->dplls.pps.dpll, + pf->dplls.eec.dpll); deinit_sma: - ice_dpll_deinit_direct_pins(cgu, pf->dplls.sma, - ICE_DPLL_PIN_SW_NUM, - &ice_dpll_pin_sma_ops, - pf->dplls.pps.dpll, pf->dplls.eec.dpll); + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.sma, ICE_DPLL_PIN_SW_NUM, + &ice_dpll_pin_sma_ops, pf->dplls.pps.dpll, + pf->dplls.eec.dpll); deinit_outputs: - ice_dpll_deinit_direct_pins(cgu, pf->dplls.outputs, + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.outputs, pf->dplls.num_outputs, - &ice_dpll_output_ops, pf->dplls.pps.dpll, + output_ops, pf->dplls.pps.dpll, pf->dplls.eec.dpll); deinit_inputs: - ice_dpll_deinit_direct_pins(cgu, pf->dplls.inputs, pf->dplls.num_inputs, - &ice_dpll_input_ops, pf->dplls.pps.dpll, + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.inputs, + pf->dplls.num_inputs, + input_ops, pf->dplls.pps.dpll, pf->dplls.eec.dpll); return ret; } @@ -3239,8 +3683,8 @@ static int ice_dpll_init_pins(struct ice_pf *pf, bool cgu) * @d: pointer to ice_dpll * @cgu: if cgu is present and controlled by this NIC * - * If cgu is owned unregister the dpll from dpll subsystem. - * Release resources of dpll device from dpll subsystem. + * If cgu is owned, unregister the DPL from DPLL subsystem. + * Release resources of DPLL device from DPLL subsystem. */ static void ice_dpll_deinit_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu) @@ -3257,8 +3701,8 @@ ice_dpll_deinit_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu) * @cgu: if cgu is present and controlled by this NIC * @type: type of dpll being initialized * - * Allocate dpll instance for this board in dpll subsystem, if cgu is controlled - * by this NIC, register dpll with the callback ops. + * Allocate DPLL instance for this board in dpll subsystem, if cgu is controlled + * by this NIC, register DPLL with the callback ops. * * Return: * * 0 - success @@ -3289,6 +3733,7 @@ ice_dpll_init_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu, ret = dpll_device_register(d->dpll, type, ops, d); if (ret) { dpll_device_put(d->dpll, &d->tracker); + d->dpll = NULL; return ret; } d->ops = ops; @@ -3506,6 +3951,26 @@ ice_dpll_init_info_direct_pins(struct ice_pf *pf, return ret; } +/** + * ice_dpll_init_info_pin_on_pin_e825c - initializes rclk pin information + * @pf: board private structure + * + * Init information for rclk pin, cache them in pf->dplls.rclk. + * + * Return: + * * 0 - success + */ +static int ice_dpll_init_info_pin_on_pin_e825c(struct ice_pf *pf) +{ + struct ice_dpll_pin *rclk_pin = &pf->dplls.rclk; + + rclk_pin->prop.type = DPLL_PIN_TYPE_SYNCE_ETH_PORT; + rclk_pin->prop.capabilities |= DPLL_PIN_CAPABILITIES_STATE_CAN_CHANGE; + rclk_pin->pf = pf; + + return 0; +} + /** * ice_dpll_init_info_rclk_pin - initializes rclk pin information * @pf: board private structure @@ -3632,7 +4097,10 @@ ice_dpll_init_pins_info(struct ice_pf *pf, enum ice_dpll_pin_type pin_type) case ICE_DPLL_PIN_TYPE_OUTPUT: return ice_dpll_init_info_direct_pins(pf, pin_type); case ICE_DPLL_PIN_TYPE_RCLK_INPUT: - return ice_dpll_init_info_rclk_pin(pf); + if (pf->hw.mac_type == ICE_MAC_GENERIC_3K_E825) + return ice_dpll_init_info_pin_on_pin_e825c(pf); + else + return ice_dpll_init_info_rclk_pin(pf); case ICE_DPLL_PIN_TYPE_SOFTWARE: return ice_dpll_init_info_sw_pins(pf); default: @@ -3654,6 +4122,50 @@ static void ice_dpll_deinit_info(struct ice_pf *pf) kfree(pf->dplls.pps.input_prio); } +/** + * ice_dpll_init_info_e825c - prepare pf's dpll information structure for e825c + * device + * @pf: board private structure + * + * Acquire (from HW) and set basic DPLL information (on pf->dplls struct). + * + * Return: + * * 0 - success + * * negative - init failure reason + */ +static int ice_dpll_init_info_e825c(struct ice_pf *pf) +{ + struct ice_dplls *d = &pf->dplls; + int ret = 0; + int i; + + d->clock_id = ice_generate_clock_id(pf); + d->num_inputs = ICE_SYNCE_CLK_NUM; + + d->inputs = kcalloc(d->num_inputs, sizeof(*d->inputs), GFP_KERNEL); + if (!d->inputs) + return -ENOMEM; + + ret = ice_get_cgu_rclk_pin_info(&pf->hw, &d->base_rclk_idx, + &pf->dplls.rclk.num_parents); + if (ret) + goto deinit_info; + + for (i = 0; i < pf->dplls.rclk.num_parents; i++) + pf->dplls.rclk.parent_idx[i] = d->base_rclk_idx + i; + + ret = ice_dpll_init_pins_info(pf, ICE_DPLL_PIN_TYPE_RCLK_INPUT); + if (ret) + goto deinit_info; + dev_dbg(ice_pf_to_dev(pf), + "%s - success, inputs: %u, outputs: %u, rclk-parents: %u\n", + __func__, d->num_inputs, d->num_outputs, d->rclk.num_parents); + return 0; +deinit_info: + ice_dpll_deinit_info(pf); + return ret; +} + /** * ice_dpll_init_info - prepare pf's dpll information structure * @pf: board private structure @@ -3773,14 +4285,16 @@ void ice_dpll_deinit(struct ice_pf *pf) ice_dpll_deinit_worker(pf); ice_dpll_deinit_pins(pf, cgu); - ice_dpll_deinit_dpll(pf, &pf->dplls.pps, cgu); - ice_dpll_deinit_dpll(pf, &pf->dplls.eec, cgu); + if (!IS_ERR_OR_NULL(pf->dplls.pps.dpll)) + ice_dpll_deinit_dpll(pf, &pf->dplls.pps, cgu); + if (!IS_ERR_OR_NULL(pf->dplls.eec.dpll)) + ice_dpll_deinit_dpll(pf, &pf->dplls.eec, cgu); ice_dpll_deinit_info(pf); mutex_destroy(&pf->dplls.lock); } /** - * ice_dpll_init - initialize support for dpll subsystem + * ice_dpll_init_e825 - initialize support for dpll subsystem * @pf: board private structure * * Set up the device dplls, register them and pins connected within Linux dpll @@ -3789,7 +4303,43 @@ void ice_dpll_deinit(struct ice_pf *pf) * * Context: Initializes pf->dplls.lock mutex. */ -void ice_dpll_init(struct ice_pf *pf) +static void ice_dpll_init_e825(struct ice_pf *pf) +{ + struct ice_dplls *d = &pf->dplls; + int err; + + mutex_init(&d->lock); + init_completion(&d->dpll_init); + + err = ice_dpll_init_info_e825c(pf); + if (err) + goto err_exit; + err = ice_dpll_init_pins_e825(pf); + if (err) + goto deinit_info; + set_bit(ICE_FLAG_DPLL, pf->flags); + complete_all(&d->dpll_init); + + return; + +deinit_info: + ice_dpll_deinit_info(pf); +err_exit: + mutex_destroy(&d->lock); + dev_warn(ice_pf_to_dev(pf), "DPLLs init failure err:%d\n", err); +} + +/** + * ice_dpll_init_e810 - initialize support for dpll subsystem + * @pf: board private structure + * + * Set up the device dplls, register them and pins connected within Linux dpll + * subsystem. Allow userspace to obtain state of DPLL and handling of DPLL + * configuration requests. + * + * Context: Initializes pf->dplls.lock mutex. + */ +static void ice_dpll_init_e810(struct ice_pf *pf) { bool cgu = ice_is_feature_supported(pf, ICE_F_CGU); struct ice_dplls *d = &pf->dplls; @@ -3829,3 +4379,15 @@ void ice_dpll_init(struct ice_pf *pf) mutex_destroy(&d->lock); dev_warn(ice_pf_to_dev(pf), "DPLLs init failure err:%d\n", err); } + +void ice_dpll_init(struct ice_pf *pf) +{ + switch (pf->hw.mac_type) { + case ICE_MAC_GENERIC_3K_E825: + ice_dpll_init_e825(pf); + break; + default: + ice_dpll_init_e810(pf); + break; + } +} diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.h b/drivers/net/ethernet/intel/ice/ice_dpll.h index 63fac6510df6e..ae42cdea0ee14 100644 --- a/drivers/net/ethernet/intel/ice/ice_dpll.h +++ b/drivers/net/ethernet/intel/ice/ice_dpll.h @@ -20,6 +20,12 @@ enum ice_dpll_pin_sw { ICE_DPLL_PIN_SW_NUM }; +struct ice_dpll_pin_work { + struct work_struct work; + unsigned long action; + struct ice_dpll_pin *pin; +}; + /** ice_dpll_pin - store info about pins * @pin: dpll pin structure * @pf: pointer to pf, which has registered the dpll_pin @@ -39,6 +45,8 @@ struct ice_dpll_pin { struct dpll_pin *pin; struct ice_pf *pf; dpll_tracker tracker; + struct fwnode_handle *fwnode; + struct notifier_block nb; u8 idx; u8 num_parents; u8 parent_idx[ICE_DPLL_RCLK_NUM_MAX]; @@ -118,7 +126,9 @@ struct ice_dpll { struct ice_dplls { struct kthread_worker *kworker; struct kthread_delayed_work work; + struct workqueue_struct *wq; struct mutex lock; + struct completion dpll_init; struct ice_dpll eec; struct ice_dpll pps; struct ice_dpll_pin *inputs; @@ -147,3 +157,19 @@ static inline void ice_dpll_deinit(struct ice_pf *pf) { } #endif #endif + +#define ICE_CGU_R10 0x28 +#define ICE_CGU_R10_SYNCE_CLKO_SEL GENMASK(8, 5) +#define ICE_CGU_R10_SYNCE_CLKODIV_M1 GENMASK(13, 9) +#define ICE_CGU_R10_SYNCE_CLKODIV_LOAD BIT(14) +#define ICE_CGU_R10_SYNCE_DCK_RST BIT(15) +#define ICE_CGU_R10_SYNCE_ETHCLKO_SEL GENMASK(18, 16) +#define ICE_CGU_R10_SYNCE_ETHDIV_M1 GENMASK(23, 19) +#define ICE_CGU_R10_SYNCE_ETHDIV_LOAD BIT(24) +#define ICE_CGU_R10_SYNCE_DCK2_RST BIT(25) +#define ICE_CGU_R10_SYNCE_S_REF_CLK GENMASK(31, 27) + +#define ICE_CGU_R11 0x2C +#define ICE_CGU_R11_SYNCE_S_BYP_CLK GENMASK(6, 1) + +#define ICE_CGU_BYPASS_MUX_OFFSET_E825C 3 diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 2522ebdea9139..d921269e1fe71 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -3989,6 +3989,9 @@ void ice_init_feature_support(struct ice_pf *pf) break; } + if (pf->hw.mac_type == ICE_MAC_GENERIC_3K_E825) + ice_set_feature_support(pf, ICE_F_PHY_RCLK); + if (pf->hw.mac_type == ICE_MAC_E830) { ice_set_feature_support(pf, ICE_F_MBX_LIMIT); ice_set_feature_support(pf, ICE_F_GCS); diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c index 4c8d20f2d2c0a..1d26be58e29a0 100644 --- a/drivers/net/ethernet/intel/ice/ice_ptp.c +++ b/drivers/net/ethernet/intel/ice/ice_ptp.c @@ -1341,6 +1341,38 @@ void ice_ptp_link_change(struct ice_pf *pf, bool linkup) if (pf->hw.reset_ongoing) return; + if (hw->mac_type == ICE_MAC_GENERIC_3K_E825) { + int pin, err; + + if (!test_bit(ICE_FLAG_DPLL, pf->flags)) + return; + + mutex_lock(&pf->dplls.lock); + for (pin = 0; pin < ICE_SYNCE_CLK_NUM; pin++) { + enum ice_synce_clk clk_pin; + bool active; + u8 port_num; + + port_num = ptp_port->port_num; + clk_pin = (enum ice_synce_clk)pin; + err = ice_tspll_bypass_mux_active_e825c(hw, + port_num, + &active, + clk_pin); + if (WARN_ON_ONCE(err)) { + mutex_unlock(&pf->dplls.lock); + return; + } + + err = ice_tspll_cfg_synce_ethdiv_e825c(hw, clk_pin); + if (active && WARN_ON_ONCE(err)) { + mutex_unlock(&pf->dplls.lock); + return; + } + } + mutex_unlock(&pf->dplls.lock); + } + switch (hw->mac_type) { case ICE_MAC_E810: case ICE_MAC_E830: diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c index 35680dbe4a7f7..61c0a0d93ea89 100644 --- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c +++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c @@ -5903,7 +5903,14 @@ int ice_get_cgu_rclk_pin_info(struct ice_hw *hw, u8 *base_idx, u8 *pin_num) *base_idx = SI_REF1P; else ret = -ENODEV; - + break; + case ICE_DEV_ID_E825C_BACKPLANE: + case ICE_DEV_ID_E825C_QSFP: + case ICE_DEV_ID_E825C_SFP: + case ICE_DEV_ID_E825C_SGMII: + *pin_num = ICE_SYNCE_CLK_NUM; + *base_idx = 0; + ret = 0; break; default: ret = -ENODEV; diff --git a/drivers/net/ethernet/intel/ice/ice_tspll.c b/drivers/net/ethernet/intel/ice/ice_tspll.c index 66320a4ab86fd..fd4b58eb9bc00 100644 --- a/drivers/net/ethernet/intel/ice/ice_tspll.c +++ b/drivers/net/ethernet/intel/ice/ice_tspll.c @@ -624,3 +624,220 @@ int ice_tspll_init(struct ice_hw *hw) return err; } + +/** + * ice_tspll_bypass_mux_active_e825c - check if the given port is set active + * @hw: Pointer to the HW struct + * @port: Number of the port + * @active: Output flag showing if port is active + * @output: Output pin, we have two in E825C + * + * Check if given port is selected as recovered clock source for given output. + * + * Return: + * * 0 - success + * * negative - error + */ +int ice_tspll_bypass_mux_active_e825c(struct ice_hw *hw, u8 port, bool *active, + enum ice_synce_clk output) +{ + u8 active_clk; + u32 val; + int err; + + switch (output) { + case ICE_SYNCE_CLK0: + err = ice_read_cgu_reg(hw, ICE_CGU_R10, &val); + if (err) + return err; + active_clk = FIELD_GET(ICE_CGU_R10_SYNCE_S_REF_CLK, val); + break; + case ICE_SYNCE_CLK1: + err = ice_read_cgu_reg(hw, ICE_CGU_R11, &val); + if (err) + return err; + active_clk = FIELD_GET(ICE_CGU_R11_SYNCE_S_BYP_CLK, val); + break; + default: + return -EINVAL; + } + + if (active_clk == port % hw->ptp.ports_per_phy + + ICE_CGU_BYPASS_MUX_OFFSET_E825C) + *active = true; + else + *active = false; + + return 0; +} + +/** + * ice_tspll_cfg_bypass_mux_e825c - configure reference clock mux + * @hw: Pointer to the HW struct + * @ena: true to enable the reference, false if disable + * @port_num: Number of the port + * @output: Output pin, we have two in E825C + * + * Set reference clock source and output clock selection. + * + * Context: Called under pf->dplls.lock + * Return: + * * 0 - success + * * negative - error + */ +int ice_tspll_cfg_bypass_mux_e825c(struct ice_hw *hw, bool ena, u32 port_num, + enum ice_synce_clk output) +{ + u8 first_mux; + int err; + u32 r10; + + err = ice_read_cgu_reg(hw, ICE_CGU_R10, &r10); + if (err) + return err; + + if (!ena) + first_mux = ICE_CGU_NET_REF_CLK0; + else + first_mux = port_num + ICE_CGU_BYPASS_MUX_OFFSET_E825C; + + r10 &= ~(ICE_CGU_R10_SYNCE_DCK_RST | ICE_CGU_R10_SYNCE_DCK2_RST); + + switch (output) { + case ICE_SYNCE_CLK0: + r10 &= ~(ICE_CGU_R10_SYNCE_ETHCLKO_SEL | + ICE_CGU_R10_SYNCE_ETHDIV_LOAD | + ICE_CGU_R10_SYNCE_S_REF_CLK); + r10 |= FIELD_PREP(ICE_CGU_R10_SYNCE_S_REF_CLK, first_mux); + r10 |= FIELD_PREP(ICE_CGU_R10_SYNCE_ETHCLKO_SEL, + ICE_CGU_REF_CLK_BYP0_DIV); + break; + case ICE_SYNCE_CLK1: + { + u32 val; + + err = ice_read_cgu_reg(hw, ICE_CGU_R11, &val); + if (err) + return err; + val &= ~ICE_CGU_R11_SYNCE_S_BYP_CLK; + val |= FIELD_PREP(ICE_CGU_R11_SYNCE_S_BYP_CLK, first_mux); + err = ice_write_cgu_reg(hw, ICE_CGU_R11, val); + if (err) + return err; + r10 &= ~(ICE_CGU_R10_SYNCE_CLKODIV_LOAD | + ICE_CGU_R10_SYNCE_CLKO_SEL); + r10 |= FIELD_PREP(ICE_CGU_R10_SYNCE_CLKO_SEL, + ICE_CGU_REF_CLK_BYP1_DIV); + break; + } + default: + return -EINVAL; + } + + err = ice_write_cgu_reg(hw, ICE_CGU_R10, r10); + if (err) + return err; + + return 0; +} + +/** + * ice_tspll_get_div_e825c - get the divider for the given speed + * @link_speed: link speed of the port + * @divider: output value, calculated divider + * + * Get CGU divider value based on the link speed. + * + * Return: + * * 0 - success + * * negative - error + */ +static int ice_tspll_get_div_e825c(u16 link_speed, unsigned int *divider) +{ + switch (link_speed) { + case ICE_AQ_LINK_SPEED_100GB: + case ICE_AQ_LINK_SPEED_50GB: + case ICE_AQ_LINK_SPEED_25GB: + *divider = 10; + break; + case ICE_AQ_LINK_SPEED_40GB: + case ICE_AQ_LINK_SPEED_10GB: + *divider = 4; + break; + case ICE_AQ_LINK_SPEED_5GB: + case ICE_AQ_LINK_SPEED_2500MB: + case ICE_AQ_LINK_SPEED_1000MB: + *divider = 2; + break; + case ICE_AQ_LINK_SPEED_100MB: + *divider = 1; + break; + default: + return -EOPNOTSUPP; + } + + return 0; +} + +/** + * ice_tspll_cfg_synce_ethdiv_e825c - set the divider on the mux + * @hw: Pointer to the HW struct + * @output: Output pin, we have two in E825C + * + * Set the correct CGU divider for RCLKA or RCLKB. + * + * Context: Called under pf->dplls.lock + * Return: + * * 0 - success + * * negative - error + */ +int ice_tspll_cfg_synce_ethdiv_e825c(struct ice_hw *hw, + enum ice_synce_clk output) +{ + unsigned int divider; + u16 link_speed; + u32 val; + int err; + + link_speed = hw->port_info->phy.link_info.link_speed; + if (!link_speed) + return 0; + + err = ice_tspll_get_div_e825c(link_speed, &divider); + if (err) + return err; + + err = ice_read_cgu_reg(hw, ICE_CGU_R10, &val); + if (err) + return err; + + /* programmable divider value (from 2 to 16) minus 1 for ETHCLKOUT */ + switch (output) { + case ICE_SYNCE_CLK0: + val &= ~(ICE_CGU_R10_SYNCE_ETHDIV_M1 | + ICE_CGU_R10_SYNCE_ETHDIV_LOAD); + val |= FIELD_PREP(ICE_CGU_R10_SYNCE_ETHDIV_M1, divider - 1); + err = ice_write_cgu_reg(hw, ICE_CGU_R10, val); + if (err) + return err; + val |= ICE_CGU_R10_SYNCE_ETHDIV_LOAD; + break; + case ICE_SYNCE_CLK1: + val &= ~(ICE_CGU_R10_SYNCE_CLKODIV_M1 | + ICE_CGU_R10_SYNCE_CLKODIV_LOAD); + val |= FIELD_PREP(ICE_CGU_R10_SYNCE_CLKODIV_M1, divider - 1); + err = ice_write_cgu_reg(hw, ICE_CGU_R10, val); + if (err) + return err; + val |= ICE_CGU_R10_SYNCE_CLKODIV_LOAD; + break; + default: + return -EINVAL; + } + + err = ice_write_cgu_reg(hw, ICE_CGU_R10, val); + if (err) + return err; + + return 0; +} diff --git a/drivers/net/ethernet/intel/ice/ice_tspll.h b/drivers/net/ethernet/intel/ice/ice_tspll.h index c0b1232cc07c3..d650867004d1f 100644 --- a/drivers/net/ethernet/intel/ice/ice_tspll.h +++ b/drivers/net/ethernet/intel/ice/ice_tspll.h @@ -21,11 +21,22 @@ struct ice_tspll_params_e82x { u32 frac_n_div; }; +#define ICE_CGU_NET_REF_CLK0 0x0 +#define ICE_CGU_REF_CLK_BYP0 0x5 +#define ICE_CGU_REF_CLK_BYP0_DIV 0x0 +#define ICE_CGU_REF_CLK_BYP1 0x4 +#define ICE_CGU_REF_CLK_BYP1_DIV 0x1 + #define ICE_TSPLL_CK_REFCLKFREQ_E825 0x1F #define ICE_TSPLL_NDIVRATIO_E825 5 #define ICE_TSPLL_FBDIV_INTGR_E825 256 int ice_tspll_cfg_pps_out_e825c(struct ice_hw *hw, bool enable); int ice_tspll_init(struct ice_hw *hw); - +int ice_tspll_bypass_mux_active_e825c(struct ice_hw *hw, u8 port, bool *active, + enum ice_synce_clk output); +int ice_tspll_cfg_bypass_mux_e825c(struct ice_hw *hw, bool ena, u32 port_num, + enum ice_synce_clk output); +int ice_tspll_cfg_synce_ethdiv_e825c(struct ice_hw *hw, + enum ice_synce_clk output); #endif /* _ICE_TSPLL_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h index 6a2ec8389a8f3..1e82f4c40b326 100644 --- a/drivers/net/ethernet/intel/ice/ice_type.h +++ b/drivers/net/ethernet/intel/ice/ice_type.h @@ -349,6 +349,12 @@ enum ice_clk_src { NUM_ICE_CLK_SRC }; +enum ice_synce_clk { + ICE_SYNCE_CLK0, + ICE_SYNCE_CLK1, + ICE_SYNCE_CLK_NUM +}; + struct ice_ts_func_info { /* Function specific info */ enum ice_tspll_freq time_ref; -- 2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:38 +0100", "thread_id": "20260202171638.17427-5-ivecera@redhat.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
Move struct bpf_struct_ops_link's definition into bpf.h, where other custom bpf links definitions are. It's necessary to access its members from outside of generic bpf_struct_ops implementation, which will be done by following patches in the series. Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev> --- include/linux/bpf.h | 6 ++++++ kernel/bpf/bpf_struct_ops.c | 6 ------ 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 4427c6e98331..899dd911dc82 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -1891,6 +1891,12 @@ struct bpf_raw_tp_link { u64 cookie; }; +struct bpf_struct_ops_link { + struct bpf_link link; + struct bpf_map __rcu *map; + wait_queue_head_t wait_hup; +}; + struct bpf_link_primer { struct bpf_link *link; struct file *file; diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c index c43346cb3d76..de01cf3025b3 100644 --- a/kernel/bpf/bpf_struct_ops.c +++ b/kernel/bpf/bpf_struct_ops.c @@ -55,12 +55,6 @@ struct bpf_struct_ops_map { struct bpf_struct_ops_value kvalue; }; -struct bpf_struct_ops_link { - struct bpf_link link; - struct bpf_map __rcu *map; - wait_queue_head_t wait_hup; -}; - static DEFINE_MUTEX(update_mutex); #define VALUE_PREFIX "bpf_struct_ops_" -- 2.52.0
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:04 -0800", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
Introduce an ability to attach bpf struct_ops'es to cgroups. a user passes a BPF_F_CGROUP_FD flag and specifies the target cgroup fd while creating a struct_ops link. As the result, the bpf struct_ops link will be created and attached to a cgroup. The cgroup.bpf structure maintains a list of attached struct ops links. If the cgroup is getting deleted, attached struct ops'es are getting auto-detached and the userspace program gets a notification. This change doesn't answer the question how bpf programs belonging to these struct ops'es will be executed. It will be done individually for every bpf struct ops which supports this. Please, note that unlike "normal" bpf programs, struct ops'es are not propagated to cgroup sub-trees. Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev> --- include/linux/bpf-cgroup-defs.h | 3 ++ include/linux/bpf-cgroup.h | 16 +++++++++ include/linux/bpf.h | 3 ++ include/uapi/linux/bpf.h | 3 ++ kernel/bpf/bpf_struct_ops.c | 59 ++++++++++++++++++++++++++++++--- kernel/bpf/cgroup.c | 46 +++++++++++++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + 7 files changed, 127 insertions(+), 4 deletions(-) diff --git a/include/linux/bpf-cgroup-defs.h b/include/linux/bpf-cgroup-defs.h index c9e6b26abab6..6c5e37190dad 100644 --- a/include/linux/bpf-cgroup-defs.h +++ b/include/linux/bpf-cgroup-defs.h @@ -71,6 +71,9 @@ struct cgroup_bpf { /* temp storage for effective prog array used by prog_attach/detach */ struct bpf_prog_array *inactive; + /* list of bpf struct ops links */ + struct list_head struct_ops_links; + /* reference counter used to detach bpf programs after cgroup removal */ struct percpu_ref refcnt; diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h index 2f535331f926..a6c327257006 100644 --- a/include/linux/bpf-cgroup.h +++ b/include/linux/bpf-cgroup.h @@ -423,6 +423,11 @@ int cgroup_bpf_link_attach(const union bpf_attr *attr, struct bpf_prog *prog); int cgroup_bpf_prog_query(const union bpf_attr *attr, union bpf_attr __user *uattr); +int cgroup_bpf_attach_struct_ops(struct cgroup *cgrp, + struct bpf_struct_ops_link *link); +void cgroup_bpf_detach_struct_ops(struct cgroup *cgrp, + struct bpf_struct_ops_link *link); + const struct bpf_func_proto * cgroup_common_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog); #else @@ -451,6 +456,17 @@ static inline int cgroup_bpf_link_attach(const union bpf_attr *attr, return -EINVAL; } +static inline int cgroup_bpf_attach_struct_ops(struct cgroup *cgrp, + struct bpf_struct_ops_link *link) +{ + return -EINVAL; +} + +static inline void cgroup_bpf_detach_struct_ops(struct cgroup *cgrp, + struct bpf_struct_ops_link *link) +{ +} + static inline int cgroup_bpf_prog_query(const union bpf_attr *attr, union bpf_attr __user *uattr) { diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 899dd911dc82..391888eb257c 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -1894,6 +1894,9 @@ struct bpf_raw_tp_link { struct bpf_struct_ops_link { struct bpf_link link; struct bpf_map __rcu *map; + struct cgroup *cgroup; + bool cgroup_removed; + struct list_head list; wait_queue_head_t wait_hup; }; diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 44e7dbc278e3..28544e8af1cd 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -1237,6 +1237,7 @@ enum bpf_perf_event_type { #define BPF_F_AFTER (1U << 4) #define BPF_F_ID (1U << 5) #define BPF_F_PREORDER (1U << 6) +#define BPF_F_CGROUP_FD (1U << 7) #define BPF_F_LINK BPF_F_LINK /* 1 << 13 */ /* If BPF_F_STRICT_ALIGNMENT is used in BPF_PROG_LOAD command, the @@ -6775,6 +6776,8 @@ struct bpf_link_info { } xdp; struct { __u32 map_id; + __u32 :32; + __u64 cgroup_id; } struct_ops; struct { __u32 pf; diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c index de01cf3025b3..2e361e22cfa0 100644 --- a/kernel/bpf/bpf_struct_ops.c +++ b/kernel/bpf/bpf_struct_ops.c @@ -13,6 +13,8 @@ #include <linux/btf_ids.h> #include <linux/rcupdate_wait.h> #include <linux/poll.h> +#include <linux/bpf-cgroup.h> +#include <linux/cgroup.h> struct bpf_struct_ops_value { struct bpf_struct_ops_common_value common; @@ -1220,6 +1222,10 @@ static void bpf_struct_ops_map_link_dealloc(struct bpf_link *link) st_map->st_ops_desc->st_ops->unreg(&st_map->kvalue.data, link); bpf_map_put(&st_map->map); } + + if (st_link->cgroup) + cgroup_bpf_detach_struct_ops(st_link->cgroup, st_link); + kfree(st_link); } @@ -1228,6 +1234,7 @@ static void bpf_struct_ops_map_link_show_fdinfo(const struct bpf_link *link, { struct bpf_struct_ops_link *st_link; struct bpf_map *map; + u64 cgrp_id = 0; st_link = container_of(link, struct bpf_struct_ops_link, link); rcu_read_lock(); @@ -1235,6 +1242,14 @@ static void bpf_struct_ops_map_link_show_fdinfo(const struct bpf_link *link, if (map) seq_printf(seq, "map_id:\t%d\n", map->id); rcu_read_unlock(); + + cgroup_lock(); + if (st_link->cgroup) + cgrp_id = cgroup_id(st_link->cgroup); + cgroup_unlock(); + + if (cgrp_id) + seq_printf(seq, "cgroup_id:\t%llu\n", cgrp_id); } static int bpf_struct_ops_map_link_fill_link_info(const struct bpf_link *link, @@ -1242,6 +1257,7 @@ static int bpf_struct_ops_map_link_fill_link_info(const struct bpf_link *link, { struct bpf_struct_ops_link *st_link; struct bpf_map *map; + u64 cgrp_id = 0; st_link = container_of(link, struct bpf_struct_ops_link, link); rcu_read_lock(); @@ -1249,6 +1265,13 @@ static int bpf_struct_ops_map_link_fill_link_info(const struct bpf_link *link, if (map) info->struct_ops.map_id = map->id; rcu_read_unlock(); + + cgroup_lock(); + if (st_link->cgroup) + cgrp_id = cgroup_id(st_link->cgroup); + cgroup_unlock(); + + info->struct_ops.cgroup_id = cgrp_id; return 0; } @@ -1327,6 +1350,9 @@ static int bpf_struct_ops_map_link_detach(struct bpf_link *link) mutex_unlock(&update_mutex); + if (st_link->cgroup) + cgroup_bpf_detach_struct_ops(st_link->cgroup, st_link); + wake_up_interruptible_poll(&st_link->wait_hup, EPOLLHUP); return 0; @@ -1339,6 +1365,9 @@ static __poll_t bpf_struct_ops_map_link_poll(struct file *file, poll_wait(file, &st_link->wait_hup, pts); + if (st_link->cgroup_removed) + return EPOLLHUP; + return rcu_access_pointer(st_link->map) ? 0 : EPOLLHUP; } @@ -1357,8 +1386,12 @@ int bpf_struct_ops_link_create(union bpf_attr *attr) struct bpf_link_primer link_primer; struct bpf_struct_ops_map *st_map; struct bpf_map *map; + struct cgroup *cgrp; int err; + if (attr->link_create.flags & ~BPF_F_CGROUP_FD) + return -EINVAL; + map = bpf_map_get(attr->link_create.map_fd); if (IS_ERR(map)) return PTR_ERR(map); @@ -1378,11 +1411,26 @@ int bpf_struct_ops_link_create(union bpf_attr *attr) bpf_link_init(&link->link, BPF_LINK_TYPE_STRUCT_OPS, &bpf_struct_ops_map_lops, NULL, attr->link_create.attach_type); + init_waitqueue_head(&link->wait_hup); + + if (attr->link_create.flags & BPF_F_CGROUP_FD) { + cgrp = cgroup_get_from_fd(attr->link_create.target_fd); + if (IS_ERR(cgrp)) { + err = PTR_ERR(cgrp); + goto err_out; + } + link->cgroup = cgrp; + err = cgroup_bpf_attach_struct_ops(cgrp, link); + if (err) { + cgroup_put(cgrp); + link->cgroup = NULL; + goto err_out; + } + } + err = bpf_link_prime(&link->link, &link_primer); if (err) - goto err_out; - - init_waitqueue_head(&link->wait_hup); + goto err_put_cgroup; /* Hold the update_mutex such that the subsystem cannot * do link->ops->detach() before the link is fully initialized. @@ -1393,13 +1441,16 @@ int bpf_struct_ops_link_create(union bpf_attr *attr) mutex_unlock(&update_mutex); bpf_link_cleanup(&link_primer); link = NULL; - goto err_out; + goto err_put_cgroup; } RCU_INIT_POINTER(link->map, map); mutex_unlock(&update_mutex); return bpf_link_settle(&link_primer); +err_put_cgroup: + if (link && link->cgroup) + cgroup_bpf_detach_struct_ops(link->cgroup, link); err_out: bpf_map_put(map); kfree(link); diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c index 69988af44b37..7b1903be6f69 100644 --- a/kernel/bpf/cgroup.c +++ b/kernel/bpf/cgroup.c @@ -16,6 +16,7 @@ #include <linux/bpf-cgroup.h> #include <linux/bpf_lsm.h> #include <linux/bpf_verifier.h> +#include <linux/poll.h> #include <net/sock.h> #include <net/bpf_sk_storage.h> @@ -307,12 +308,23 @@ static void cgroup_bpf_release(struct work_struct *work) bpf.release_work); struct bpf_prog_array *old_array; struct list_head *storages = &cgrp->bpf.storages; + struct bpf_struct_ops_link *st_link, *st_tmp; struct bpf_cgroup_storage *storage, *stmp; + LIST_HEAD(st_links); unsigned int atype; cgroup_lock(); + list_splice_init(&cgrp->bpf.struct_ops_links, &st_links); + list_for_each_entry_safe(st_link, st_tmp, &st_links, list) { + st_link->cgroup = NULL; + st_link->cgroup_removed = true; + cgroup_put(cgrp); + if (IS_ERR(bpf_link_inc_not_zero(&st_link->link))) + list_del(&st_link->list); + } + for (atype = 0; atype < ARRAY_SIZE(cgrp->bpf.progs); atype++) { struct hlist_head *progs = &cgrp->bpf.progs[atype]; struct bpf_prog_list *pl; @@ -346,6 +358,11 @@ static void cgroup_bpf_release(struct work_struct *work) cgroup_unlock(); + list_for_each_entry_safe(st_link, st_tmp, &st_links, list) { + st_link->link.ops->detach(&st_link->link); + bpf_link_put(&st_link->link); + } + for (p = cgroup_parent(cgrp); p; p = cgroup_parent(p)) cgroup_bpf_put(p); @@ -525,6 +542,7 @@ static int cgroup_bpf_inherit(struct cgroup *cgrp) INIT_HLIST_HEAD(&cgrp->bpf.progs[i]); INIT_LIST_HEAD(&cgrp->bpf.storages); + INIT_LIST_HEAD(&cgrp->bpf.struct_ops_links); for (i = 0; i < NR; i++) if (compute_effective_progs(cgrp, i, &arrays[i])) @@ -2759,3 +2777,31 @@ cgroup_common_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) return NULL; } } + +int cgroup_bpf_attach_struct_ops(struct cgroup *cgrp, + struct bpf_struct_ops_link *link) +{ + int ret = 0; + + cgroup_lock(); + if (percpu_ref_is_zero(&cgrp->bpf.refcnt)) { + ret = -EBUSY; + goto out; + } + list_add_tail(&link->list, &cgrp->bpf.struct_ops_links); +out: + cgroup_unlock(); + return ret; +} + +void cgroup_bpf_detach_struct_ops(struct cgroup *cgrp, + struct bpf_struct_ops_link *link) +{ + cgroup_lock(); + if (link->cgroup == cgrp) { + list_del(&link->list); + link->cgroup = NULL; + cgroup_put(cgrp); + } + cgroup_unlock(); +} diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index 3ca7d76e05f0..d5492e60744a 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -1237,6 +1237,7 @@ enum bpf_perf_event_type { #define BPF_F_AFTER (1U << 4) #define BPF_F_ID (1U << 5) #define BPF_F_PREORDER (1U << 6) +#define BPF_F_CGROUP_FD (1U << 7) #define BPF_F_LINK BPF_F_LINK /* 1 << 13 */ /* If BPF_F_STRICT_ALIGNMENT is used in BPF_PROG_LOAD command, the -- 2.52.0
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:05 -0800", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
bpf_map__attach_struct_ops() returns -EINVAL instead of -ENOMEM on the memory allocation failure. Fix it. Fixes: 590a00888250 ("bpf: libbpf: Add STRUCT_OPS support") Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev> --- tools/lib/bpf/libbpf.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index 0c8bf0b5cce4..46d2762f5993 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -13480,7 +13480,7 @@ struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map) link = calloc(1, sizeof(*link)); if (!link) - return libbpf_err_ptr(-EINVAL); + return libbpf_err_ptr(-ENOMEM); /* kern_vdata should be prepared during the loading phase. */ err = bpf_map_update_elem(map->fd, &zero, map->st_ops->kern_vdata, 0); -- 2.52.0
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:06 -0800", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
Introduce bpf_map__attach_struct_ops_opts(), an extended version of bpf_map__attach_struct_ops(), which takes additional struct bpf_struct_ops_opts argument. This allows to pass a target_fd argument and the BPF_F_CGROUP_FD flag and attach the struct ops to a cgroup as a result. Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev> --- tools/lib/bpf/libbpf.c | 20 +++++++++++++++++--- tools/lib/bpf/libbpf.h | 14 ++++++++++++++ tools/lib/bpf/libbpf.map | 1 + 3 files changed, 32 insertions(+), 3 deletions(-) diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index 46d2762f5993..9ba67089bf9d 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -13462,11 +13462,18 @@ static int bpf_link__detach_struct_ops(struct bpf_link *link) return close(link->fd); } -struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map) +struct bpf_link *bpf_map__attach_struct_ops_opts(const struct bpf_map *map, + const struct bpf_struct_ops_opts *opts) { + DECLARE_LIBBPF_OPTS(bpf_link_create_opts, link_opts); struct bpf_link_struct_ops *link; + int err, fd, target_fd; __u32 zero = 0; - int err, fd; + + if (!OPTS_VALID(opts, bpf_struct_ops_opts)) { + pr_warn("map '%s': invalid opts\n", map->name); + return libbpf_err_ptr(-EINVAL); + } if (!bpf_map__is_struct_ops(map)) { pr_warn("map '%s': can't attach non-struct_ops map\n", map->name); @@ -13503,7 +13510,9 @@ struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map) return &link->link; } - fd = bpf_link_create(map->fd, 0, BPF_STRUCT_OPS, NULL); + link_opts.flags = OPTS_GET(opts, flags, 0); + target_fd = OPTS_GET(opts, target_fd, 0); + fd = bpf_link_create(map->fd, target_fd, BPF_STRUCT_OPS, &link_opts); if (fd < 0) { free(link); return libbpf_err_ptr(fd); @@ -13515,6 +13524,11 @@ struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map) return &link->link; } +struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map) +{ + return bpf_map__attach_struct_ops_opts(map, NULL); +} + /* * Swap the back struct_ops of a link with a new struct_ops map. */ diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h index dfc37a615578..2c28cf80e7fe 100644 --- a/tools/lib/bpf/libbpf.h +++ b/tools/lib/bpf/libbpf.h @@ -920,6 +920,20 @@ bpf_program__attach_cgroup_opts(const struct bpf_program *prog, int cgroup_fd, struct bpf_map; LIBBPF_API struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map); + +struct bpf_struct_ops_opts { + /* size of this struct, for forward/backward compatibility */ + size_t sz; + __u32 flags; + __u32 target_fd; + __u64 expected_revision; + size_t :0; +}; +#define bpf_struct_ops_opts__last_field expected_revision + +LIBBPF_API struct bpf_link * +bpf_map__attach_struct_ops_opts(const struct bpf_map *map, + const struct bpf_struct_ops_opts *opts); LIBBPF_API int bpf_link__update_map(struct bpf_link *link, const struct bpf_map *map); struct bpf_iter_attach_opts { diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map index d18fbcea7578..4779190c97b6 100644 --- a/tools/lib/bpf/libbpf.map +++ b/tools/lib/bpf/libbpf.map @@ -454,4 +454,5 @@ LIBBPF_1.7.0 { bpf_prog_assoc_struct_ops; bpf_program__assoc_struct_ops; btf__permute; + bpf_map__attach_struct_ops_opts; } LIBBPF_1.6.0; -- 2.52.0
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:07 -0800", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
Struct oom_control is used to describe the OOM context. It's memcg field defines the scope of OOM: it's NULL for global OOMs and a valid memcg pointer for memcg-scoped OOMs. Teach bpf verifier to recognize it as trusted or NULL pointer. It will provide the bpf OOM handler a trusted memcg pointer, which for example is required for iterating the memcg's subtree. Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> --- kernel/bpf/verifier.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index c2f2650db9fd..cca36edb460d 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -7242,6 +7242,10 @@ BTF_TYPE_SAFE_TRUSTED_OR_NULL(struct vm_area_struct) { struct file *vm_file; }; +BTF_TYPE_SAFE_TRUSTED_OR_NULL(struct oom_control) { + struct mem_cgroup *memcg; +}; + static bool type_is_rcu(struct bpf_verifier_env *env, struct bpf_reg_state *reg, const char *field_name, u32 btf_id) @@ -7284,6 +7288,7 @@ static bool type_is_trusted_or_null(struct bpf_verifier_env *env, BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED_OR_NULL(struct socket)); BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED_OR_NULL(struct dentry)); BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED_OR_NULL(struct vm_area_struct)); + BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED_OR_NULL(struct oom_control)); return btf_nested_type_is_trusted(&env->log, reg, field_name, btf_id, "__safe_trusted_or_null"); -- 2.52.0
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:08 -0800", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
mem_cgroup_get_from_ino() can be reused by the BPF OOM implementation, but currently depends on CONFIG_SHRINKER_DEBUG. Remove this dependency. Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev> Acked-by: Michal Hocko <mhocko@suse.com> --- include/linux/memcontrol.h | 4 ++-- mm/memcontrol.c | 2 -- 2 files changed, 2 insertions(+), 4 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 229ac9835adb..f3b8c71870d8 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -833,9 +833,9 @@ static inline unsigned long mem_cgroup_ino(struct mem_cgroup *memcg) { return memcg ? cgroup_ino(memcg->css.cgroup) : 0; } +#endif struct mem_cgroup *mem_cgroup_get_from_ino(unsigned long ino); -#endif static inline struct mem_cgroup *mem_cgroup_from_seq(struct seq_file *m) { @@ -1298,12 +1298,12 @@ static inline unsigned long mem_cgroup_ino(struct mem_cgroup *memcg) { return 0; } +#endif static inline struct mem_cgroup *mem_cgroup_get_from_ino(unsigned long ino) { return NULL; } -#endif static inline struct mem_cgroup *mem_cgroup_from_seq(struct seq_file *m) { diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 3808845bc8cc..1f74fce27677 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -3658,7 +3658,6 @@ struct mem_cgroup *mem_cgroup_from_id(unsigned short id) return xa_load(&mem_cgroup_ids, id); } -#ifdef CONFIG_SHRINKER_DEBUG struct mem_cgroup *mem_cgroup_get_from_ino(unsigned long ino) { struct cgroup *cgrp; @@ -3679,7 +3678,6 @@ struct mem_cgroup *mem_cgroup_get_from_ino(unsigned long ino) return memcg; } -#endif static void free_mem_cgroup_per_node_info(struct mem_cgroup_per_node *pn) { -- 2.52.0
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:09 -0800", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
Introduce a bpf struct ops for implementing custom OOM handling policies. It's possible to load one bpf_oom_ops for the system and one bpf_oom_ops for every memory cgroup. In case of a memcg OOM, the cgroup tree is traversed from the OOM'ing memcg up to the root and corresponding BPF OOM handlers are executed until some memory is freed. If no memory is freed, the kernel OOM killer is invoked. The struct ops provides the bpf_handle_out_of_memory() callback, which expected to return 1 if it was able to free some memory and 0 otherwise. If 1 is returned, the kernel also checks the bpf_memory_freed field of the oom_control structure, which is expected to be set by kfuncs suitable for releasing memory (which will be introduced later in the patch series). If both are set, OOM is considered handled, otherwise the next OOM handler in the chain is executed: e.g. BPF OOM attached to the parent cgroup or the kernel OOM killer. The bpf_handle_out_of_memory() callback program is sleepable to allow using iterators, e.g. cgroup iterators. The callback receives struct oom_control as an argument, so it can determine the scope of the OOM event: if this is a memcg-wide or system-wide OOM. It also receives bpf_struct_ops_link as the second argument, so it can detect the cgroup level at which this specific instance is attached. The bpf_handle_out_of_memory() callback is executed just before the kernel victim task selection algorithm, so all heuristics and sysctls like panic on oom, sysctl_oom_kill_allocating_task and sysctl_oom_kill_allocating_task are respected. The struct ops has the name field, which allows to define a custom name for the implemented policy. It's printed in the OOM report in the oom_handler=<name> format only if a bpf handler is invoked. Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev> --- MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 3 + include/linux/bpf.h | 1 + include/linux/bpf_oom.h | 46 ++++++++ include/linux/oom.h | 8 ++ kernel/bpf/bpf_struct_ops.c | 12 +- mm/Makefile | 2 +- mm/bpf_oom.c | 192 ++++++++++++++++++++++++++++++++ mm/oom_kill.c | 19 ++++ 9 files changed, 282 insertions(+), 3 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 mm/bpf_oom.c diff --git a/MAINTAINERS b/MAINTAINERS index 491d567f7dc8..53465570c1e5 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4807,7 +4807,9 @@ M: Shakeel Butt <shakeel.butt@linux.dev> L: bpf@vger.kernel.org L: linux-mm@kvack.org S: Maintained +F: include/linux/bpf_oom.h F: mm/bpf_memcontrol.c +F: mm/bpf_oom.c BPF [MISC] L: bpf@vger.kernel.org diff --git a/include/linux/bpf-cgroup-defs.h b/include/linux/bpf-cgroup-defs.h index 6c5e37190dad..52395834ce13 100644 --- a/include/linux/bpf-cgroup-defs.h +++ b/include/linux/bpf-cgroup-defs.h @@ -74,6 +74,9 @@ struct cgroup_bpf { /* list of bpf struct ops links */ struct list_head struct_ops_links; + /* BPF OOM struct ops link */ + struct bpf_struct_ops_link __rcu *bpf_oom_link; + /* reference counter used to detach bpf programs after cgroup removal */ struct percpu_ref refcnt; diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 391888eb257c..a5cee5a657b0 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -3944,6 +3944,7 @@ static inline bool bpf_is_subprog(const struct bpf_prog *prog) int bpf_prog_get_file_line(struct bpf_prog *prog, unsigned long ip, const char **filep, const char **linep, int *nump); struct bpf_prog *bpf_prog_find_from_stack(void); +void *bpf_struct_ops_data(struct bpf_map *map); int bpf_insn_array_init(struct bpf_map *map, const struct bpf_prog *prog); int bpf_insn_array_ready(struct bpf_map *map); diff --git a/include/linux/bpf_oom.h b/include/linux/bpf_oom.h new file mode 100644 index 000000000000..c81133145c50 --- /dev/null +++ b/include/linux/bpf_oom.h @@ -0,0 +1,46 @@ +/* SPDX-License-Identifier: GPL-2.0+ */ + +#ifndef __BPF_OOM_H +#define __BPF_OOM_H + +struct oom_control; + +#define BPF_OOM_NAME_MAX_LEN 64 + +struct bpf_oom_ops { + /** + * @handle_out_of_memory: Out of memory bpf handler, called before + * the in-kernel OOM killer. + * @oc: OOM control structure + * @st_link: struct ops link + * + * Should return 1 if some memory was freed up, otherwise + * the in-kernel OOM killer is invoked. + */ + int (*handle_out_of_memory)(struct oom_control *oc, + struct bpf_struct_ops_link *st_link); + + /** + * @name: BPF OOM policy name + */ + char name[BPF_OOM_NAME_MAX_LEN]; +}; + +#ifdef CONFIG_BPF_SYSCALL +/** + * @bpf_handle_oom: handle out of memory condition using bpf + * @oc: OOM control structure + * + * Returns true if some memory was freed. + */ +bool bpf_handle_oom(struct oom_control *oc); + +#else /* CONFIG_BPF_SYSCALL */ +static inline bool bpf_handle_oom(struct oom_control *oc) +{ + return false; +} + +#endif /* CONFIG_BPF_SYSCALL */ + +#endif /* __BPF_OOM_H */ diff --git a/include/linux/oom.h b/include/linux/oom.h index 7b02bc1d0a7e..c2dce336bcb4 100644 --- a/include/linux/oom.h +++ b/include/linux/oom.h @@ -51,6 +51,14 @@ struct oom_control { /* Used to print the constraint info. */ enum oom_constraint constraint; + +#ifdef CONFIG_BPF_SYSCALL + /* Used by the bpf oom implementation to mark the forward progress */ + bool bpf_memory_freed; + + /* Handler name */ + const char *bpf_handler_name; +#endif }; extern struct mutex oom_lock; diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c index 2e361e22cfa0..6285a6d56b98 100644 --- a/kernel/bpf/bpf_struct_ops.c +++ b/kernel/bpf/bpf_struct_ops.c @@ -1009,7 +1009,7 @@ static void bpf_struct_ops_map_free(struct bpf_map *map) * in the tramopline image to finish before releasing * the trampoline image. */ - synchronize_rcu_mult(call_rcu, call_rcu_tasks); + synchronize_rcu_mult(call_rcu, call_rcu_tasks, call_rcu_tasks_trace); __bpf_struct_ops_map_free(map); } @@ -1226,7 +1226,8 @@ static void bpf_struct_ops_map_link_dealloc(struct bpf_link *link) if (st_link->cgroup) cgroup_bpf_detach_struct_ops(st_link->cgroup, st_link); - kfree(st_link); + synchronize_rcu_tasks_trace(); + kfree_rcu(st_link, link.rcu); } static void bpf_struct_ops_map_link_show_fdinfo(const struct bpf_link *link, @@ -1535,3 +1536,10 @@ void bpf_map_struct_ops_info_fill(struct bpf_map_info *info, struct bpf_map *map info->btf_vmlinux_id = btf_obj_id(st_map->btf); } + +void *bpf_struct_ops_data(struct bpf_map *map) +{ + struct bpf_struct_ops_map *st_map = (struct bpf_struct_ops_map *)map; + + return &st_map->kvalue.data; +} diff --git a/mm/Makefile b/mm/Makefile index bf46fe31dc14..e939525ba01b 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -107,7 +107,7 @@ ifdef CONFIG_SWAP obj-$(CONFIG_MEMCG) += swap_cgroup.o endif ifdef CONFIG_BPF_SYSCALL -obj-$(CONFIG_MEMCG) += bpf_memcontrol.o +obj-$(CONFIG_MEMCG) += bpf_memcontrol.o bpf_oom.o endif obj-$(CONFIG_CGROUP_HUGETLB) += hugetlb_cgroup.o obj-$(CONFIG_GUP_TEST) += gup_test.o diff --git a/mm/bpf_oom.c b/mm/bpf_oom.c new file mode 100644 index 000000000000..ea70be6e2c26 --- /dev/null +++ b/mm/bpf_oom.c @@ -0,0 +1,192 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * BPF-driven OOM killer customization + * + * Author: Roman Gushchin <roman.gushchin@linux.dev> + */ + +#include <linux/bpf.h> +#include <linux/oom.h> +#include <linux/bpf_oom.h> +#include <linux/bpf-cgroup.h> +#include <linux/cgroup.h> +#include <linux/memcontrol.h> +#include <linux/uaccess.h> + +static int bpf_ops_handle_oom(struct bpf_oom_ops *bpf_oom_ops, + struct bpf_struct_ops_link *st_link, + struct oom_control *oc) +{ + int ret; + + oc->bpf_handler_name = &bpf_oom_ops->name[0]; + oc->bpf_memory_freed = false; + pagefault_disable(); + ret = bpf_oom_ops->handle_out_of_memory(oc, st_link); + pagefault_enable(); + oc->bpf_handler_name = NULL; + + return ret; +} + +bool bpf_handle_oom(struct oom_control *oc) +{ + struct bpf_struct_ops_link *st_link; + struct bpf_oom_ops *bpf_oom_ops; + struct mem_cgroup *memcg; + struct bpf_map *map; + int ret = 0; + + /* + * System-wide OOMs are handled by the struct ops attached + * to the root memory cgroup + */ + memcg = oc->memcg ? oc->memcg : root_mem_cgroup; + + rcu_read_lock_trace(); + + /* Find the nearest bpf_oom_ops traversing the cgroup tree upwards */ + for (; memcg; memcg = parent_mem_cgroup(memcg)) { + st_link = rcu_dereference_check(memcg->css.cgroup->bpf.bpf_oom_link, + rcu_read_lock_trace_held()); + if (!st_link) + continue; + + map = rcu_dereference_check((st_link->map), + rcu_read_lock_trace_held()); + if (!map) + continue; + + /* Call BPF OOM handler */ + bpf_oom_ops = bpf_struct_ops_data(map); + ret = bpf_ops_handle_oom(bpf_oom_ops, st_link, oc); + if (ret && oc->bpf_memory_freed) + break; + ret = 0; + } + + rcu_read_unlock_trace(); + + return ret && oc->bpf_memory_freed; +} + +static int __handle_out_of_memory(struct oom_control *oc, + struct bpf_struct_ops_link *st_link) +{ + return 0; +} + +static struct bpf_oom_ops __bpf_oom_ops = { + .handle_out_of_memory = __handle_out_of_memory, +}; + +static const struct bpf_func_proto * +bpf_oom_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) +{ + return tracing_prog_func_proto(func_id, prog); +} + +static bool bpf_oom_ops_is_valid_access(int off, int size, + enum bpf_access_type type, + const struct bpf_prog *prog, + struct bpf_insn_access_aux *info) +{ + return bpf_tracing_btf_ctx_access(off, size, type, prog, info); +} + +static const struct bpf_verifier_ops bpf_oom_verifier_ops = { + .get_func_proto = bpf_oom_func_proto, + .is_valid_access = bpf_oom_ops_is_valid_access, +}; + +static int bpf_oom_ops_reg(void *kdata, struct bpf_link *link) +{ + struct bpf_struct_ops_link *st_link = (struct bpf_struct_ops_link *)link; + struct cgroup *cgrp; + + /* The link is not yet fully initialized, but cgroup should be set */ + if (!link) + return -EOPNOTSUPP; + + cgrp = st_link->cgroup; + if (!cgrp) + return -EINVAL; + + if (cmpxchg(&cgrp->bpf.bpf_oom_link, NULL, st_link)) + return -EEXIST; + + return 0; +} + +static void bpf_oom_ops_unreg(void *kdata, struct bpf_link *link) +{ + struct bpf_struct_ops_link *st_link = (struct bpf_struct_ops_link *)link; + struct cgroup *cgrp; + + if (!link) + return; + + cgrp = st_link->cgroup; + if (!cgrp) + return; + + WARN_ON(cmpxchg(&cgrp->bpf.bpf_oom_link, st_link, NULL) != st_link); +} + +static int bpf_oom_ops_check_member(const struct btf_type *t, + const struct btf_member *member, + const struct bpf_prog *prog) +{ + u32 moff = __btf_member_bit_offset(t, member) / 8; + + switch (moff) { + case offsetof(struct bpf_oom_ops, handle_out_of_memory): + if (!prog) + return -EINVAL; + break; + } + + return 0; +} + +static int bpf_oom_ops_init_member(const struct btf_type *t, + const struct btf_member *member, + void *kdata, const void *udata) +{ + const struct bpf_oom_ops *uops = udata; + struct bpf_oom_ops *ops = kdata; + u32 moff = __btf_member_bit_offset(t, member) / 8; + + switch (moff) { + case offsetof(struct bpf_oom_ops, name): + if (uops->name[0]) + strscpy_pad(ops->name, uops->name, sizeof(ops->name)); + else + strscpy_pad(ops->name, "bpf_defined_policy"); + return 1; + } + return 0; +} + +static int bpf_oom_ops_init(struct btf *btf) +{ + return 0; +} + +static struct bpf_struct_ops bpf_oom_bpf_ops = { + .verifier_ops = &bpf_oom_verifier_ops, + .reg = bpf_oom_ops_reg, + .unreg = bpf_oom_ops_unreg, + .check_member = bpf_oom_ops_check_member, + .init_member = bpf_oom_ops_init_member, + .init = bpf_oom_ops_init, + .name = "bpf_oom_ops", + .owner = THIS_MODULE, + .cfi_stubs = &__bpf_oom_ops +}; + +static int __init bpf_oom_struct_ops_init(void) +{ + return register_bpf_struct_ops(&bpf_oom_bpf_ops, bpf_oom_ops); +} +late_initcall(bpf_oom_struct_ops_init); diff --git a/mm/oom_kill.c b/mm/oom_kill.c index 5eb11fbba704..44bbcf033804 100644 --- a/mm/oom_kill.c +++ b/mm/oom_kill.c @@ -45,6 +45,7 @@ #include <linux/mmu_notifier.h> #include <linux/cred.h> #include <linux/nmi.h> +#include <linux/bpf_oom.h> #include <asm/tlb.h> #include "internal.h" @@ -246,6 +247,15 @@ static const char * const oom_constraint_text[] = { [CONSTRAINT_MEMCG] = "CONSTRAINT_MEMCG", }; +static const char *oom_handler_name(struct oom_control *oc) +{ +#ifdef CONFIG_BPF_SYSCALL + if (oc->bpf_handler_name) + return oc->bpf_handler_name; +#endif + return NULL; +} + /* * Determine the type of allocation constraint. */ @@ -461,6 +471,8 @@ static void dump_header(struct oom_control *oc) pr_warn("%s invoked oom-killer: gfp_mask=%#x(%pGg), order=%d, oom_score_adj=%hd\n", current->comm, oc->gfp_mask, &oc->gfp_mask, oc->order, current->signal->oom_score_adj); + if (oom_handler_name(oc)) + pr_warn("oom bpf handler: %s\n", oom_handler_name(oc)); if (!IS_ENABLED(CONFIG_COMPACTION) && oc->order) pr_warn("COMPACTION is disabled!!!\n"); @@ -1168,6 +1180,13 @@ bool out_of_memory(struct oom_control *oc) return true; } + /* + * Let bpf handle the OOM first. If it was able to free up some memory, + * bail out. Otherwise fall back to the kernel OOM killer. + */ + if (bpf_handle_oom(oc)) + return true; + select_bad_process(oc); /* Found nothing?!?! */ if (!oc->chosen) { -- 2.52.0
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:10 -0800", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
Introduce bpf_oom_kill_process() bpf kfunc, which is supposed to be used by BPF OOM programs. It allows to kill a process in exactly the same way the OOM killer does: using the OOM reaper, bumping corresponding memcg and global statistics, respecting memory.oom.group etc. On success, it sets the oom_control's bpf_memory_freed field to true, enabling the bpf program to bypass the kernel OOM killer. Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev> --- mm/oom_kill.c | 80 +++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 80 insertions(+) diff --git a/mm/oom_kill.c b/mm/oom_kill.c index 44bbcf033804..09897597907f 100644 --- a/mm/oom_kill.c +++ b/mm/oom_kill.c @@ -46,6 +46,7 @@ #include <linux/cred.h> #include <linux/nmi.h> #include <linux/bpf_oom.h> +#include <linux/btf.h> #include <asm/tlb.h> #include "internal.h" @@ -1290,3 +1291,82 @@ SYSCALL_DEFINE2(process_mrelease, int, pidfd, unsigned int, flags) return -ENOSYS; #endif /* CONFIG_MMU */ } + +#ifdef CONFIG_BPF_SYSCALL + +__bpf_kfunc_start_defs(); +/** + * bpf_oom_kill_process - Kill a process as OOM killer + * @oc: pointer to oom_control structure, describes OOM context + * @task: task to be killed + * @message__str: message to print in dmesg + * + * Kill a process in a way similar to the kernel OOM killer. + * This means dump the necessary information to dmesg, adjust memcg + * statistics, leverage the oom reaper, respect memory.oom.group etc. + * + * bpf_oom_kill_process() marks the forward progress by setting + * oc->bpf_memory_freed. If the progress was made, the bpf program + * is free to decide if the kernel oom killer should be invoked. + * Otherwise it's enforced, so that a bad bpf program can't + * deadlock the machine on memory. + */ +__bpf_kfunc int bpf_oom_kill_process(struct oom_control *oc, + struct task_struct *task, + const char *message__str) +{ + if (oom_unkillable_task(task)) + return -EPERM; + + if (task->signal->oom_score_adj == OOM_SCORE_ADJ_MIN) + return -EINVAL; + + /* paired with put_task_struct() in oom_kill_process() */ + get_task_struct(task); + + oc->chosen = task; + + oom_kill_process(oc, message__str); + + oc->chosen = NULL; + oc->bpf_memory_freed = true; + + return 0; +} + +__bpf_kfunc_end_defs(); + +BTF_KFUNCS_START(bpf_oom_kfuncs) +BTF_ID_FLAGS(func, bpf_oom_kill_process, KF_SLEEPABLE) +BTF_KFUNCS_END(bpf_oom_kfuncs) + +BTF_ID_LIST_SINGLE(bpf_oom_ops_ids, struct, bpf_oom_ops) + +static int bpf_oom_kfunc_filter(const struct bpf_prog *prog, u32 kfunc_id) +{ + if (prog->type != BPF_PROG_TYPE_STRUCT_OPS || + prog->aux->attach_btf_id != bpf_oom_ops_ids[0]) + return -EACCES; + return 0; +} + +static const struct btf_kfunc_id_set bpf_oom_kfunc_set = { + .owner = THIS_MODULE, + .set = &bpf_oom_kfuncs, + .filter = bpf_oom_kfunc_filter, +}; + +static int __init bpf_oom_init(void) +{ + int err; + + err = register_btf_kfunc_id_set(BPF_PROG_TYPE_STRUCT_OPS, + &bpf_oom_kfunc_set); + if (err) + pr_warn("error while registering bpf oom kfuncs: %d", err); + + return err; +} +late_initcall(bpf_oom_init); + +#endif -- 2.52.0
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:11 -0800", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
Introduce bpf_out_of_memory() bpf kfunc, which allows to declare an out of memory events and trigger the corresponding kernel OOM handling mechanism. It takes a trusted memcg pointer (or NULL for system-wide OOMs) as an argument, as well as the page order. If the BPF_OOM_FLAGS_WAIT_ON_OOM_LOCK flag is not set, only one OOM can be declared and handled in the system at once, so if the function is called in parallel to another OOM handling, it bails out with -EBUSY. This mode is suited for global OOM's: any concurrent OOMs will likely do the job and release some memory. In a blocking mode (which is suited for memcg OOMs) the execution will wait on the oom_lock mutex. The function is declared as sleepable. It guarantees that it won't be called from an atomic context. It's required by the OOM handling code, which shouldn't be called from a non-blocking context. Handling of a memcg OOM almost always requires taking of the css_set_lock spinlock. The fact that bpf_out_of_memory() is sleepable also guarantees that it can't be called with acquired css_set_lock, so the kernel can't deadlock on it. To avoid deadlocks on the oom lock, the function is filtered out for bpf oom struct ops programs and all tracing programs. Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev> --- include/linux/oom.h | 5 +++ mm/oom_kill.c | 85 +++++++++++++++++++++++++++++++++++++++++++-- 2 files changed, 88 insertions(+), 2 deletions(-) diff --git a/include/linux/oom.h b/include/linux/oom.h index c2dce336bcb4..851dba9287b5 100644 --- a/include/linux/oom.h +++ b/include/linux/oom.h @@ -21,6 +21,11 @@ enum oom_constraint { CONSTRAINT_MEMCG, }; +enum bpf_oom_flags { + BPF_OOM_FLAGS_WAIT_ON_OOM_LOCK = 1 << 0, + BPF_OOM_FLAGS_LAST = 1 << 1, +}; + /* * Details of the page allocation that triggered the oom killer that are used to * determine what should be killed. diff --git a/mm/oom_kill.c b/mm/oom_kill.c index 09897597907f..8f63a370b8f5 100644 --- a/mm/oom_kill.c +++ b/mm/oom_kill.c @@ -1334,6 +1334,53 @@ __bpf_kfunc int bpf_oom_kill_process(struct oom_control *oc, return 0; } +/** + * bpf_out_of_memory - declare Out Of Memory state and invoke OOM killer + * @memcg__nullable: memcg or NULL for system-wide OOMs + * @order: order of page which wasn't allocated + * @flags: flags + * + * Declares the Out Of Memory state and invokes the OOM killer. + * + * OOM handlers are synchronized using the oom_lock mutex. If wait_on_oom_lock + * is true, the function will wait on it. Otherwise it bails out with -EBUSY + * if oom_lock is contended. + * + * Generally it's advised to pass wait_on_oom_lock=false for global OOMs + * and wait_on_oom_lock=true for memcg-scoped OOMs. + * + * Returns 1 if the forward progress was achieved and some memory was freed. + * Returns a negative value if an error occurred. + */ +__bpf_kfunc int bpf_out_of_memory(struct mem_cgroup *memcg__nullable, + int order, u64 flags) +{ + struct oom_control oc = { + .memcg = memcg__nullable, + .gfp_mask = GFP_KERNEL, + .order = order, + }; + int ret; + + if (flags & ~(BPF_OOM_FLAGS_LAST - 1)) + return -EINVAL; + + if (oc.order < 0 || oc.order > MAX_PAGE_ORDER) + return -EINVAL; + + if (flags & BPF_OOM_FLAGS_WAIT_ON_OOM_LOCK) { + ret = mutex_lock_killable(&oom_lock); + if (ret) + return ret; + } else if (!mutex_trylock(&oom_lock)) + return -EBUSY; + + ret = out_of_memory(&oc); + + mutex_unlock(&oom_lock); + return ret; +} + __bpf_kfunc_end_defs(); BTF_KFUNCS_START(bpf_oom_kfuncs) @@ -1356,14 +1403,48 @@ static const struct btf_kfunc_id_set bpf_oom_kfunc_set = { .filter = bpf_oom_kfunc_filter, }; +BTF_KFUNCS_START(bpf_declare_oom_kfuncs) +BTF_ID_FLAGS(func, bpf_out_of_memory, KF_SLEEPABLE) +BTF_KFUNCS_END(bpf_declare_oom_kfuncs) + +static int bpf_declare_oom_kfunc_filter(const struct bpf_prog *prog, u32 kfunc_id) +{ + if (!btf_id_set8_contains(&bpf_declare_oom_kfuncs, kfunc_id)) + return 0; + + if (prog->type == BPF_PROG_TYPE_STRUCT_OPS && + prog->aux->attach_btf_id == bpf_oom_ops_ids[0]) + return -EACCES; + + if (prog->type == BPF_PROG_TYPE_TRACING) + return -EACCES; + + return 0; +} + +static const struct btf_kfunc_id_set bpf_declare_oom_kfunc_set = { + .owner = THIS_MODULE, + .set = &bpf_declare_oom_kfuncs, + .filter = bpf_declare_oom_kfunc_filter, +}; + static int __init bpf_oom_init(void) { int err; err = register_btf_kfunc_id_set(BPF_PROG_TYPE_STRUCT_OPS, &bpf_oom_kfunc_set); - if (err) - pr_warn("error while registering bpf oom kfuncs: %d", err); + if (err) { + pr_warn("error while registering struct_ops bpf oom kfuncs: %d", err); + return err; + } + + err = register_btf_kfunc_id_set(BPF_PROG_TYPE_UNSPEC, + &bpf_declare_oom_kfunc_set); + if (err) { + pr_warn("error while registering unspec bpf oom kfuncs: %d", err); + return err; + } return err; } -- 2.52.0
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:12 -0800", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
Export tsk_is_oom_victim() helper as a BPF kfunc. It's very useful to avoid redundant oom kills. Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev> Suggested-by: Michal Hocko <mhocko@suse.com> --- mm/oom_kill.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/mm/oom_kill.c b/mm/oom_kill.c index 8f63a370b8f5..53f9f9674658 100644 --- a/mm/oom_kill.c +++ b/mm/oom_kill.c @@ -1381,10 +1381,24 @@ __bpf_kfunc int bpf_out_of_memory(struct mem_cgroup *memcg__nullable, return ret; } +/** + * bpf_task_is_oom_victim - Check if the task has been marked as an OOM victim + * @task: task to check + * + * Returns true if the task has been previously selected by the OOM killer + * to be killed. It's expected that the task will be destroyed soon and some + * memory will be freed, so maybe no additional actions required. + */ +__bpf_kfunc bool bpf_task_is_oom_victim(struct task_struct *task) +{ + return tsk_is_oom_victim(task); +} + __bpf_kfunc_end_defs(); BTF_KFUNCS_START(bpf_oom_kfuncs) BTF_ID_FLAGS(func, bpf_oom_kill_process, KF_SLEEPABLE) +BTF_ID_FLAGS(func, bpf_task_is_oom_victim) BTF_KFUNCS_END(bpf_oom_kfuncs) BTF_ID_LIST_SINGLE(bpf_oom_ops_ids, struct, bpf_oom_ops) -- 2.52.0
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:13 -0800", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
Implement read_cgroup_file() helper to read from cgroup control files, e.g. statistics. Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev> --- tools/testing/selftests/bpf/cgroup_helpers.c | 45 ++++++++++++++++++++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 ++ 2 files changed, 48 insertions(+) diff --git a/tools/testing/selftests/bpf/cgroup_helpers.c b/tools/testing/selftests/bpf/cgroup_helpers.c index 20cede4db3ce..fc5f22409ce5 100644 --- a/tools/testing/selftests/bpf/cgroup_helpers.c +++ b/tools/testing/selftests/bpf/cgroup_helpers.c @@ -126,6 +126,51 @@ int enable_controllers(const char *relative_path, const char *controllers) return __enable_controllers(cgroup_path, controllers); } +static ssize_t __read_cgroup_file(const char *cgroup_path, const char *file, + char *buf, size_t size) +{ + char file_path[PATH_MAX + 1]; + ssize_t ret; + int fd; + + snprintf(file_path, sizeof(file_path), "%s/%s", cgroup_path, file); + fd = open(file_path, O_RDONLY); + if (fd < 0) { + log_err("Opening %s", file_path); + return -1; + } + + ret = read(fd, buf, size); + if (ret < 0) { + close(fd); + log_err("Reading %s", file_path); + return -1; + } + + close(fd); + return ret; +} + +/** + * read_cgroup_file() - Read from a cgroup file + * @relative_path: The cgroup path, relative to the workdir + * @file: The name of the file in cgroupfs to read from + * @buf: Buffer to read from the file + * @size: Size of the buffer + * + * Read from a file in the given cgroup's directory. + * + * If successful, the number of read bytes is returned. + */ +ssize_t read_cgroup_file(const char *relative_path, const char *file, + char *buf, size_t size) +{ + char cgroup_path[PATH_MAX - 24]; + + format_cgroup_path(cgroup_path, relative_path); + return __read_cgroup_file(cgroup_path, file, buf, size); +} + static int __write_cgroup_file(const char *cgroup_path, const char *file, const char *buf) { diff --git a/tools/testing/selftests/bpf/cgroup_helpers.h b/tools/testing/selftests/bpf/cgroup_helpers.h index 3857304be874..66a08b64838b 100644 --- a/tools/testing/selftests/bpf/cgroup_helpers.h +++ b/tools/testing/selftests/bpf/cgroup_helpers.h @@ -4,6 +4,7 @@ #include <errno.h> #include <string.h> +#include <sys/types.h> #define clean_errno() (errno == 0 ? "None" : strerror(errno)) #define log_err(MSG, ...) fprintf(stderr, "(%s:%d: errno: %s) " MSG "\n", \ @@ -11,6 +12,8 @@ /* cgroupv2 related */ int enable_controllers(const char *relative_path, const char *controllers); +ssize_t read_cgroup_file(const char *relative_path, const char *file, + char *buf, size_t size); int write_cgroup_file(const char *relative_path, const char *file, const char *buf); int write_cgroup_file_parent(const char *relative_path, const char *file, -- 2.52.0
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:14 -0800", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
Implement a kselftest for the OOM handling functionality. The OOM handling policy which is implemented in BPF is to kill all tasks belonging to the biggest leaf cgroup, which doesn't contain unkillable tasks (tasks with oom_score_adj set to -1000). Pagecache size is excluded from the accounting. The test creates a hierarchy of memory cgroups, causes an OOM at the top level, checks that the expected process is killed and verifies the memcg's oom statistics. The same BPF OOM policy is attached to a memory cgroup and system-wide. In the first case the program does nothing and returns false, so it's executed the second time, when it properly handles the OOM. Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev> --- .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ 2 files changed, 367 insertions(+) create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c diff --git a/tools/testing/selftests/bpf/prog_tests/test_oom.c b/tools/testing/selftests/bpf/prog_tests/test_oom.c new file mode 100644 index 000000000000..a1eadbe1ae83 --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/test_oom.c @@ -0,0 +1,256 @@ +// SPDX-License-Identifier: GPL-2.0-only +#include <test_progs.h> +#include <bpf/btf.h> +#include <bpf/bpf.h> + +#include "cgroup_helpers.h" +#include "test_oom.skel.h" + +struct cgroup_desc { + const char *path; + int fd; + unsigned long long id; + int pid; + size_t target; + size_t max; + int oom_score_adj; + bool victim; +}; + +#define MB (1024 * 1024) +#define OOM_SCORE_ADJ_MIN (-1000) +#define OOM_SCORE_ADJ_MAX 1000 + +static struct cgroup_desc cgroups[] = { + { .path = "/oom_test", .max = 80 * MB}, + { .path = "/oom_test/cg1", .target = 10 * MB, + .oom_score_adj = OOM_SCORE_ADJ_MAX }, + { .path = "/oom_test/cg2", .target = 40 * MB, + .oom_score_adj = OOM_SCORE_ADJ_MIN }, + { .path = "/oom_test/cg3" }, + { .path = "/oom_test/cg3/cg4", .target = 30 * MB, + .victim = true }, + { .path = "/oom_test/cg3/cg5", .target = 20 * MB }, +}; + +static int spawn_task(struct cgroup_desc *desc) +{ + char *ptr; + int pid; + + pid = fork(); + if (pid < 0) + return pid; + + if (pid > 0) { + /* parent */ + desc->pid = pid; + return 0; + } + + /* child */ + if (desc->oom_score_adj) { + char buf[64]; + int fd = open("/proc/self/oom_score_adj", O_WRONLY); + + if (fd < 0) + return -1; + + snprintf(buf, sizeof(buf), "%d", desc->oom_score_adj); + write(fd, buf, sizeof(buf)); + close(fd); + } + + ptr = (char *)malloc(desc->target); + if (!ptr) + return -ENOMEM; + + memset(ptr, 'a', desc->target); + + while (1) + sleep(1000); + + return 0; +} + +static void setup_environment(void) +{ + int i, err; + + err = setup_cgroup_environment(); + if (!ASSERT_OK(err, "setup_cgroup_environment")) + goto cleanup; + + for (i = 0; i < ARRAY_SIZE(cgroups); i++) { + cgroups[i].fd = create_and_get_cgroup(cgroups[i].path); + if (!ASSERT_GE(cgroups[i].fd, 0, "create_and_get_cgroup")) + goto cleanup; + + cgroups[i].id = get_cgroup_id(cgroups[i].path); + if (!ASSERT_GT(cgroups[i].id, 0, "get_cgroup_id")) + goto cleanup; + + /* Freeze the top-level cgroup */ + if (i == 0) { + /* Freeze the top-level cgroup */ + err = write_cgroup_file(cgroups[i].path, "cgroup.freeze", "1"); + if (!ASSERT_OK(err, "freeze cgroup")) + goto cleanup; + } + + /* Recursively enable the memory controller */ + if (!cgroups[i].target) { + + err = write_cgroup_file(cgroups[i].path, "cgroup.subtree_control", + "+memory"); + if (!ASSERT_OK(err, "enable memory controller")) + goto cleanup; + } + + /* Set memory.max */ + if (cgroups[i].max) { + char buf[256]; + + snprintf(buf, sizeof(buf), "%lu", cgroups[i].max); + err = write_cgroup_file(cgroups[i].path, "memory.max", buf); + if (!ASSERT_OK(err, "set memory.max")) + goto cleanup; + + snprintf(buf, sizeof(buf), "0"); + write_cgroup_file(cgroups[i].path, "memory.swap.max", buf); + + } + + /* Spawn tasks creating memory pressure */ + if (cgroups[i].target) { + char buf[256]; + + err = spawn_task(&cgroups[i]); + if (!ASSERT_OK(err, "spawn task")) + goto cleanup; + + snprintf(buf, sizeof(buf), "%d", cgroups[i].pid); + err = write_cgroup_file(cgroups[i].path, "cgroup.procs", buf); + if (!ASSERT_OK(err, "put child into a cgroup")) + goto cleanup; + } + } + + return; + +cleanup: + cleanup_cgroup_environment(); + + // TODO return an error? +} + +static int run_and_wait_for_oom(void) +{ + int ret = -1; + bool first = true; + char buf[4096] = {}; + size_t size; + + /* Unfreeze the top-level cgroup */ + ret = write_cgroup_file(cgroups[0].path, "cgroup.freeze", "0"); + if (!ASSERT_OK(ret, "freeze cgroup")) + return -1; + + for (;;) { + int i, status; + pid_t pid = wait(&status); + + if (pid == -1) { + if (errno == EINTR) + continue; + /* ECHILD */ + break; + } + + if (!first) + continue; + + first = false; + + /* Check which process was terminated first */ + for (i = 0; i < ARRAY_SIZE(cgroups); i++) { + if (!ASSERT_OK(cgroups[i].victim != + (pid == cgroups[i].pid), + "correct process was killed")) { + ret = -1; + break; + } + + if (!cgroups[i].victim) + continue; + + /* Check the memcg oom counter */ + size = read_cgroup_file(cgroups[i].path, + "memory.events", + buf, sizeof(buf)); + if (!ASSERT_OK(size <= 0, "read memory.events")) { + ret = -1; + break; + } + + if (!ASSERT_OK(strstr(buf, "oom_kill 1") == NULL, + "oom_kill count check")) { + ret = -1; + break; + } + } + + /* Kill all remaining tasks */ + for (i = 0; i < ARRAY_SIZE(cgroups); i++) + if (cgroups[i].pid && cgroups[i].pid != pid) + kill(cgroups[i].pid, SIGKILL); + } + + return ret; +} + +void test_oom(void) +{ + DECLARE_LIBBPF_OPTS(bpf_struct_ops_opts, opts); + struct bpf_link *link1 = NULL, *link2 = NULL; + struct test_oom *skel; + int err = 0; + + setup_environment(); + + skel = test_oom__open_and_load(); + if (!skel) { + err = -errno; + CHECK_FAIL(err); + goto cleanup; + } + + opts.flags = BPF_F_CGROUP_FD; + opts.target_fd = cgroups[0].fd; + link1 = bpf_map__attach_struct_ops_opts(skel->maps.test_bpf_oom, &opts); + if (!link1) { + err = -errno; + CHECK_FAIL(err); + goto cleanup; + } + + opts.target_fd = get_root_cgroup(); + link2 = bpf_map__attach_struct_ops_opts(skel->maps.test_bpf_oom, &opts); + if (!link2) { + err = -errno; + CHECK_FAIL(err); + goto cleanup; + } + + /* Unfreeze all child tasks and create the memory pressure */ + err = run_and_wait_for_oom(); + CHECK_FAIL(err); + +cleanup: + bpf_link__destroy(link1); + bpf_link__destroy(link2); + write_cgroup_file(cgroups[0].path, "cgroup.kill", "1"); + write_cgroup_file(cgroups[0].path, "cgroup.freeze", "0"); + cleanup_cgroup_environment(); + test_oom__destroy(skel); +} diff --git a/tools/testing/selftests/bpf/progs/test_oom.c b/tools/testing/selftests/bpf/progs/test_oom.c new file mode 100644 index 000000000000..7ff354e416bc --- /dev/null +++ b/tools/testing/selftests/bpf/progs/test_oom.c @@ -0,0 +1,111 @@ +// SPDX-License-Identifier: GPL-2.0-only +#include "vmlinux.h" +#include <bpf/bpf_helpers.h> +#include <bpf/bpf_tracing.h> + +char _license[] SEC("license") = "GPL"; + +#define OOM_SCORE_ADJ_MIN (-1000) + +static bool mem_cgroup_killable(struct mem_cgroup *memcg) +{ + struct task_struct *task; + bool ret = true; + + bpf_for_each(css_task, task, &memcg->css, CSS_TASK_ITER_PROCS) + if (task->signal->oom_score_adj == OOM_SCORE_ADJ_MIN) + return false; + + return ret; +} + +/* + * Find the largest leaf cgroup (ignoring page cache) without unkillable tasks + * and kill all belonging tasks. + */ +SEC("struct_ops.s/handle_out_of_memory") +int BPF_PROG(test_out_of_memory, struct oom_control *oc, struct bpf_struct_ops_link *link) +{ + struct task_struct *task; + struct mem_cgroup *root_memcg = oc->memcg; + struct mem_cgroup *memcg, *victim = NULL; + struct cgroup_subsys_state *css_pos, *css; + unsigned long usage, max_usage = 0; + unsigned long pagecache = 0; + int ret = 0; + + if (root_memcg) + root_memcg = bpf_get_mem_cgroup(&root_memcg->css); + else + root_memcg = bpf_get_root_mem_cgroup(); + + if (!root_memcg) + return 0; + + css = &root_memcg->css; + if (css && css->cgroup == link->cgroup) + goto exit; + + bpf_rcu_read_lock(); + bpf_for_each(css, css_pos, &root_memcg->css, BPF_CGROUP_ITER_DESCENDANTS_POST) { + if (css_pos->cgroup->nr_descendants + css_pos->cgroup->nr_dying_descendants) + continue; + + memcg = bpf_get_mem_cgroup(css_pos); + if (!memcg) + continue; + + usage = bpf_mem_cgroup_usage(memcg); + pagecache = bpf_mem_cgroup_page_state(memcg, NR_FILE_PAGES); + + if (usage > pagecache) + usage -= pagecache; + else + usage = 0; + + if ((usage > max_usage) && mem_cgroup_killable(memcg)) { + max_usage = usage; + if (victim) + bpf_put_mem_cgroup(victim); + victim = bpf_get_mem_cgroup(&memcg->css); + } + + bpf_put_mem_cgroup(memcg); + } + bpf_rcu_read_unlock(); + + if (!victim) + goto exit; + + bpf_for_each(css_task, task, &victim->css, CSS_TASK_ITER_PROCS) { + struct task_struct *t = bpf_task_acquire(task); + + if (t) { + /* + * If the task is already an OOM victim, it will + * quit soon and release some memory. + */ + if (bpf_task_is_oom_victim(task)) { + bpf_task_release(t); + ret = 1; + break; + } + + bpf_oom_kill_process(oc, task, "bpf oom test"); + bpf_task_release(t); + ret = 1; + } + } + + bpf_put_mem_cgroup(victim); +exit: + bpf_put_mem_cgroup(root_memcg); + + return ret; +} + +SEC(".struct_ops.link") +struct bpf_oom_ops test_bpf_oom = { + .name = "bpf_test_policy", + .handle_out_of_memory = (void *)test_out_of_memory, +}; -- 2.52.0
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:15 -0800", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
Add a trace point to psi_avgs_work(). It can be used to attach a bpf handler which can monitor PSI values system-wide or for specific cgroup(s) and potentially perform some actions, e.g. declare an OOM. Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev> --- include/trace/events/psi.h | 27 +++++++++++++++++++++++++++ kernel/sched/psi.c | 6 ++++++ 2 files changed, 33 insertions(+) create mode 100644 include/trace/events/psi.h diff --git a/include/trace/events/psi.h b/include/trace/events/psi.h new file mode 100644 index 000000000000..57c46de18616 --- /dev/null +++ b/include/trace/events/psi.h @@ -0,0 +1,27 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#undef TRACE_SYSTEM +#define TRACE_SYSTEM psi + +#if !defined(_TRACE_PSI_H) || defined(TRACE_HEADER_MULTI_READ) +#define _TRACE_PSI_H + +#include <linux/tracepoint.h> + +TRACE_EVENT(psi_avgs_work, + TP_PROTO(struct psi_group *group), + TP_ARGS(group), + TP_STRUCT__entry( + __field(struct psi_group *, group) + ), + + TP_fast_assign( + __entry->group = group; + ), + + TP_printk("group=%p", __entry->group) +); + +#endif /* _TRACE_PSI_H */ + +/* This part must be outside protection */ +#include <trace/define_trace.h> diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c index 59fdb7ebbf22..72757ba2ed96 100644 --- a/kernel/sched/psi.c +++ b/kernel/sched/psi.c @@ -141,6 +141,10 @@ #include <linux/psi.h> #include "sched.h" +#define CREATE_TRACE_POINTS +#include <trace/events/psi.h> +#undef CREATE_TRACE_POINTS + static int psi_bug __read_mostly; DEFINE_STATIC_KEY_FALSE(psi_disabled); @@ -607,6 +611,8 @@ static void psi_avgs_work(struct work_struct *work) group->avg_next_update - now) + 1); } + trace_psi_avgs_work(group); + mutex_unlock(&group->avgs_lock); } -- 2.52.0
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:16 -0800", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
To allow a more efficient filtering of cgroups in the psi work tracepoint handler, let's add a u64 cgroup_id field to the psi_group structure. For system PSI, 0 will be used. Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev> --- include/linux/psi_types.h | 4 ++++ kernel/sched/psi.c | 1 + 2 files changed, 5 insertions(+) diff --git a/include/linux/psi_types.h b/include/linux/psi_types.h index dd10c22299ab..749a08d48abd 100644 --- a/include/linux/psi_types.h +++ b/include/linux/psi_types.h @@ -159,6 +159,10 @@ struct psi_trigger { struct psi_group { struct psi_group *parent; + + /* Cgroup id for cgroup PSI, 0 for system PSI */ + u64 cgroup_id; + bool enabled; /* Protects data used by the aggregator */ diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c index 72757ba2ed96..cf1ec4dc242b 100644 --- a/kernel/sched/psi.c +++ b/kernel/sched/psi.c @@ -1124,6 +1124,7 @@ int psi_cgroup_alloc(struct cgroup *cgroup) if (!cgroup->psi) return -ENOMEM; + cgroup->psi->cgroup_id = cgroup_id(cgroup); cgroup->psi->pcpu = alloc_percpu(struct psi_group_cpu); if (!cgroup->psi->pcpu) { kfree(cgroup->psi); -- 2.52.0
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:17 -0800", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
Allow calling bpf_out_of_memory() from a PSI tracepoint to enable PSI-based OOM killer policies. Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev> --- mm/oom_kill.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/mm/oom_kill.c b/mm/oom_kill.c index 53f9f9674658..276cf8a34449 100644 --- a/mm/oom_kill.c +++ b/mm/oom_kill.c @@ -1421,6 +1421,13 @@ BTF_KFUNCS_START(bpf_declare_oom_kfuncs) BTF_ID_FLAGS(func, bpf_out_of_memory, KF_SLEEPABLE) BTF_KFUNCS_END(bpf_declare_oom_kfuncs) +BTF_ID_LIST(bpf_oom_trace_ids) +#ifdef CONFIG_PSI +BTF_ID(typedef, btf_trace_psi_avgs_work) +#else +BTF_ID_UNUSED +#endif + static int bpf_declare_oom_kfunc_filter(const struct bpf_prog *prog, u32 kfunc_id) { if (!btf_id_set8_contains(&bpf_declare_oom_kfuncs, kfunc_id)) @@ -1430,7 +1437,8 @@ static int bpf_declare_oom_kfunc_filter(const struct bpf_prog *prog, u32 kfunc_i prog->aux->attach_btf_id == bpf_oom_ops_ids[0]) return -EACCES; - if (prog->type == BPF_PROG_TYPE_TRACING) + if (prog->type == BPF_PROG_TYPE_TRACING && + prog->aux->attach_btf_id != bpf_oom_trace_ids[0]) return -EACCES; return 0; -- 2.52.0
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Mon, 26 Jan 2026 18:44:18 -0800", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
[ ... ] ^^^^^^^^^^^^^^^^^^^^^^^^ This isn't a bug, but should this say "Buffer to store data read from the file" or "Buffer to read into"? The current wording suggests data is read from the buffer, but the code reads from the file into the buffer. Compare with write_cgroup_file() which says "Buffer to write to the file" - that phrasing works for writes but the inverse doesn't quite work for reads. [ ... ] --- AI reviewed your patch. Please fix the bug or email reply why it's not a bug. See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md CI run summary: https://github.com/kernel-patches/bpf/actions/runs/21382620878 AI-authorship-score: low AI-authorship-explanation: The code closely follows existing patterns in the file with a minor documentation error typical of copy-paste adaptation, suggesting human authorship. issues-found: 1 issue-severity-score: low issue-severity-explanation: Minor documentation error in test code where @buf description says the opposite of what the code does.
{ "author": "bot+bpf-ci@kernel.org", "date": "Tue, 27 Jan 2026 03:08:46 +0000 (UTC)", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
[ ... ] Can this code access st_link after it has been freed? Once st_link->cgroup is set to NULL, if another CPU concurrently closes the link FD, bpf_struct_ops_map_link_dealloc() will see st_link->cgroup as NULL, skip the cgroup_bpf_detach_struct_ops() call, and proceed to kfree(st_link). Then the bpf_link_inc_not_zero() call here would access freed memory. The race window is between setting st_link->cgroup = NULL and calling bpf_link_inc_not_zero(). During this window: CPU 0 (cgroup_bpf_release) CPU 1 (close link FD) --------------------------- --------------------- st_link->cgroup = NULL bpf_link_put_direct() bpf_link_free() bpf_struct_ops_map_link_dealloc() if (st_link->cgroup) // false kfree(st_link) // freed! bpf_link_inc_not_zero() // use-after-free Would it be safer to call bpf_link_inc_not_zero() before modifying st_link fields? That way, if the link is already at refcount zero, the loop would skip it; otherwise, the held reference would prevent the concurrent free. --- AI reviewed your patch. Please fix the bug or email reply why it's not a bug. See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md CI run summary: https://github.com/kernel-patches/bpf/actions/runs/21382620878 AI-authorship-score: low AI-authorship-explanation: The commit is authored by a known kernel maintainer with a consistent, human-like technical writing style and appropriate kernel development patterns. issues-found: 1 issue-severity-score: medium issue-severity-explanation: Use-after-free race condition in cgroup_bpf_release() that can cause kernel crash when link FD is closed concurrently with cgroup deletion.
{ "author": "bot+bpf-ci@kernel.org", "date": "Tue, 27 Jan 2026 03:08:48 +0000 (UTC)", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
[ ... ] The code reads flags and target_fd from opts, but the expected_revision field declared in struct bpf_struct_ops_opts is never read. Looking at similar opts structures (bpf_cgroup_opts, bpf_tcx_opts, bpf_netkit_opts), they all pass expected_revision to bpf_link_create_opts. For example: link_create_opts.cgroup.expected_revision = OPTS_GET(opts, expected_revision, 0); However, bpf_link_create_opts has no struct_ops sub-struct to receive expected_revision, and the kernel's bpf_attr link_create union has no struct_ops.expected_revision field either. Is expected_revision in bpf_struct_ops_opts intentionally unused? If so, should the field be removed from the struct to avoid user confusion? If users set expected_revision expecting it to be honored, the value is silently ignored. [ ... ] ^^^^^^^^^^^^^^^^^ The expected_revision field is declared here, matching the pattern used in bpf_cgroup_opts, bpf_tcx_opts, and bpf_netkit_opts. However, unlike those structures where expected_revision is read and passed through, this field has no corresponding code path in bpf_map__attach_struct_ops_opts(). [ ... ] --- AI reviewed your patch. Please fix the bug or email reply why it's not a bug. See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md CI run summary: https://github.com/kernel-patches/bpf/actions/runs/21382620878 AI-authorship-score: low AI-authorship-explanation: The commit follows kernel conventions and existing code patterns, with an oversight (unused expected_revision field) more typical of human development than AI-generated code. issues-found: 1 issue-severity-score: low issue-severity-explanation: The expected_revision field in bpf_struct_ops_opts is declared but never used, causing silent data loss if users set this field expecting it to be honored.
{ "author": "bot+bpf-ci@kernel.org", "date": "Tue, 27 Jan 2026 03:08:49 +0000 (UTC)", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
On Tue, Jan 27, 2026 at 10:47 AM Roman Gushchin <roman.gushchin@linux.dev> wrote: Since both fdinfo and link_info show the cgroup ID, why not use BPF_F_CGROUP_ID for better alignment? We may need to support other structs in the future. Could we implement a more generic solution, such as: int type; // cgroup, task, etc void *private; // ptr to type-specific data -- Regards Yafang
{ "author": "Yafang Shao <laoar.shao@gmail.com>", "date": "Tue, 27 Jan 2026 13:49:18 +0800", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
On Tue, Jan 27, 2026 at 10:46 AM Roman Gushchin <roman.gushchin@linux.dev> wrote: Feel free to add: Acked-by: Yafang Shao <laoar.shao@gmail.com> -- Regards Yafang
{ "author": "Yafang Shao <laoar.shao@gmail.com>", "date": "Tue, 27 Jan 2026 13:50:31 +0800", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
On Tue, Jan 27, 2026 at 10:53 AM Roman Gushchin <roman.gushchin@linux.dev> wrote: Acked-by: Yafang Shao <laoar.shao@gmail.com> -- Regards Yafang
{ "author": "Yafang Shao <laoar.shao@gmail.com>", "date": "Tue, 27 Jan 2026 13:52:41 +0800", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
On Tue, Jan 27, 2026 at 10:49 AM Roman Gushchin <roman.gushchin@linux.dev> wrote: Acked-by: Yafang Shao <laoar.shao@gmail.com> -- Regards Yafang
{ "author": "Yafang Shao <laoar.shao@gmail.com>", "date": "Tue, 27 Jan 2026 14:06:20 +0800", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
On Tue, Jan 27, 2026 at 10:49 AM Roman Gushchin <roman.gushchin@linux.dev> wrote: Given that mem_cgroup_ino() pairs with mem_cgroup_get_from_ino(), should we also define mem_cgroup_ino() outside CONFIG_SHRINKER_DEBUG? -- Regards Yafang
{ "author": "Yafang Shao <laoar.shao@gmail.com>", "date": "Tue, 27 Jan 2026 14:12:17 +0800", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
On Mon 26-01-26 18:44:03, Roman Gushchin wrote: Are you planning to write any highlevel documentation on how to use the existing infrastructure to implement proper/correct OOM handlers with these generic interfaces? -- Michal Hocko SUSE Labs
{ "author": "Michal Hocko <mhocko@suse.com>", "date": "Tue, 27 Jan 2026 10:02:38 +0100", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
On Mon 26-01-26 18:44:10, Roman Gushchin wrote: I still find this dual reporting a bit confusing. I can see your intention in having a pre-defined "releasers" of the memory to trust BPF handlers more but they do have access to oc->bpf_memory_freed so they can manipulate it. Therefore an additional level of protection is rather weak. It is also not really clear to me how this works while there is OOM victim on the way out. (i.e. tsk_is_oom_victim() -> abort case). This will result in no killing therefore no bpf_memory_freed, right? Handler itself should consider its work done. How exactly is this handled. Also is there any way to handle the oom by increasing the memcg limit? I do not see a callback for that. -- Michal Hocko SUSE Labs
{ "author": "Michal Hocko <mhocko@suse.com>", "date": "Tue, 27 Jan 2026 10:38:42 +0100", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
On 1/26/26 6:44 PM, Roman Gushchin wrote: The filter callback is registered for BPF_PROG_TYPE_STRUCT_OPS. It is checking if a kfunc_id is allowed for other struct_ops progs also, e.g. the bpf-tcp-cc struct_ops progs. The 'return -EACCES' should be the cause of the "calling kernel function XXX is not allowed" error reported by the CI. Take a look at btf_kfunc_is_allowed(). Take a look at bpf_qdisc_kfunc_filter(). I suspect it should be something like this, untested: if (btf_id_set8_contains(&bpf_oom_kfuncs, kfunc_id) && prog->aux->st_ops != &bpf_oom_bpf_ops) return -EACCES; return 0;
{ "author": "Martin KaFai Lau <martin.lau@linux.dev>", "date": "Tue, 27 Jan 2026 12:21:03 -0800", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
Martin KaFai Lau <martin.lau@linux.dev> writes: Oh, I see.. It's a bit surprising that these .filter() functions have non-local effects... Will fix in v4. Thank you, Martin!
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Tue, 27 Jan 2026 20:47:11 +0000", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
Michal Hocko <mhocko@suse.com> writes: What do you expect from such a document, can you, please, elaborate? I'm asking because the main promise of bpf is to provide some sort of a safe playground, so anyone can experiment with writing their bpf implementations (like sched_ext schedulers or bpf oom policies) with minimum risk. Yes, it might work sub-optimally and kill too many tasks, but it won't crash or deadlock the system. So in way I don't want to prescribe the "right way" of writing oom handler, but it totally makes sense to provide an example. As of now the best way to get an example of a bpf handler is to look into the commit "[PATCH bpf-next v3 12/17] bpf: selftests: BPF OOM struct ops test". Another viable idea (also suggested by Andrew Morton) is to develop a production ready memcg-aware OOM killer in BPF, put the source code into the kernel tree and make it loadable by default (obviously under a config option). Myself or one of my colleagues will try to explore it a bit later: the tricky part is this by-default loading because there are no existing precedents. Thanks!
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Tue, 27 Jan 2026 21:01:48 +0000", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
Michal Hocko <mhocko@suse.com> writes: No, they can't. They have only a read-only access. It's a good question, I see your point... Basically we want to give a handler an option to exit with "I promise, some memory will be freed soon" without doing anything destructive. But keeping it save at the same time. I don't have a perfect answer out of my head, maybe some sort of a rate-limiter/counter might work? E.g. a handler can promise this N times before the kernel kicks in? Any ideas? There is no kfunc yet, but it's a good idea (which we accidentally discussed few days ago). I'll implement it. Thank you!
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Tue, 27 Jan 2026 21:12:56 +0000", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
Hi Roman, On Mon, Jan 26, 2026 at 6:50 PM Roman Gushchin <roman.gushchin@linux.dev> wrote: [snip] I was worried about concurrency with cgroup ops until I saw cgroup_bpf_detach_struct_ops() takes cgroup_lock() internally (since you take it inline sometimes below I falsely assumed it wasn't present). In any case, I'm wondering why you need to pass in the cgroup pointer to cgroup_bpf_detach_struct_ops() at all, rather than just the link? We have to be careful at this point. cgroup release could now occur concurrently which would clear link->cgroup. Maybe worth a comment here since this is a bit subtle.
{ "author": "Josh Don <joshdon@google.com>", "date": "Tue, 27 Jan 2026 19:10:35 -0800", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
Thanks Roman! On Mon, Jan 26, 2026 at 6:51 PM Roman Gushchin <roman.gushchin@linux.dev> wrote: If bpf claims to have freed memory but didn't actually do so, that seems like something potentially worth alerting to. Perhaps something to add to the oom header output?
{ "author": "Josh Don <joshdon@google.com>", "date": "Tue, 27 Jan 2026 19:26:57 -0800", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
On Tue 27-01-26 21:12:56, Roman Gushchin wrote: Could you explain this a bit more. This must be some BPF magic because they are getting a standard pointer to oom_control. Yes, something like OOM_BACKOFF, OOM_PROCESSED, OOM_FAILED. Counters usually do not work very well for async operations. In this case there is oom_repaer and/or task exit to finish the oom operation. The former is bound and guaranteed to make a forward progress but there is no time frame to assume when that happens as it depends on how many tasks might be queued (usually a single one but this is not something to rely on because of concurrent ooms in memcgs and also multiple tasks could be killed at the same time). Another complication is that there are multiple levels of OOM to track (global, NUMA, memcg) so any watchdog would have to be aware of that as well. I am really wondering whether we really need to be so careful with handlers. It is not like you would allow any random oom handler to be loaded, right? Would it make sense to start without this protection and converge to something as we see how this evolves? Maybe this will raise the bar for oom handlers as the price for bugs is going to be really high. Cool! -- Michal Hocko SUSE Labs
{ "author": "Michal Hocko <mhocko@suse.com>", "date": "Wed, 28 Jan 2026 09:00:45 +0100", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
On Tue 27-01-26 21:01:48, Roman Gushchin wrote: Sure. Essentially an expected structure of the handler. What is the API it can use, what is has to do and what it must not do. Essentially a single place you can read and get enough information to start developing your oom handler. Examples are really great but having a central place to document available API is much more helpful IMHO. The generally scattered nature of BPF hooks makes it really hard to even know what is available to oom handlers to use. It certainly makes sense to have trusted implementation of a commonly requested oom policy that we couldn't implement due to specific nature that doesn't really apply to many users. And have that in the tree. I am not thrilled about auto-loading because this could be easily done by a simple tooling. -- Michal Hocko SUSE Labs
{ "author": "Michal Hocko <mhocko@suse.com>", "date": "Wed, 28 Jan 2026 09:06:14 +0100", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
Once additional point I forgot to mention previously On Mon 26-01-26 18:44:10, Roman Gushchin wrote: Should this check for is_sysrq_oom and always use the in kernel OOM handling for Sysrq triggered ooms as a failsafe measure? -- Michal Hocko SUSE Labs
{ "author": "Michal Hocko <mhocko@suse.com>", "date": "Wed, 28 Jan 2026 12:19:42 +0100", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
On Mon, Jan 26, 2026 at 06:44:05PM -0800, Roman Gushchin wrote: Assigning 0 to cgrp_id would technically be incorrect, right? Like, cgroup_id() for !CONFIG_CGROUPS default to returning 1, and for CONFIG_CGROUPS the ID allocation is done via the idr_alloc_cyclic() API using a range between 1 and INT_MAX. Perhaps here it serves as a valid sentinel value? Is that the rationale? In general, shouldn't all the cgroup related logic within this source file be protected by a CONFIG_CGROUPS ifdef? For example, both cgroup_get_from_fd() and cgroup_put() lack stubs when building with !CONFIG_CGROUPS. Probably could introduce a simple inline helper for the cgroup_lock()/cgroup_id()/cgroup_unlock() dance that's going on in here and bpf_struct_ops_map_link_fill_link_info() below. As mentioned above a simple inline helper could simply yield the following here: ... info->struct_ops.cgroup_id = bpf_struct_ops_lin_cgroup_id(); ... BPF_F_CGROUP_FD is dependent on the cgroup subsystem, therefore it probably makes some sense to only accept BPF_F_CGROUP_FD when CONFIG_BPF_CGROUP is enabled, otherwise -EOPNOTSUPP? I'd also probably rewrite this such that we do: ... struct cgroup *cgrp = NULL; ... if (attr->link_create.flags & ~BPF_F_CGROUP_FD) { #if IS_ENABLED(CONFIG_CGROUP_BPF) cgrp = cgroup_get_from_fd(attr->link_create.target_fd); if (IS_ERR(cgrp)) return PTR_ERR(cgrp); #else return -EOPNOTSUPP; #endif } ... if (cgrp) { link->cgroup = cgrp; if (cgroup_bpf_attach_struct_ops(cgrp, link)) { cgroup_put(cgrp); goto err_out; } } IMO the code is cleaner and reads better too. If the cgroup is dying, then perhaps -EINVAL would be more appropriate here, no? I'd argue that -EBUSY implies a temporary or transient state. Within cgroup_bpf_attach_struct_ops() and cgroup_bpf_detach_struct_ops() the cgrp pointer appears to be superfluous? Both should probably only operate on link->cgroup instead? A !link->cgroup when calling either should be considered as -EINVAL.
{ "author": "Matt Bobrowski <mattbobrowski@google.com>", "date": "Wed, 28 Jan 2026 11:25:31 +0000", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
On Mon, Jan 26, 2026 at 06:44:04PM -0800, Roman Gushchin wrote: Looks OK to me: Acked-by: Matt Bobrowski <mattbobrowski@google.com>
{ "author": "Matt Bobrowski <mattbobrowski@google.com>", "date": "Wed, 28 Jan 2026 11:28:48 +0000", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
On Wed, Jan 28, 2026 at 12:06 AM Michal Hocko <mhocko@suse.com> wrote: Production ready bpf-oom program(s) must be part of this set. We've seen enough attempts to add bpf st_ops in various parts of the kernel without providing realistic bpf progs that will drive those hooks. It's great to have flexibility and people need to have a freedom to develop their own bpf-oom policy, but the author of the patch set who's advocating for the new bpf hooks must provide their real production progs and share their real use case with the community. It's not cool to hide it. In that sense enabling auto-loading without requiring an end user to install the toolchain and build bpf programs/rust/whatnot is necessary too. bpf-oom can be a self contained part of vmlinux binary. We already have a mechanism to do that. This way the end user doesn't need to be a bpf expert, doesn't need to install clang, build the tools, etc. They can just enable fancy new bpf-oom policy and see whether it's helping their apps or not while knowing nothing about bpf.
{ "author": "Alexei Starovoitov <alexei.starovoitov@gmail.com>", "date": "Wed, 28 Jan 2026 08:59:34 -0800", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
Alexei Starovoitov <alexei.starovoitov@gmail.com> writes: In my case it's not about hiding, it's a chicken and egg problem: the upstream first model contradicts with the idea to include the production results into the patchset. In other words, I want to settle down the interface before shipping something to prod. I guess the compromise here is to initially include a bpf oom policy inspired by what systemd-oomd does and what is proven to work for a broad range of users. Policies suited for large datacenters can be added later, but also their generic usefulness might be limited by the need of proprietary userspace orchestration engines. Fully agree here. Will implement in v4. Thanks!
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Wed, 28 Jan 2026 10:23:34 -0800", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
Michal Hocko <mhocko@suse.com> writes: Yes, but bpf programs (unlike kernel modules) are going through the verifier when being loaded to the kernel. The verifier ensures that programs are safe: e.g. they can't access memory outside of safe areas, they can't can infinite loops, dereference a NULL pointer etc. So even it looks like a normal argument, it's read only. And the program can't even read the memory outside of the structure itself, e.g. a program doing something like (oc + 1)->bpf_memory_freed won't be allowed to load. Yeah, it has to be an atomic counter attached to the bpf oom "instance": a policy attached to a specific cgroup or system-wide. Right, bpf programs require CAP_SYSADMIN to be loaded. I still would prefer to keep it 100% safe, but the more I think about it the more I agree with you: likely limitations of the protection mechanism will create more issues than the value of the protection itself. Thank you!
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Wed, 28 Jan 2026 10:44:46 -0800", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
Josh Don <joshdon@google.com> writes: Hi Josh! Sure, good point. Agree, will add. Thanks!
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Wed, 28 Jan 2026 10:52:05 -0800", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }
lkml
[PATCH bpf-next v3 00/17] mm: BPF OOM
This patchset adds an ability to customize the out of memory handling using bpf. It focuses on two parts: 1) OOM handling policy, 2) PSI-based OOM invocation. The idea to use bpf for customizing the OOM handling is not new, but unlike the previous proposal [1], which augmented the existing task ranking policy, this one tries to be as generic as possible and leverage the full power of the modern bpf. It provides a generic interface which is called before the existing OOM killer code and allows implementing any policy, e.g. picking a victim task or memory cgroup or potentially even releasing memory in other ways, e.g. deleting tmpfs files (the last one might require some additional but relatively simple changes). The past attempt to implement memory-cgroup aware policy [2] showed that there are multiple opinions on what the best policy is. As it's highly workload-dependent and specific to a concrete way of organizing workloads, the structure of the cgroup tree etc, a customizable bpf-based implementation is preferable over an in-kernel implementation with a dozen of sysctls. The second part is related to the fundamental question on when to declare the OOM event. It's a trade-off between the risk of unnecessary OOM kills and associated work losses and the risk of infinite trashing and effective soft lockups. In the last few years several PSI-based userspace solutions were developed (e.g. OOMd [3] or systemd-OOMd [4]). The common idea was to use userspace daemons to implement custom OOM logic as well as rely on PSI monitoring to avoid stalls. In this scenario the userspace daemon was supposed to handle the majority of OOMs, while the in-kernel OOM killer worked as the last resort measure to guarantee that the system would never deadlock on the memory. But this approach creates additional infrastructure churn: userspace OOM daemon is a separate entity which needs to be deployed, updated, monitored. A completely different pipeline needs to be built to monitor both types of OOM events and collect associated logs. A userspace daemon is more restricted in terms on what data is available to it. Implementing a daemon which can work reliably under a heavy memory pressure in the system is also tricky. This patchset includes the code, tests and many ideas from the patchset of JP Kobryn, which implemented bpf kfuncs to provide a faster method to access memcg data [5]. [1]: https://lwn.net/ml/linux-kernel/20230810081319.65668-1-zhouchuyi@bytedance.com/ [2]: https://lore.kernel.org/lkml/20171130152824.1591-1-guro@fb.com/ [3]: https://github.com/facebookincubator/oomd [4]: https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html [5]: https://lkml.org/lkml/2025/10/15/1554 --- v3: 1) Replaced bpf_psi struct ops with a tracepoint in psi_avgs_work() (Tejun H.) 2) Updated bpf_oom struct ops: - removed bpf_oom_ctx, passing bpf_struct_ops_link instead (by Alexei S.) - removed handle_cgroup_offline callback. 3) Updated kfuncs: - bpf_out_of_memory() dropped constraint_text argument (by Michal H.) - bpf_oom_kill_process() added check for OOM_SCORE_ADJ_MIN. 4) Libbpf: updated bpf_map__attach_struct_ops_opts to use target_fd. (by Alexei S.) v2: 1) A single bpf_oom can be attached system-wide and a single bpf_oom per memcg. (by Alexei Starovoitov) 2) Initial support for attaching struct ops to cgroups (Martin KaFai Lau, Andrii Nakryiko and others) 3) bpf memcontrol kfuncs enhancements and tests (co-developed by JP Kobryn) 4) Many mall-ish fixes and cleanups (suggested by Andrew Morton, Suren Baghdasaryan, Andrii Nakryiko and Kumar Kartikeya Dwivedi) 5) bpf_out_of_memory() is taking u64 flags instead of bool wait_on_oom_lock (suggested by Kumar Kartikeya Dwivedi) 6) bpf_get_mem_cgroup() got KF_RCU flag (suggested by Kumar Kartikeya Dwivedi) 7) cgroup online and offline callbacks for bpf_psi, cgroup offline for bpf_oom v1: 1) Both OOM and PSI parts are now implemented using bpf struct ops, providing a path the future extensions (suggested by Kumar Kartikeya Dwivedi, Song Liu and Matt Bobrowski) 2) It's possible to create PSI triggers from BPF, no need for an additional userspace agent. (suggested by Suren Baghdasaryan) Also there is now a callback for the cgroup release event. 3) Added an ability to block on oom_lock instead of bailing out (suggested by Michal Hocko) 4) Added bpf_task_is_oom_victim (suggested by Michal Hocko) 5) PSI callbacks are scheduled using a separate workqueue (suggested by Suren Baghdasaryan) RFC: https://lwn.net/ml/all/20250428033617.3797686-1-roman.gushchin@linux.dev/ JP Kobryn (1): bpf: selftests: add config for psi Roman Gushchin (16): bpf: move bpf_struct_ops_link into bpf.h bpf: allow attaching struct_ops to cgroups libbpf: fix return value on memory allocation failure libbpf: introduce bpf_map__attach_struct_ops_opts() bpf: mark struct oom_control's memcg field as TRUSTED_OR_NULL mm: define mem_cgroup_get_from_ino() outside of CONFIG_SHRINKER_DEBUG mm: introduce BPF OOM struct ops mm: introduce bpf_oom_kill_process() bpf kfunc mm: introduce bpf_out_of_memory() BPF kfunc mm: introduce bpf_task_is_oom_victim() kfunc bpf: selftests: introduce read_cgroup_file() helper bpf: selftests: BPF OOM struct ops test sched: psi: add a trace point to psi_avgs_work() sched: psi: add cgroup_id field to psi_group structure bpf: allow calling bpf_out_of_memory() from a PSI tracepoint bpf: selftests: PSI struct ops test MAINTAINERS | 2 + include/linux/bpf-cgroup-defs.h | 6 + include/linux/bpf-cgroup.h | 16 ++ include/linux/bpf.h | 10 + include/linux/bpf_oom.h | 46 ++++ include/linux/memcontrol.h | 4 +- include/linux/oom.h | 13 + include/linux/psi_types.h | 4 + include/trace/events/psi.h | 27 ++ include/uapi/linux/bpf.h | 3 + kernel/bpf/bpf_struct_ops.c | 77 +++++- kernel/bpf/cgroup.c | 46 ++++ kernel/bpf/verifier.c | 5 + kernel/sched/psi.c | 7 + mm/Makefile | 2 +- mm/bpf_oom.c | 192 +++++++++++++ mm/memcontrol.c | 2 - mm/oom_kill.c | 202 ++++++++++++++ tools/include/uapi/linux/bpf.h | 1 + tools/lib/bpf/libbpf.c | 22 +- tools/lib/bpf/libbpf.h | 14 + tools/lib/bpf/libbpf.map | 1 + tools/testing/selftests/bpf/cgroup_helpers.c | 45 +++ tools/testing/selftests/bpf/cgroup_helpers.h | 3 + tools/testing/selftests/bpf/config | 1 + .../selftests/bpf/prog_tests/test_oom.c | 256 ++++++++++++++++++ .../selftests/bpf/prog_tests/test_psi.c | 225 +++++++++++++++ tools/testing/selftests/bpf/progs/test_oom.c | 111 ++++++++ tools/testing/selftests/bpf/progs/test_psi.c | 90 ++++++ 29 files changed, 1412 insertions(+), 21 deletions(-) create mode 100644 include/linux/bpf_oom.h create mode 100644 include/trace/events/psi.h create mode 100644 mm/bpf_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_oom.c create mode 100644 tools/testing/selftests/bpf/prog_tests/test_psi.c create mode 100644 tools/testing/selftests/bpf/progs/test_oom.c create mode 100644 tools/testing/selftests/bpf/progs/test_psi.c -- 2.52.0
Michal Hocko <mhocko@suse.com> writes: Yep, good point. Will implement in v4. Thanks!
{ "author": "Roman Gushchin <roman.gushchin@linux.dev>", "date": "Wed, 28 Jan 2026 10:53:20 -0800", "thread_id": "CAADnVQL3+huSAwoYRexoSDaLRK+nEsY6UUnVSmhk_sGYUYsO7Q@mail.gmail.com.mbox.gz" }