lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
From: Mukesh Rathor <mrathor@linux.microsoft.com>

Implement passthru of PCI devices to unprivileged virtual machines (VMs) when Linux is running as a privileged VM on the Microsoft Hyper-V hypervisor. This support is made to fit within the workings of the VFIO framework, and any VMM needing to use it must use the VFIO subsystem. Both full device passthru and SR-IOV based VFs are supported. There are 3 cases where Linux can run as a privileged VM (aka MSHV): baremetal root (meaning Hyper-V+Linux), L1VH, and nested.

At a high level, the hypervisor supports traditional mapped iommu domains that use explicit map and unmap hypercalls for mapping and unmapping guest RAM into the iommu subsystem. Hyper-V also has a concept of direct attach devices, whereby the iommu subsystem simply uses the guest HW page table (ept/npt/..). This series adds support for both, and both are made to work in the VFIO type1 subsystem. While this Part I focuses on memory mappings, the upcoming Part II will focus on irq bypass along with some minor irq remapping updates.

This patch series was tested using Cloud Hypervisor version 48. Qemu support for MSHV is in the works, and that will be extended to include PCI passthru and SR-IOV support in the near future.
Based on: 8f0b4cce4481 (origin/hyperv-next)

Thanks,
-Mukesh

Mukesh Rathor (15):
  iommu/hyperv: rename hyperv-iommu.c to hyperv-irq.c
  x86/hyperv: cosmetic changes in irqdomain.c for readability
  x86/hyperv: add insufficient memory support in irqdomain.c
  mshv: Provide a way to get partition id if running in a VMM process
  mshv: Declarations and definitions for VFIO-MSHV bridge device
  mshv: Implement mshv bridge device for VFIO
  mshv: Add ioctl support for MSHV-VFIO bridge device
  PCI: hv: rename hv_compose_msi_msg to hv_vmbus_compose_msi_msg
  mshv: Import data structs around device domains and irq remapping
  PCI: hv: Build device id for a VMBus device
  x86/hyperv: Build logical device ids for PCI passthru hcalls
  x86/hyperv: Implement hyperv virtual iommu
  x86/hyperv: Basic interrupt support for direct attached devices
  mshv: Remove mapping of mmio space during map user ioctl
  mshv: Populate mmio mappings for PCI passthru

 MAINTAINERS                         |    1 +
 arch/arm64/include/asm/mshyperv.h   |   15 +
 arch/x86/hyperv/irqdomain.c         |  314 ++++++---
 arch/x86/include/asm/mshyperv.h     |   21 +
 arch/x86/kernel/pci-dma.c           |    2 +
 drivers/hv/Makefile                 |    3 +-
 drivers/hv/mshv_root.h              |   24 +
 drivers/hv/mshv_root_main.c         |  296 +++++++-
 drivers/hv/mshv_vfio.c              |  210 ++++++
 drivers/iommu/Kconfig               |    1 +
 drivers/iommu/Makefile              |    2 +-
 drivers/iommu/hyperv-iommu.c        | 1004 +++++++++++++++++++++------
 drivers/iommu/hyperv-irq.c          |  330 +++++++++
 drivers/pci/controller/pci-hyperv.c |  207 ++++--
 include/asm-generic/mshyperv.h      |    1 +
 include/hyperv/hvgdk_mini.h         |   11 +
 include/hyperv/hvhdk_mini.h         |  112 +++
 include/linux/hyperv.h              |    6 +
 include/uapi/linux/mshv.h           |   31 +
 19 files changed, 2182 insertions(+), 409 deletions(-)
 create mode 100644 drivers/hv/mshv_vfio.c
 create mode 100644 drivers/iommu/hyperv-irq.c

-- 
2.51.2.vfs.0.1
On Fri, Jan 30, 2026 at 02:17:24PM -0800, Mukesh R wrote: So, are you saying that the hypervisor does not use these pages and only tracks them? That would make things easier. However, if we later try to map a GPA that is already mapped, will the hypervisor return an error? Thanks, Stanislav
{ "author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>", "date": "Mon, 2 Feb 2026 08:30:49 -0800", "thread_id": "aYDO1S3DdUSHNkEY@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH v5] RISC-V: KVM: Validate SBI STA shmem alignment in kvm_sbi_ext_sta_set_reg()
The RISC-V SBI Steal-Time Accounting (STA) extension requires the shared memory physical address to be 64-byte aligned, and the shared memory size to be at least 64 bytes.

KVM exposes the SBI STA shared memory configuration to userspace via KVM_SET_ONE_REG. However, the current implementation of kvm_sbi_ext_sta_set_reg() does not validate the alignment of the configured shared memory address. As a result, userspace can install a misaligned shared memory address that violates the SBI specification.

Such an invalid configuration may later reach runtime code paths that assume a valid and properly aligned shared memory region. In particular, KVM_RUN can trigger the following WARN_ON in kvm_riscv_vcpu_record_steal_time():

  WARNING: arch/riscv/kvm/vcpu_sbi_sta.c:49 at kvm_riscv_vcpu_record_steal_time

WARN_ON paths are not expected to be reachable during normal runtime execution, and may result in a kernel panic when panic_on_warn is enabled.

Fix this by validating the shared memory alignment at the KVM_SET_ONE_REG boundary and rejecting misaligned configurations with -EINVAL. The validation is performed on a temporary computed address and only committed to vcpu->arch.sta.shmem once it is known to be valid, similar to the existing logic in kvm_sbi_sta_steal_time_set_shmem() and kvm_sbi_ext_sta_handler().

With this change, invalid userspace state is rejected early and cannot reach runtime code paths that rely on the SBI specification invariants.

A reproducer triggering the WARN_ON and the complete kernel log are available at:
https://github.com/j1akai/temp/tree/main/20260124

Fixes: f61ce890b1f074 ("RISC-V: KVM: Add support for SBI STA registers")
Signed-off-by: Jiakai Xu <xujiakai2025@iscas.ac.cn>
Signed-off-by: Jiakai Xu <jiakaiPeanut@gmail.com>
---
V4 -> V5: Added parentheses to function name in subject.
V3 -> V4:
  Declared new_shmem at the top of kvm_sbi_ext_sta_set_reg().
  Initialized new_shmem to 0 instead of vcpu->arch.sta.shmem.
  Added blank lines per review feedback.
V2 -> V3: Added parentheses to function name in subject.
V1 -> V2: Added Fixes tag.
---
 arch/riscv/kvm/vcpu_sbi_sta.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/arch/riscv/kvm/vcpu_sbi_sta.c b/arch/riscv/kvm/vcpu_sbi_sta.c
index afa0545c3bcfc..bb13aa8eab7ee 100644
--- a/arch/riscv/kvm/vcpu_sbi_sta.c
+++ b/arch/riscv/kvm/vcpu_sbi_sta.c
@@ -181,6 +181,7 @@ static int kvm_sbi_ext_sta_set_reg(struct kvm_vcpu *vcpu, unsigned long reg_num,
 				   unsigned long reg_size, const void *reg_val)
 {
 	unsigned long value;
+	gpa_t new_shmem = 0;
 
 	if (reg_size != sizeof(unsigned long))
 		return -EINVAL;
@@ -191,18 +192,18 @@ static int kvm_sbi_ext_sta_set_reg(struct kvm_vcpu *vcpu, unsigned long reg_num,
 		if (IS_ENABLED(CONFIG_32BIT)) {
 			gpa_t hi = upper_32_bits(vcpu->arch.sta.shmem);
 
-			vcpu->arch.sta.shmem = value;
-			vcpu->arch.sta.shmem |= hi << 32;
+			new_shmem = value;
+			new_shmem |= hi << 32;
 		} else {
-			vcpu->arch.sta.shmem = value;
+			new_shmem = value;
 		}
 		break;
 	case KVM_REG_RISCV_SBI_STA_REG(shmem_hi):
 		if (IS_ENABLED(CONFIG_32BIT)) {
 			gpa_t lo = lower_32_bits(vcpu->arch.sta.shmem);
 
-			vcpu->arch.sta.shmem = ((gpa_t)value << 32);
-			vcpu->arch.sta.shmem |= lo;
+			new_shmem = ((gpa_t)value << 32);
+			new_shmem |= lo;
 		} else if (value != 0) {
 			return -EINVAL;
 		}
@@ -211,6 +212,11 @@ static int kvm_sbi_ext_sta_set_reg(struct kvm_vcpu *vcpu, unsigned long reg_num,
 		return -ENOENT;
 	}
 
+	if (new_shmem && !IS_ALIGNED(new_shmem, 64))
+		return -EINVAL;
+
+	vcpu->arch.sta.shmem = new_shmem;
+
 	return 0;
 }
-- 
2.34.1
On Mon, Feb 02, 2026 at 12:38:57AM +0000, Jiakai Xu wrote: Any reason not to add these tests to tools/testing/selftests/kvm/steal_time.c in the linux repo? Sorry I missed this on my first review, but new_shmem should be initialized to INVALID_GPA, since zero is a valid gpa. And then here check 'new_shmem != INVALID_GPA' since we want to allow the user to set the "disabled shared memory" value (all-ones). Indeed our testing should confirm that the value is either all-ones (disabled) or a 64-byte aligned address. Thanks, drew
{ "author": "Andrew Jones <andrew.jones@oss.qualcomm.com>", "date": "Mon, 2 Feb 2026 10:25:08 -0600", "thread_id": "h5ywmsqp2eysyslvh7zmuiw3mzthkiilgqv4gvjvpl6nejxs7m@ahjmnsz2c2x3.mbox.gz" }
lkml
[PATCH v5] RISC-V: KVM: Validate SBI STA shmem alignment in kvm_sbi_ext_sta_set_reg()
On Mon, Feb 02, 2026 at 12:38:57AM +0000, Jiakai Xu wrote: ... A procedure comment is that you don't need to send a new revision for each change as comments come in or as you think of them yourself. You should leave a revision on the list long enough to collect comments from multiple reviewers (including yourself) and then send a new revision with all the changes at once. A couple of days between revisions is a minimum. For more than a single patch (i.e. some longer series) a week would be a minimum. Thanks, drew
{ "author": "Andrew Jones <andrew.jones@oss.qualcomm.com>", "date": "Mon, 2 Feb 2026 10:31:29 -0600", "thread_id": "h5ywmsqp2eysyslvh7zmuiw3mzthkiilgqv4gvjvpl6nejxs7m@ahjmnsz2c2x3.mbox.gz" }
lkml
[GIT PULL] lsm/lsm-pr-20260202
Hi Linus,

A small LSM patch to address a regression found in the v6.19-rcX releases where the /proc/sys/vm/mmap_min_addr tunable disappeared when CONFIG_SECURITY was not selected. Long term we plan to work with the MM folks to get the core parts of this moved over to the MM subsystem, but in the meantime we need to fix this regression prior to the v6.19 release.

Paul

--
The following changes since commit 63804fed149a6750ffd28610c5c1c98cce6bd377:

  Linux 6.19-rc7 (2026-01-25 14:11:24 -0800)

are available in the Git repository at:

  https://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/lsm.git tags/lsm-pr-20260202

for you to fetch changes up to bdde21d3e77da55121885fd2ef42bc6a15ac2f0c:

  lsm: preserve /proc/sys/vm/mmap_min_addr when !CONFIG_SECURITY (2026-01-29 13:56:53 -0500)

----------------------------------------------------------------
lsm/stable-6.19 PR 20260202
----------------------------------------------------------------
Paul Moore (1):
      lsm: preserve /proc/sys/vm/mmap_min_addr when !CONFIG_SECURITY

 security/lsm.h      | 9 ---------
 security/lsm_init.c | 7 +------
 security/min_addr.c | 5 ++---
 3 files changed, 3 insertions(+), 18 deletions(-)

-- 
paul-moore.com
On Mon, Feb 2, 2026 at 12:37 PM Paul Moore <paul@paul-moore.com> wrote: I forgot to add, you'll notice a forced push on that branch, but that was simply to add some additional reviewed-by/tested-by tags this morning that I thought were worthwhile given we are currently at -rc8. -- paul-moore.com
{ "author": "Paul Moore <paul@paul-moore.com>", "date": "Mon, 2 Feb 2026 12:39:08 -0500", "thread_id": "CAHC9VhR80ZipmG8PGTdfvY-GpUsvX_UzND-XV6s844hbmO3BTw@mail.gmail.com.mbox.gz" }
lkml
[PATCH v16 0/7] x509, pkcs7, crypto: Add ML-DSA signing
Hi Lukas, Ignat,

[Note this is based on Eric Biggers' libcrypto-next branch].

These patches add ML-DSA module signing support:

 (1) Add a crypto_sig interface for ML-DSA, verification only.

 (2) Generate a SHA256 hash of the X.509 TBSCertificate and check that in the blacklist. Direct-sign ML-DSA doesn't generate an easily accessible hash. Note that this changes behaviour as we no longer use whatever hash is specified in the certificate for this.

 (3) Rename the public_key_signature struct's "digest" and "digest_size" members to "m" and "m_size" to reflect that it's not necessarily a digest, but it is an input to the public key algorithm.

 (4) Modify PKCS#7 support to allow kernel module signatures to carry authenticatedAttributes, as OpenSSL refuses to let them be opted out of for ML-DSA (CMS_NOATTR). This adds an extra digest calculation to the process. Modify PKCS#7 to pass the authenticatedAttributes directly to the ML-DSA algorithm rather than passing over a digest as is done with RSA, as ML-DSA wants to do its own hashing and will add other stuff into the hash. We could use hashML-DSA or an external mu instead, but they aren't standardised for CMS yet.

 (5) Add support to the PKCS#7 and X.509 parsers for ML-DSA.

 (6) Modify sign-file to handle OpenSSL not permitting CMS_NOATTR with ML-DSA and add ML-DSA to the choice of algorithms with which to sign modules. Note that this might need some more 'select' lines in the Kconfig to select the lib stuff as well.

 (7) Add a config option to allow authenticatedAttributes to be used with ML-DSA for module signing. Ordinarily, authenticatedAttributes are not permitted for this purpose; however, direct signing with ML-DSA will not be supported by OpenSSL until v4 is released.
The patches can also be found here:

	https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=keys-pqc

David

Changes
=======
ver #16)
 - Make the selection of ML-DSA for module signing when configuring contingent on openssl saying it supports ML-DSA (fix from Arnd Bergmann).
 - Make ML-DSA-related bits of sign-file contingent on openssl >= 3.0.0.

ver #15)
 - Undo a removed blank line to simplify the X.509 patch.
 - Split the rename of ->digest to ->m into its own patch.
 - In pkcs7_digest(), always copy the signedAttrs and modify rather than passing the replacement tag byte in a separate shash update call to the rest of the data. That way the ->m buffer is very likely to be optimally aligned for the crypto.
 - Only allow authenticatedAttributes with ML-DSA for module signing and only if permission is given in the kernel config.

ver #14)
 - public_key:
   - Rename public_key::digest to public_key::m.
 - X.509:
   - Independently calculate the SHA256 hash for the blacklist check as an ML-DSA-signed X.509 cert doesn't generate a digest we can use.
   - Point public_key::m at the TBS data for ML-DSA.
 - PKCS#7:
   - Allocate a big enough digest buffer rather than reallocating in order to store the authattrs/signedattrs instead.
   - Merge the two patches that add direct signing support.
 - ML-DSA:
   - Use bool instead of u8.
   - Remove references to SHAKE in Kconfig and mention OpenSSL requirements there.
   - Limit ML-DSA with an intermediate hash (e.g. signedAttrs) to using SHA512 only.
   - Don't select CRYPTO_LIB_SHA3 for CRYPTO_MLDSA.
 - RSASSA-PSS:
   - Allow use with SHA256 and SHA384.
   - Fix calculation of emBits to be number of bits in the RSA modulus 'n'.
   - Use strncmp() not memcmp() to avoid reading beyond end of string.
   - Use correct destructor in rsassa_params_parse().
   - Drop this algo for the moment.
 - Drop the pefile_context::digest_free for now - it's only set to true and is unrelated to public_key::digest_free.

ver #13)
 - Allow a zero-length salt in RSASSA-PSS.
 - Don't reject ECDSA/ECRDSA with SHA256 and SHA384, otherwise the FIPS selftest panics when used.
 - Add a FIPS test for RSASSA-PSS (from NIST's SigVerPSS_186-3.rsp).
 - Add a FIPS test for ML-DSA (from NIST's FIPS204 JSON set).

ver #12)
 - Rebased on Eric's libcrypto-next branch.
 - Delete references to Dilithium (ML-DSA derived from this).
 - Made sign-file supply CMS_NOATTR for ML-DSA if openssl >= v4.
 - Made it possible to do ML-DSA over the data without signedAttrs.
 - Made RSASSA-PSS info parser use strsep() and match_token().
 - Cleaned the RSASSA-PSS param parsing.
 - Added limitation on what hashes can be used with what algos.
 - Moved __free()-marked variables to the point of setting.

ver #11)
 - Rebased on Eric's libcrypto-next branch.
 - Added RSASSA-PSS support patches.

ver #10)
 - Replaced the Leancrypto ML-DSA implementation with Eric's.
 - Fixed Eric's implementation to have MODULE_* info.
 - Added a patch to drive Eric's ML-DSA implementation from crypto_sig.
 - Removed SHAKE256 from the list of available module hash algorithms.
 - Changed some more ML_DSA to MLDSA in config symbols.

ver #9)
 - ML-DSA changes:
   - Separate output into four modules (1 common, 3 strength-specific).
   - Solves Kconfig issue with needing to select at least one strength.
   - Separate the strength-specific crypto-lib APIs.
   - This is now generated by preprocessor-templating.
   - Remove the multiplexor code.
   - Multiplex the crypto-lib APIs by C type.
   - Fix the PKCS#7/X.509 code to have the correct algo names.

ver #8)
 - Moved the ML-DSA code to lib/crypto/mldsa/.
 - Renamed some bits from ml-dsa to mldsa.
 - Created a simplified API and placed that in include/crypto/mldsa.h.
 - Made the testing code use the simplified API.
 - Fixed a warning about implicitly casting between uint16_t and __le16.

ver #7)
 - Rebased on Eric's tree as that now contains all the necessary SHA-3 infrastructure and drop the SHA-3 patches from here.
 - Added a minimal patch to provide shake256 support for crypto_sig.
 - Got rid of the memory allocation wrappers.
 - Removed the ML-DSA keypair generation code and the signing code, leaving only the signature verification code.
 - Removed the secret key handling code.
 - Removed the secret keys from the kunit tests and the signing testing.
 - Removed some unused bits from the ML-DSA code.
 - Downgraded the kdoc comments to ordinary comments, but keep the markup for easier comparison to Leancrypto.

ver #6)
 - Added a patch to make the jitterentropy RNG use lib/sha3.
 - Added back the crypto/sha3_generic changes.
 - Added ML-DSA implementation (still needs more cleanup).
 - Added kunit test for ML-DSA.
 - Modified PKCS#7 to accommodate ML-DSA.
 - Modified PKCS#7 and X.509 to allow ML-DSA to be specified and used.
 - Modified sign-file to not use CMS_NOATTR with ML-DSA.
 - Allowed SHA3 and SHAKE* algorithms for module signing default.
 - Allowed ML-DSA-{44,65,87} to be selected as the module signing default.

ver #5)
 - Fix gen-hash-testvecs.py to correctly handle algo names that contain a dash.
 - Fix gen-hash-testvecs.py to not generate HMAC for SHA3-* or SHAKE* as these don't currently have HMAC variants implemented.
 - Fix algo names to be correct.
 - Fix kunit module description as it now tests all SHA3 variants.

ver #4)
 - Fix a couple of arm64 build problems.
 - Doc fixes:
   - Fix the description of the algorithm to be closer to the NIST spec's terminology.
   - Don't talk of finalising the context for XOFs.
   - Don't say "Return: None".
   - Declare the "Context" to be "Any context" and make no mention of the fact that it might use the FPU.
   - Change "initialise" to "initialize".
   - Don't warn that the context is relatively large for stack use.
 - Use size_t for size parameters/variables.
 - Make the module_exit unconditional.
 - Dropped the crypto/ dir-affecting patches for the moment.

ver #3)
 - Renamed conflicting arm64 functions.
 - Made a separate wrapper API for each algorithm in the family.
 - Removed sha3_init(), sha3_reinit() and sha3_final().
 - Removed sha3_ctx::digest_size.
 - Renamed sha3_ctx::partial to sha3_ctx::absorb_offset.
 - Refer to the output of SHAKE* as "output" not "digest".
 - Moved the Iota transform into the one-round function.
 - Made sha3_update() warn if called after sha3_squeeze().
 - Simplified the module-load test to not do update after squeeze.
 - Added Return: and Context: kdoc statements and expanded the kdoc headers.
 - Added an API description document.
 - Overhauled the kunit tests.
   - Only have one kunit test.
   - Only call the general hash tester on one algo.
   - Add separate simple cursory checks for the other algos.
   - Add resqueezing tests.
   - Add some NIST example tests.
 - Changed crypto/sha3_generic to use this.
 - Added SHAKE128/256 to crypto/sha3_generic and crypto/testmgr.
 - Folded struct sha3_state into struct sha3_ctx.

ver #2)
 - Simplify the endianness handling.
 - Rename sha3_final() to sha3_squeeze() and don't clear the context at the end as it's permitted to continue calling sha3_final() to extract continuations of the digest (needed by ML-DSA).
 - Don't reapply the end marker to the hash state in continuation sha3_squeeze() unless sha3_update() gets called again (needed by ML-DSA).
 - Give sha3_squeeze() the amount of digest to produce as a parameter rather than using ctx->digest_size and don't return the amount digested.
 - Reimplement sha3_final() as a wrapper around sha3_squeeze() that extracts ctx->digest_size amount of digest and then zeroes out the context. The latter is necessary to avoid upsetting hash-test-template.h.
 - Provide a sha3_reinit() function to clear the state, but to leave the parameters that indicate the hash properties unaffected, allowing for reuse.
 - Provide a sha3_set_digestsize() function to change the size of the digest to be extracted by sha3_final(). sha3_squeeze() takes a parameter for this instead.
 - Don't pass the digest size as a parameter to shake128/256_init() but rather default to 128/256 bits as per the function name.
 - Provide a sha3_clear() function to zero out the context.

David Howells (7):
  crypto: Add ML-DSA crypto_sig support
  x509: Separately calculate sha256 for blacklist
  pkcs7, x509: Rename ->digest to ->m
  pkcs7: Allow the signing algo to do whatever digestion it wants itself
  pkcs7, x509: Add ML-DSA support
  modsign: Enable ML-DSA module signing
  pkcs7: Allow authenticatedAttributes for ML-DSA

 Documentation/admin-guide/module-signing.rst |  16 +-
 certs/Kconfig                                |  40 ++++
 certs/Makefile                               |   3 +
 crypto/Kconfig                               |   9 +
 crypto/Makefile                              |   2 +
 crypto/asymmetric_keys/Kconfig               |  11 +
 crypto/asymmetric_keys/asymmetric_type.c     |   4 +-
 crypto/asymmetric_keys/pkcs7_parser.c        |  36 +++-
 crypto/asymmetric_keys/pkcs7_parser.h        |   3 +
 crypto/asymmetric_keys/pkcs7_verify.c        |  78 ++++---
 crypto/asymmetric_keys/public_key.c          |  13 +-
 crypto/asymmetric_keys/signature.c           |   3 +-
 crypto/asymmetric_keys/x509_cert_parser.c    |  27 ++-
 crypto/asymmetric_keys/x509_parser.h         |   2 +
 crypto/asymmetric_keys/x509_public_key.c     |  42 ++--
 crypto/mldsa.c                               | 201 +++++++++++++++++++
 include/crypto/public_key.h                  |   6 +-
 include/linux/oid_registry.h                 |   5 +
 scripts/sign-file.c                          |  39 +++-
 security/integrity/digsig_asymmetric.c       |   4 +-
 20 files changed, 473 insertions(+), 71 deletions(-)
 create mode 100644 crypto/mldsa.c
Add verify-only public key crypto support for ML-DSA so that the X.509/PKCS#7 signature verification code, as used by module signing, amongst other things, can make use of it through the common crypto_sig API.

Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
cc: Eric Biggers <ebiggers@kernel.org>
cc: Lukas Wunner <lukas@wunner.de>
cc: Ignat Korchagin <ignat@cloudflare.com>
cc: Stephan Mueller <smueller@chronox.de>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: keyrings@vger.kernel.org
cc: linux-crypto@vger.kernel.org
---
 crypto/Kconfig  |   9 +++
 crypto/Makefile |   2 +
 crypto/mldsa.c  | 201 ++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 212 insertions(+)
 create mode 100644 crypto/mldsa.c

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 12a87f7cf150..a210575fa5e0 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -344,6 +344,15 @@ config CRYPTO_ECRDSA
 	  One of the Russian cryptographic standard algorithms (called GOST
 	  algorithms). Only signature verification is implemented.
 
+config CRYPTO_MLDSA
+	tristate "ML-DSA (Module-Lattice-Based Digital Signature Algorithm)"
+	select CRYPTO_SIG
+	select CRYPTO_LIB_MLDSA
+	help
+	  ML-DSA (Module-Lattice-Based Digital Signature Algorithm) (FIPS-204).
+
+	  Only signature verification is implemented.
+
 endmenu
 
 menu "Block ciphers"
diff --git a/crypto/Makefile b/crypto/Makefile
index 23d3db7be425..267d5403045b 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -60,6 +60,8 @@ ecdsa_generic-y += ecdsa-p1363.o
 ecdsa_generic-y += ecdsasignature.asn1.o
 obj-$(CONFIG_CRYPTO_ECDSA) += ecdsa_generic.o
 
+obj-$(CONFIG_CRYPTO_MLDSA) += mldsa.o
+
 crypto_acompress-y := acompress.o
 crypto_acompress-y += scompress.o
 obj-$(CONFIG_CRYPTO_ACOMP2) += crypto_acompress.o
diff --git a/crypto/mldsa.c b/crypto/mldsa.c
new file mode 100644
index 000000000000..d8de082cc67a
--- /dev/null
+++ b/crypto/mldsa.c
@@ -0,0 +1,201 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * crypto_sig wrapper around ML-DSA library.
+ */
+#include <linux/init.h>
+#include <linux/module.h>
+#include <crypto/internal/sig.h>
+#include <crypto/mldsa.h>
+
+struct crypto_mldsa_ctx {
+	u8 pk[MAX(MAX(MLDSA44_PUBLIC_KEY_SIZE,
+		      MLDSA65_PUBLIC_KEY_SIZE),
+		  MLDSA87_PUBLIC_KEY_SIZE)];
+	unsigned int pk_len;
+	enum mldsa_alg strength;
+	bool key_set;
+};
+
+static int crypto_mldsa_sign(struct crypto_sig *tfm,
+			     const void *msg, unsigned int msg_len,
+			     void *sig, unsigned int sig_len)
+{
+	return -EOPNOTSUPP;
+}
+
+static int crypto_mldsa_verify(struct crypto_sig *tfm,
+			       const void *sig, unsigned int sig_len,
+			       const void *msg, unsigned int msg_len)
+{
+	const struct crypto_mldsa_ctx *ctx = crypto_sig_ctx(tfm);
+
+	if (unlikely(!ctx->key_set))
+		return -EINVAL;
+
+	return mldsa_verify(ctx->strength, sig, sig_len, msg, msg_len,
+			    ctx->pk, ctx->pk_len);
+}
+
+static unsigned int crypto_mldsa_key_size(struct crypto_sig *tfm)
+{
+	struct crypto_mldsa_ctx *ctx = crypto_sig_ctx(tfm);
+
+	switch (ctx->strength) {
+	case MLDSA44:
+		return MLDSA44_PUBLIC_KEY_SIZE;
+	case MLDSA65:
+		return MLDSA65_PUBLIC_KEY_SIZE;
+	case MLDSA87:
+		return MLDSA87_PUBLIC_KEY_SIZE;
+	default:
+		WARN_ON_ONCE(1);
+		return 0;
+	}
+}
+
+static int crypto_mldsa_set_pub_key(struct crypto_sig *tfm,
+				    const void *key, unsigned int keylen)
+{
+	struct crypto_mldsa_ctx *ctx = crypto_sig_ctx(tfm);
+	unsigned int expected_len = crypto_mldsa_key_size(tfm);
+
+	if (keylen != expected_len)
+		return -EINVAL;
+
+	ctx->pk_len = keylen;
+	memcpy(ctx->pk, key, keylen);
+	ctx->key_set = true;
+	return 0;
+}
+
+static int crypto_mldsa_set_priv_key(struct crypto_sig *tfm,
+				     const void *key, unsigned int keylen)
+{
+	return -EOPNOTSUPP;
+}
+
+static unsigned int crypto_mldsa_max_size(struct crypto_sig *tfm)
+{
+	struct crypto_mldsa_ctx *ctx = crypto_sig_ctx(tfm);
+
+	switch (ctx->strength) {
+	case MLDSA44:
+		return MLDSA44_SIGNATURE_SIZE;
+	case MLDSA65:
+		return MLDSA65_SIGNATURE_SIZE;
+	case MLDSA87:
+		return MLDSA87_SIGNATURE_SIZE;
+	default:
+		WARN_ON_ONCE(1);
+		return 0;
+	}
+}
+
+static int crypto_mldsa44_alg_init(struct crypto_sig *tfm)
+{
+	struct crypto_mldsa_ctx *ctx = crypto_sig_ctx(tfm);
+
+	ctx->strength = MLDSA44;
+	ctx->key_set = false;
+	return 0;
+}
+
+static int crypto_mldsa65_alg_init(struct crypto_sig *tfm)
+{
+	struct crypto_mldsa_ctx *ctx = crypto_sig_ctx(tfm);
+
+	ctx->strength = MLDSA65;
+	ctx->key_set = false;
+	return 0;
+}
+
+static int crypto_mldsa87_alg_init(struct crypto_sig *tfm)
+{
+	struct crypto_mldsa_ctx *ctx = crypto_sig_ctx(tfm);
+
+	ctx->strength = MLDSA87;
+	ctx->key_set = false;
+	return 0;
+}
+
+static void crypto_mldsa_alg_exit(struct crypto_sig *tfm)
+{
+}
+
+static struct sig_alg crypto_mldsa_algs[] = {
+	{
+		.sign = crypto_mldsa_sign,
+		.verify = crypto_mldsa_verify,
+		.set_pub_key = crypto_mldsa_set_pub_key,
+		.set_priv_key = crypto_mldsa_set_priv_key,
+		.key_size = crypto_mldsa_key_size,
+		.max_size = crypto_mldsa_max_size,
+		.init = crypto_mldsa44_alg_init,
+		.exit = crypto_mldsa_alg_exit,
+		.base.cra_name = "mldsa44",
+		.base.cra_driver_name = "mldsa44-lib",
+		.base.cra_ctxsize = sizeof(struct crypto_mldsa_ctx),
+		.base.cra_module = THIS_MODULE,
+		.base.cra_priority = 5000,
+	}, {
+		.sign = crypto_mldsa_sign,
+		.verify = crypto_mldsa_verify,
+		.set_pub_key = crypto_mldsa_set_pub_key,
+		.set_priv_key = crypto_mldsa_set_priv_key,
+		.key_size = crypto_mldsa_key_size,
+		.max_size = crypto_mldsa_max_size,
+		.init = crypto_mldsa65_alg_init,
+		.exit = crypto_mldsa_alg_exit,
+		.base.cra_name = "mldsa65",
+		.base.cra_driver_name = "mldsa65-lib",
+		.base.cra_ctxsize = sizeof(struct crypto_mldsa_ctx),
+		.base.cra_module = THIS_MODULE,
+		.base.cra_priority = 5000,
+	}, {
+		.sign = crypto_mldsa_sign,
+		.verify = crypto_mldsa_verify,
+		.set_pub_key = crypto_mldsa_set_pub_key,
+		.set_priv_key = crypto_mldsa_set_priv_key,
+		.key_size = crypto_mldsa_key_size,
+		.max_size = crypto_mldsa_max_size,
+		.init = crypto_mldsa87_alg_init,
+		.exit = crypto_mldsa_alg_exit,
+		.base.cra_name = "mldsa87",
+		.base.cra_driver_name = "mldsa87-lib",
+		.base.cra_ctxsize = sizeof(struct crypto_mldsa_ctx),
+		.base.cra_module = THIS_MODULE,
+		.base.cra_priority = 5000,
+	},
+};
+
+static int __init mldsa_init(void)
+{
+	int ret, i;
+
+	for (i = 0; i < ARRAY_SIZE(crypto_mldsa_algs); i++) {
+		ret = crypto_register_sig(&crypto_mldsa_algs[i]);
+		if (ret < 0)
+			goto error;
+	}
+	return 0;
+
+error:
+	pr_err("Failed to register (%d)\n", ret);
+	for (i--; i >= 0; i--)
+		crypto_unregister_sig(&crypto_mldsa_algs[i]);
+	return ret;
+}
+module_init(mldsa_init);
+
+static void mldsa_exit(void)
+{
+	for (int i = 0; i < ARRAY_SIZE(crypto_mldsa_algs); i++)
+		crypto_unregister_sig(&crypto_mldsa_algs[i]);
+}
+module_exit(mldsa_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Crypto API support for ML-DSA signature verification");
+MODULE_ALIAS_CRYPTO("mldsa44");
+MODULE_ALIAS_CRYPTO("mldsa65");
+MODULE_ALIAS_CRYPTO("mldsa87");
{ "author": "David Howells <dhowells@redhat.com>", "date": "Mon, 2 Feb 2026 17:02:06 +0000", "thread_id": "20260202170216.2467036-1-dhowells@redhat.com.mbox.gz" }
lkml
[PATCH v16 0/7] x509, pkcs7, crypto: Add ML-DSA signing
Hi Lukas, Ignat,

[Note this is based on Eric Biggers' libcrypto-next branch].

These patches add ML-DSA module signing:

 (1) Add a crypto_sig interface for ML-DSA, verification only.

 (2) Generate a SHA256 hash of the X.509 TBSCertificate and check that in
     the blacklist.  Direct-sign ML-DSA doesn't generate an easily
     accessible hash.  Note that this changes behaviour as we no longer
     use whatever hash is specified in the certificate for this.

 (3) Rename the public_key_signature struct's "digest" and "digest_size"
     members to "m" and "m_size" to reflect that it's not necessarily a
     digest, but it is an input to the public key algorithm.

 (4) Modify PKCS#7 support to allow kernel module signatures to carry
     authenticatedAttributes as OpenSSL refuses to let them be opted out
     of for ML-DSA (CMS_NOATTR).  This adds an extra digest calculation
     to the process.

     Modify PKCS#7 to pass the authenticatedAttributes directly to the
     ML-DSA algorithm rather than passing over a digest as is done with
     RSA, as ML-DSA wants to do its own hashing and will add other stuff
     into the hash.  We could use hashML-DSA or an external mu instead,
     but they aren't standardised for CMS yet.

 (5) Add support to the PKCS#7 and X.509 parsers for ML-DSA.

 (6) Modify sign-file to handle OpenSSL not permitting CMS_NOATTR with
     ML-DSA and add ML-DSA to the choice of algorithm with which to sign
     modules.  Note that this might need some more 'select' lines in the
     Kconfig to select the lib stuff as well.

 (7) Add a config option to allow authenticatedAttributes to be used with
     ML-DSA for module signing.  Ordinarily, authenticatedAttributes are
     not permitted for this purpose; however, direct signing with ML-DSA
     will not be supported by OpenSSL until v4 is released.
The patches can also be found here:

	https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=keys-pqc

David

Changes
=======
ver #16)
 - Make the selection of ML-DSA for module signing when configuring
   contingent on openssl saying it supports ML-DSA (fix from Arnd
   Bergmann).
 - Make ML-DSA-related bits of sign-file contingent on openssl >= 3.0.0.

ver #15)
 - Undo a removed blank line to simplify the X.509 patch.
 - Split the rename of ->digest to ->m into its own patch.
 - In pkcs7_digest(), always copy the signedAttrs and modify rather than
   passing the replacement tag byte in a separate shash update call to
   the rest of the data.  That way the ->m buffer is very likely to be
   optimally aligned for the crypto.
 - Only allow authenticatedAttributes with ML-DSA for module signing and
   only if permission is given in the kernel config.

ver #14)
 - public_key:
   - Rename public_key::digest to public_key::m.
 - X.509:
   - Independently calculate the SHA256 hash for the blacklist check as
     an ML-DSA-signed X.509 cert doesn't generate a digest we can use.
   - Point public_key::m at the TBS data for ML-DSA.
 - PKCS#7:
   - Allocate a big enough digest buffer rather than reallocating in
     order to store the authattrs/signedattrs instead.
   - Merge the two patches that add direct signing support.
 - ML-DSA:
   - Use bool instead of u8.
   - Remove references to SHAKE in Kconfig and mention OpenSSL
     requirements there.
   - Limit ML-DSA with an intermediate hash (e.g. signedAttrs) to using
     SHA512 only.
   - Don't select CRYPTO_LIB_SHA3 for CRYPTO_MLDSA.
 - RSASSA-PSS:
   - Allow use with SHA256 and SHA384.
   - Fix calculation of emBits to be number of bits in the RSA modulus
     'n'.
   - Use strncmp() not memcmp() to avoid reading beyond end of string.
   - Use correct destructor in rsassa_params_parse().
   - Drop this algo for the moment.
 - Drop the pefile_context::digest_free for now - it's only set to true
   and is unrelated to public_key::digest_free.

ver #13)
 - Allow a zero-length salt in RSASSA-PSS.
 - Don't reject ECDSA/ECRDSA with SHA256 and SHA384, otherwise the FIPS
   selftest panics when used.
 - Add a FIPS test for RSASSA-PSS (from NIST's SigVerPSS_186-3.rsp).
 - Add a FIPS test for ML-DSA (from NIST's FIPS204 JSON set).

ver #12)
 - Rebased on Eric's libcrypto-next branch.
 - Delete references to Dilithium (ML-DSA derived from this).
 - Made sign-file supply CMS_NOATTR for ML-DSA if openssl >= v4.
 - Made it possible to do ML-DSA over the data without signedAttrs.
 - Made RSASSA-PSS info parser use strsep() and match_token().
 - Cleaned the RSASSA-PSS param parsing.
 - Added limitation on what hashes can be used with what algos.
 - Moved __free()-marked variables to the point of setting.

ver #11)
 - Rebased on Eric's libcrypto-next branch.
 - Added RSASSA-PSS support patches.

ver #10)
 - Replaced the Leancrypto ML-DSA implementation with Eric's.
 - Fixed Eric's implementation to have MODULE_* info.
 - Added a patch to drive Eric's ML-DSA implementation from crypto_sig.
 - Removed SHAKE256 from the list of available module hash algorithms.
 - Changed some more ML_DSA to MLDSA in config symbols.

ver #9)
 - ML-DSA changes:
   - Separate output into four modules (1 common, 3 strength-specific).
     - Solves Kconfig issue with needing to select at least one strength.
   - Separate the strength-specific crypto-lib APIs.
     - This is now generated by preprocessor-templating.
   - Remove the multiplexor code.
     - Multiplex the crypto-lib APIs by C type.
 - Fix the PKCS#7/X.509 code to have the correct algo names.

ver #8)
 - Moved the ML-DSA code to lib/crypto/mldsa/.
 - Renamed some bits from ml-dsa to mldsa.
 - Created a simplified API and placed that in include/crypto/mldsa.h.
 - Made the testing code use the simplified API.
 - Fixed a warning about implicitly casting between uint16_t and __le16.

ver #7)
 - Rebased on Eric's tree as that now contains all the necessary SHA-3
   infrastructure and drop the SHA-3 patches from here.
 - Added a minimal patch to provide shake256 support for crypto_sig.
 - Got rid of the memory allocation wrappers.
 - Removed the ML-DSA keypair generation code and the signing code,
   leaving only the signature verification code.
 - Removed the secret key handling code.
 - Removed the secret keys from the kunit tests and the signing testing.
 - Removed some unused bits from the ML-DSA code.
 - Downgraded the kdoc comments to ordinary comments, but keep the markup
   for easier comparison to Leancrypto.

ver #6)
 - Added a patch to make the jitterentropy RNG use lib/sha3.
 - Added back the crypto/sha3_generic changes.
 - Added ML-DSA implementation (still needs more cleanup).
 - Added kunit test for ML-DSA.
 - Modified PKCS#7 to accommodate ML-DSA.
 - Modified PKCS#7 and X.509 to allow ML-DSA to be specified and used.
 - Modified sign-file to not use CMS_NOATTR with ML-DSA.
 - Allowed SHA3 and SHAKE* algorithms for module signing default.
 - Allowed ML-DSA-{44,65,87} to be selected as the module signing
   default.

ver #5)
 - Fix gen-hash-testvecs.py to correctly handle algo names that contain
   a dash.
 - Fix gen-hash-testvecs.py to not generate HMAC for SHA3-* or SHAKE* as
   these don't currently have HMAC variants implemented.
 - Fix algo names to be correct.
 - Fix kunit module description as it now tests all SHA3 variants.

ver #4)
 - Fix a couple of arm64 build problems.
 - Doc fixes:
   - Fix the description of the algorithm to be closer to the NIST
     spec's terminology.
   - Don't talk of finalising the context for XOFs.
   - Don't say "Return: None".
   - Declare the "Context" to be "Any context" and make no mention of
     the fact that it might use the FPU.
   - Change "initialise" to "initialize".
   - Don't warn that the context is relatively large for stack use.
 - Use size_t for size parameters/variables.
 - Make the module_exit unconditional.
 - Dropped the crypto/ dir-affecting patches for the moment.

ver #3)
 - Renamed conflicting arm64 functions.
 - Made a separate wrapper API for each algorithm in the family.
 - Removed sha3_init(), sha3_reinit() and sha3_final().
 - Removed sha3_ctx::digest_size.
 - Renamed sha3_ctx::partial to sha3_ctx::absorb_offset.
 - Refer to the output of SHAKE* as "output" not "digest".
 - Moved the Iota transform into the one-round function.
 - Made sha3_update() warn if called after sha3_squeeze().
 - Simplified the module-load test to not do update after squeeze.
 - Added Return: and Context: kdoc statements and expanded the kdoc
   headers.
 - Added an API description document.
 - Overhauled the kunit tests.
   - Only have one kunit test.
   - Only call the general hash tester on one algo.
   - Add separate simple cursory checks for the other algos.
   - Add resqueezing tests.
   - Add some NIST example tests.
 - Changed crypto/sha3_generic to use this.
 - Added SHAKE128/256 to crypto/sha3_generic and crypto/testmgr.
 - Folded struct sha3_state into struct sha3_ctx.

ver #2)
 - Simplify the endianness handling.
 - Rename sha3_final() to sha3_squeeze() and don't clear the context at
   the end as it's permitted to continue calling sha3_final() to extract
   continuations of the digest (needed by ML-DSA).
 - Don't reapply the end marker to the hash state in continuation
   sha3_squeeze() unless sha3_update() gets called again (needed by
   ML-DSA).
 - Give sha3_squeeze() the amount of digest to produce as a parameter
   rather than using ctx->digest_size and don't return the amount
   digested.
 - Reimplement sha3_final() as a wrapper around sha3_squeeze() that
   extracts ctx->digest_size amount of digest and then zeroes out the
   context.  The latter is necessary to avoid upsetting
   hash-test-template.h.
 - Provide a sha3_reinit() function to clear the state, but to leave the
   parameters that indicate the hash properties unaffected, allowing for
   reuse.
 - Provide a sha3_set_digestsize() function to change the size of the
   digest to be extracted by sha3_final().  sha3_squeeze() takes a
   parameter for this instead.
 - Don't pass the digest size as a parameter to shake128/256_init() but
   rather default to 128/256 bits as per the function name.
 - Provide a sha3_clear() function to zero out the context.

David Howells (7):
  crypto: Add ML-DSA crypto_sig support
  x509: Separately calculate sha256 for blacklist
  pkcs7, x509: Rename ->digest to ->m
  pkcs7: Allow the signing algo to do whatever digestion it wants itself
  pkcs7, x509: Add ML-DSA support
  modsign: Enable ML-DSA module signing
  pkcs7: Allow authenticatedAttributes for ML-DSA

 Documentation/admin-guide/module-signing.rst |  16 +-
 certs/Kconfig                                |  40 ++++
 certs/Makefile                               |   3 +
 crypto/Kconfig                               |   9 +
 crypto/Makefile                              |   2 +
 crypto/asymmetric_keys/Kconfig               |  11 +
 crypto/asymmetric_keys/asymmetric_type.c     |   4 +-
 crypto/asymmetric_keys/pkcs7_parser.c        |  36 +++-
 crypto/asymmetric_keys/pkcs7_parser.h        |   3 +
 crypto/asymmetric_keys/pkcs7_verify.c        |  78 ++++---
 crypto/asymmetric_keys/public_key.c          |  13 +-
 crypto/asymmetric_keys/signature.c           |   3 +-
 crypto/asymmetric_keys/x509_cert_parser.c    |  27 ++-
 crypto/asymmetric_keys/x509_parser.h         |   2 +
 crypto/asymmetric_keys/x509_public_key.c     |  42 ++--
 crypto/mldsa.c                               | 201 +++++++++++++++++++
 include/crypto/public_key.h                  |   6 +-
 include/linux/oid_registry.h                 |   5 +
 scripts/sign-file.c                          |  39 +++-
 security/integrity/digsig_asymmetric.c       |   4 +-
 20 files changed, 473 insertions(+), 71 deletions(-)
 create mode 100644 crypto/mldsa.c
Calculate the SHA256 hash for blacklisting purposes independently of the
signature hash (which may be something other than SHA256).  This is
necessary because when ML-DSA is used, no digest is calculated.

Note that this represents a change of behaviour in that the hash used for
the blacklist check would previously have been whatever digest was used
for, say, RSA-based signatures.  It may be that this is inadvisable.

Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
cc: Lukas Wunner <lukas@wunner.de>
cc: Ignat Korchagin <ignat@cloudflare.com>
cc: Stephan Mueller <smueller@chronox.de>
cc: Eric Biggers <ebiggers@kernel.org>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: keyrings@vger.kernel.org
cc: linux-crypto@vger.kernel.org
---
 crypto/asymmetric_keys/x509_parser.h     |  2 ++
 crypto/asymmetric_keys/x509_public_key.c | 22 +++++++++++++---------
 2 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/crypto/asymmetric_keys/x509_parser.h b/crypto/asymmetric_keys/x509_parser.h
index 0688c222806b..b7aeebdddb36 100644
--- a/crypto/asymmetric_keys/x509_parser.h
+++ b/crypto/asymmetric_keys/x509_parser.h
@@ -9,12 +9,14 @@
 #include <linux/time.h>
 #include <crypto/public_key.h>
 #include <keys/asymmetric-type.h>
+#include <crypto/sha2.h>
 
 struct x509_certificate {
 	struct x509_certificate *next;
 	struct x509_certificate *signer;	/* Certificate that signed this one */
 	struct public_key *pub;			/* Public key details */
 	struct public_key_signature *sig;	/* Signature parameters */
+	u8 sha256[SHA256_DIGEST_SIZE];		/* Hash for blacklist purposes */
 	char *issuer;				/* Name of certificate issuer */
 	char *subject;				/* Name of certificate subject */
 	struct asymmetric_key_id *id;		/* Issuer + Serial number */
diff --git a/crypto/asymmetric_keys/x509_public_key.c b/crypto/asymmetric_keys/x509_public_key.c
index 12e3341e806b..79cc7b7a0630 100644
--- a/crypto/asymmetric_keys/x509_public_key.c
+++ b/crypto/asymmetric_keys/x509_public_key.c
@@ -31,6 +31,19 @@ int x509_get_sig_params(struct x509_certificate *cert)
 
 	pr_devel("==>%s()\n", __func__);
 
+	/* Calculate a SHA256 hash of the TBS and check it against the
+	 * blacklist.
+	 */
+	sha256(cert->tbs, cert->tbs_size, cert->sha256);
+	ret = is_hash_blacklisted(cert->sha256, sizeof(cert->sha256),
+				  BLACKLIST_HASH_X509_TBS);
+	if (ret == -EKEYREJECTED) {
+		pr_err("Cert %*phN is blacklisted\n",
+		       (int)sizeof(cert->sha256), cert->sha256);
+		cert->blacklisted = true;
+		ret = 0;
+	}
+
 	sig->s = kmemdup(cert->raw_sig, cert->raw_sig_size, GFP_KERNEL);
 	if (!sig->s)
 		return -ENOMEM;
@@ -69,15 +82,6 @@ int x509_get_sig_params(struct x509_certificate *cert)
 	if (ret < 0)
 		goto error_2;
 
-	ret = is_hash_blacklisted(sig->digest, sig->digest_size,
-				  BLACKLIST_HASH_X509_TBS);
-	if (ret == -EKEYREJECTED) {
-		pr_err("Cert %*phN is blacklisted\n",
-		       sig->digest_size, sig->digest);
-		cert->blacklisted = true;
-		ret = 0;
-	}
-
 error_2:
 	kfree(desc);
 error:
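The reordering above — hash the TBSCertificate with SHA-256 first, then consult the blacklist before any signature-specific work — can be sketched in userspace terms. The blacklist set below is hypothetical; the kernel instead queries the blacklist keyring via is_hash_blacklisted():

```python
import hashlib

# Hypothetical blacklist of SHA-256 TBS hashes (hex).  In the kernel
# these live in the blacklist keyring; a plain set stands in here.
BLACKLIST = {
    hashlib.sha256(b"revoked-tbs-certificate").hexdigest(),
}

def check_cert(tbs: bytes) -> tuple[str, bool]:
    """Hash the TBSCertificate with SHA-256 independently of whatever
    hash the signature algorithm uses (direct-sign ML-DSA produces
    none), and flag the cert if the hash is blacklisted."""
    digest = hashlib.sha256(tbs).hexdigest()
    return digest, digest in BLACKLIST

_, bad = check_cert(b"revoked-tbs-certificate")
_, ok = check_cert(b"some-other-certificate")
print(bad, ok)  # True False
```

The point of the change is that this check no longer depends on the certificate's own signature hash being computed at all.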
{ "author": "David Howells <dhowells@redhat.com>", "date": "Mon, 2 Feb 2026 17:02:07 +0000", "thread_id": "20260202170216.2467036-1-dhowells@redhat.com.mbox.gz" }
Rename ->digest and ->digest_size to ->m and ->m_size to represent the
input to the signature verification algorithm, reflecting that ->digest
may no longer actually *be* a digest.

Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
cc: Lukas Wunner <lukas@wunner.de>
cc: Ignat Korchagin <ignat@cloudflare.com>
cc: Stephan Mueller <smueller@chronox.de>
cc: Eric Biggers <ebiggers@kernel.org>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: keyrings@vger.kernel.org
cc: linux-crypto@vger.kernel.org
---
 crypto/asymmetric_keys/asymmetric_type.c |  4 ++--
 crypto/asymmetric_keys/pkcs7_verify.c    | 28 ++++++++++++------------
 crypto/asymmetric_keys/public_key.c      |  3 +--
 crypto/asymmetric_keys/signature.c       |  2 +-
 crypto/asymmetric_keys/x509_public_key.c | 10 ++++-----
 include/crypto/public_key.h              |  4 ++--
 security/integrity/digsig_asymmetric.c   |  4 ++--
 7 files changed, 26 insertions(+), 29 deletions(-)

diff --git a/crypto/asymmetric_keys/asymmetric_type.c b/crypto/asymmetric_keys/asymmetric_type.c
index 348966ea2175..2326743310b1 100644
--- a/crypto/asymmetric_keys/asymmetric_type.c
+++ b/crypto/asymmetric_keys/asymmetric_type.c
@@ -593,10 +593,10 @@ static int asymmetric_key_verify_signature(struct kernel_pkey_params *params,
 {
 	struct public_key_signature sig = {
 		.s_size		= params->in2_len,
-		.digest_size	= params->in_len,
+		.m_size		= params->in_len,
 		.encoding	= params->encoding,
 		.hash_algo	= params->hash_algo,
-		.digest		= (void *)in,
+		.m		= (void *)in,
 		.s		= (void *)in2,
 	};
 
diff --git a/crypto/asymmetric_keys/pkcs7_verify.c b/crypto/asymmetric_keys/pkcs7_verify.c
index 6d6475e3a9bf..aa085ec6fb1c 100644
--- a/crypto/asymmetric_keys/pkcs7_verify.c
+++ b/crypto/asymmetric_keys/pkcs7_verify.c
@@ -31,7 +31,7 @@ static int pkcs7_digest(struct pkcs7_message *pkcs7,
 	kenter(",%u,%s", sinfo->index, sinfo->sig->hash_algo);
 
 	/* The digest was calculated already. */
-	if (sig->digest)
+	if (sig->m)
 		return 0;
 
 	if (!sinfo->sig->hash_algo)
@@ -45,11 +45,11 @@ static int pkcs7_digest(struct pkcs7_message *pkcs7,
 		return (PTR_ERR(tfm) == -ENOENT) ? -ENOPKG : PTR_ERR(tfm);
 
 	desc_size = crypto_shash_descsize(tfm) + sizeof(*desc);
-	sig->digest_size = crypto_shash_digestsize(tfm);
+	sig->m_size = crypto_shash_digestsize(tfm);
 
 	ret = -ENOMEM;
-	sig->digest = kmalloc(sig->digest_size, GFP_KERNEL);
-	if (!sig->digest)
+	sig->m = kmalloc(sig->m_size, GFP_KERNEL);
+	if (!sig->m)
 		goto error_no_desc;
 
 	desc = kzalloc(desc_size, GFP_KERNEL);
@@ -59,11 +59,10 @@ static int pkcs7_digest(struct pkcs7_message *pkcs7,
 	desc->tfm = tfm;
 
 	/* Digest the message [RFC2315 9.3] */
-	ret = crypto_shash_digest(desc, pkcs7->data, pkcs7->data_len,
-				  sig->digest);
+	ret = crypto_shash_digest(desc, pkcs7->data, pkcs7->data_len, sig->m);
 	if (ret < 0)
 		goto error;
-	pr_devel("MsgDigest = [%*ph]\n", 8, sig->digest);
+	pr_devel("MsgDigest = [%*ph]\n", 8, sig->m);
 
 	/* However, if there are authenticated attributes, there must be a
 	 * message digest attribute amongst them which corresponds to the
@@ -78,14 +77,14 @@ static int pkcs7_digest(struct pkcs7_message *pkcs7,
 			goto error;
 		}
 
-		if (sinfo->msgdigest_len != sig->digest_size) {
+		if (sinfo->msgdigest_len != sig->m_size) {
 			pr_warn("Sig %u: Invalid digest size (%u)\n",
 				sinfo->index, sinfo->msgdigest_len);
 			ret = -EBADMSG;
 			goto error;
 		}
 
-		if (memcmp(sig->digest, sinfo->msgdigest,
+		if (memcmp(sig->m, sinfo->msgdigest,
 			   sinfo->msgdigest_len) != 0) {
 			pr_warn("Sig %u: Message digest doesn't match\n",
 				sinfo->index);
@@ -98,7 +97,8 @@ static int pkcs7_digest(struct pkcs7_message *pkcs7,
 		 * convert the attributes from a CONT.0 into a SET before we
 		 * hash it.
 		 */
-		memset(sig->digest, 0, sig->digest_size);
+		memset(sig->m, 0, sig->m_size);
+
 		ret = crypto_shash_init(desc);
 		if (ret < 0)
@@ -108,10 +108,10 @@ static int pkcs7_digest(struct pkcs7_message *pkcs7,
 		if (ret < 0)
 			goto error;
 		ret = crypto_shash_finup(desc, sinfo->authattrs,
-					 sinfo->authattrs_len, sig->digest);
+					 sinfo->authattrs_len, sig->m);
 		if (ret < 0)
 			goto error;
-		pr_devel("AADigest = [%*ph]\n", 8, sig->digest);
+		pr_devel("AADigest = [%*ph]\n", 8, sig->m);
 	}
 
 error:
@@ -138,8 +138,8 @@ int pkcs7_get_digest(struct pkcs7_message *pkcs7, const u8 **buf, u32 *len,
 	if (ret)
 		return ret;
 
-	*buf = sinfo->sig->digest;
-	*len = sinfo->sig->digest_size;
+	*buf = sinfo->sig->m;
+	*len = sinfo->sig->m_size;
 
 	i = match_string(hash_algo_name, HASH_ALGO__LAST,
 			 sinfo->sig->hash_algo);
diff --git a/crypto/asymmetric_keys/public_key.c b/crypto/asymmetric_keys/public_key.c
index e5b177c8e842..a46356e0c08b 100644
--- a/crypto/asymmetric_keys/public_key.c
+++ b/crypto/asymmetric_keys/public_key.c
@@ -425,8 +425,7 @@ int public_key_verify_signature(const struct public_key *pkey,
 	if (ret)
 		goto error_free_key;
 
-	ret = crypto_sig_verify(tfm, sig->s, sig->s_size,
-				sig->digest, sig->digest_size);
+	ret = crypto_sig_verify(tfm, sig->s, sig->s_size, sig->m, sig->m_size);
 
 error_free_key:
 	kfree_sensitive(key);
diff --git a/crypto/asymmetric_keys/signature.c b/crypto/asymmetric_keys/signature.c
index 041d04b5c953..f4ec126121b3 100644
--- a/crypto/asymmetric_keys/signature.c
+++ b/crypto/asymmetric_keys/signature.c
@@ -28,7 +28,7 @@ void public_key_signature_free(struct public_key_signature *sig)
 		for (i = 0; i < ARRAY_SIZE(sig->auth_ids); i++)
 			kfree(sig->auth_ids[i]);
 		kfree(sig->s);
-		kfree(sig->digest);
+		kfree(sig->m);
 		kfree(sig);
 	}
 }
diff --git a/crypto/asymmetric_keys/x509_public_key.c b/crypto/asymmetric_keys/x509_public_key.c
index 79cc7b7a0630..3854f7ae4ed0 100644
--- a/crypto/asymmetric_keys/x509_public_key.c
+++ b/crypto/asymmetric_keys/x509_public_key.c
@@ -63,11 +63,11 @@ int x509_get_sig_params(struct x509_certificate *cert)
 	}
 
 	desc_size = crypto_shash_descsize(tfm) + sizeof(*desc);
-	sig->digest_size = crypto_shash_digestsize(tfm);
+	sig->m_size = crypto_shash_digestsize(tfm);
 
 	ret = -ENOMEM;
-	sig->digest = kmalloc(sig->digest_size, GFP_KERNEL);
-	if (!sig->digest)
+	sig->m = kmalloc(sig->m_size, GFP_KERNEL);
+	if (!sig->m)
 		goto error;
 
 	desc = kzalloc(desc_size, GFP_KERNEL);
@@ -76,9 +76,7 @@ int x509_get_sig_params(struct x509_certificate *cert)
 
 	desc->tfm = tfm;
 
-	ret = crypto_shash_digest(desc, cert->tbs, cert->tbs_size,
-				  sig->digest);
-
+	ret = crypto_shash_digest(desc, cert->tbs, cert->tbs_size, sig->m);
 	if (ret < 0)
 		goto error_2;
 
diff --git a/include/crypto/public_key.h b/include/crypto/public_key.h
index 81098e00c08f..bd38ba4d217d 100644
--- a/include/crypto/public_key.h
+++ b/include/crypto/public_key.h
@@ -43,9 +43,9 @@ extern void public_key_free(struct public_key *key);
 struct public_key_signature {
 	struct asymmetric_key_id *auth_ids[3];
 	u8 *s;			/* Signature */
-	u8 *digest;
+	u8 *m;			/* Message data to pass to verifier */
 	u32 s_size;		/* Number of bytes in signature */
-	u32 digest_size;	/* Number of bytes in digest */
+	u32 m_size;		/* Number of bytes in ->m */
 	const char *pkey_algo;
 	const char *hash_algo;
 	const char *encoding;
diff --git a/security/integrity/digsig_asymmetric.c b/security/integrity/digsig_asymmetric.c
index 457c0a396caf..87be85f477d1 100644
--- a/security/integrity/digsig_asymmetric.c
+++ b/security/integrity/digsig_asymmetric.c
@@ -121,8 +121,8 @@ int asymmetric_verify(struct key *keyring, const char *sig,
 		goto out;
 	}
 
-	pks.digest = (u8 *)data;
-	pks.digest_size = datalen;
+	pks.m = (u8 *)data;
+	pks.m_size = datalen;
 	pks.s = hdr->sig;
 	pks.s_size = siglen;
 	ret = verify_signature(key, &pks);
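The motivation for the rename can be sketched as a small dispatch: with RSA-style algorithms the verifier input is a digest of the data, while ML-DSA is handed the message itself and hashes internally, so the field is "message input", not necessarily a digest. Field names below mirror public_key_signature; the dispatch itself is illustrative, not the kernel's code path:

```python
import hashlib

def build_sig_params(pkey_algo: str, data: bytes) -> dict:
    """Illustrative sketch: for ML-DSA, ->m is the raw message (the
    algorithm does its own hashing); for hash-then-sign algorithms like
    RSA, ->m is a digest of the data."""
    if pkey_algo.startswith("mldsa"):
        m = data                          # not a digest at all
    else:
        m = hashlib.sha256(data).digest() # conventional digest input
    return {"pkey_algo": pkey_algo, "m": m, "m_size": len(m)}

rsa = build_sig_params("rsa", b"module contents")
mldsa = build_sig_params("mldsa65", b"module contents")
print(rsa["m_size"], mldsa["m_size"])  # 32 15
```

Either way the pair (m, m_size) is what crypto_sig_verify() ultimately receives, which is why one neutrally named field suffices.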
{ "author": "David Howells <dhowells@redhat.com>", "date": "Mon, 2 Feb 2026 17:02:08 +0000", "thread_id": "20260202170216.2467036-1-dhowells@redhat.com.mbox.gz" }
lkml
[PATCH v16 0/7] x509, pkcs7, crypto: Add ML-DSA signing
Hi Lukas, Ignat,

[Note this is based on Eric Biggers' libcrypto-next branch].

These patches add ML-DSA module signing:

 (1) Add a crypto_sig interface for ML-DSA, verification only.

 (2) Generate a SHA256 hash of the X.509 TBSCertificate and check that in
     the blacklist.  Direct-sign ML-DSA doesn't generate an easily
     accessible hash.  Note that this changes behaviour as we no longer
     use whatever hash is specified in the certificate for this.

 (3) Rename the public_key_signature struct's "digest" and "digest_size"
     members to "m" and "m_size" to reflect that it's not necessarily a
     digest, but it is an input to the public key algorithm.

 (4) Modify PKCS#7 support to allow kernel module signatures to carry
     authenticatedAttributes as OpenSSL refuses to let them be opted out
     of for ML-DSA (CMS_NOATTR).  This adds an extra digest calculation
     to the process.

     Modify PKCS#7 to pass the authenticatedAttributes directly to the
     ML-DSA algorithm rather than passing over a digest as is done with
     RSA as ML-DSA wants to do its own hashing and will add other stuff
     into the hash.  We could use hashML-DSA or an external mu instead,
     but they aren't standardised for CMS yet.

 (5) Add support to the PKCS#7 and X.509 parsers for ML-DSA.

 (6) Modify sign-file to handle OpenSSL not permitting CMS_NOATTR with
     ML-DSA and add ML-DSA to the choice of algorithm with which to sign
     modules.  Note that this might need some more 'select' lines in the
     Kconfig to select the lib stuff as well.

 (7) Add a config option to allow authenticatedAttributes to be used with
     ML-DSA for module signing.  Ordinarily, authenticatedAttributes are
     not permitted for this purpose, however direct signing with ML-DSA
     will not be supported by OpenSSL until v4 is released.
The patches can also be found here:

	https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=keys-pqc

David

Changes
=======
ver #16)
 - Make the selection of ML-DSA for module signing when configuring
   contingent on openssl saying it supports ML-DSA (fix from Arnd
   Bergmann).
 - Make ML-DSA-related bits of sign-file contingent on openssl >= 3.0.0.

ver #15)
 - Undo a removed blank line to simplify the X.509 patch.
 - Split the rename of ->digest to ->m into its own patch.
 - In pkcs7_digest(), always copy the signedAttrs and modify rather than
   passing the replacement tag byte in a separate shash update call to the
   rest of the data.  That way the ->m buffer is very likely to be
   optimally aligned for the crypto.
 - Only allow authenticatedAttributes with ML-DSA for module signing and
   only if permission is given in the kernel config.

ver #14)
 - public_key:
   - Rename public_key::digest to public_key::m.
 - X.509:
   - Independently calculate the SHA256 hash for the blacklist check as an
     ML-DSA-signed X.509 cert doesn't generate a digest we can use.
   - Point public_key::m at the TBS data for ML-DSA.
 - PKCS#7:
   - Allocate a big enough digest buffer rather than reallocating in order
     to store the authattrs/signedattrs instead.
   - Merge the two patches that add direct signing support.
 - ML-DSA:
   - Use bool instead of u8.
   - Remove references to SHAKE in Kconfig and mention OpenSSL requirements
     there.
   - Limit ML-DSA with an intermediate hash (e.g. signedAttrs) to using
     SHA512 only.
   - Don't select CRYPTO_LIB_SHA3 for CRYPTO_MLDSA.
 - RSASSA-PSS:
   - Allow use with SHA256 and SHA384.
   - Fix calculation of emBits to be number of bits in the RSA modulus 'n'.
   - Use strncmp() not memcmp() to avoid reading beyond end of string.
   - Use correct destructor in rsassa_params_parse().
   - Drop this algo for the moment.
 - Drop the pefile_context::digest_free for now - it's only set to true and
   is unrelated to public_key::digest_free.

ver #13)
 - Allow a zero-length salt in RSASSA-PSS.
 - Don't reject ECDSA/ECRDSA with SHA256 and SHA384 otherwise the FIPS
   selftest panics when used.
 - Add a FIPS test for RSASSA-PSS (from NIST's SigVerPSS_186-3.rsp).
 - Add a FIPS test for ML-DSA (from NIST's FIPS204 JSON set).

ver #12)
 - Rebased on Eric's libcrypto-next branch.
 - Delete references to Dilithium (ML-DSA derived from this).
 - Made sign-file supply CMS_NOATTR for ML-DSA if openssl >= v4.
 - Made it possible to do ML-DSA over the data without signedAttrs.
 - Made RSASSA-PSS info parser use strsep() and match_token().
 - Cleaned the RSASSA-PSS param parsing.
 - Added limitation on what hashes can be used with what algos.
 - Moved __free()-marked variables to the point of setting.

ver #11)
 - Rebased on Eric's libcrypto-next branch.
 - Added RSASSA-PSS support patches.

ver #10)
 - Replaced the Leancrypto ML-DSA implementation with Eric's.
 - Fixed Eric's implementation to have MODULE_* info.
 - Added a patch to drive Eric's ML-DSA implementation from crypto_sig.
 - Removed SHAKE256 from the list of available module hash algorithms.
 - Changed some more ML_DSA to MLDSA in config symbols.

ver #9)
 - ML-DSA changes:
   - Separate output into four modules (1 common, 3 strength-specific).
     - Solves Kconfig issue with needing to select at least one strength.
   - Separate the strength-specific crypto-lib APIs.
     - This is now generated by preprocessor-templating.
   - Remove the multiplexor code.
     - Multiplex the crypto-lib APIs by C type.
 - Fix the PKCS#7/X.509 code to have the correct algo names.

ver #8)
 - Moved the ML-DSA code to lib/crypto/mldsa/.
 - Renamed some bits from ml-dsa to mldsa.
 - Created a simplified API and placed that in include/crypto/mldsa.h.
 - Made the testing code use the simplified API.
 - Fixed a warning about implicitly casting between uint16_t and __le16.

ver #7)
 - Rebased on Eric's tree as that now contains all the necessary SHA-3
   infrastructure and drop the SHA-3 patches from here.
 - Added a minimal patch to provide shake256 support for crypto_sig.
 - Got rid of the memory allocation wrappers.
 - Removed the ML-DSA keypair generation code and the signing code, leaving
   only the signature verification code.
 - Removed the secret key handling code.
 - Removed the secret keys from the kunit tests and the signing testing.
 - Removed some unused bits from the ML-DSA code.
 - Downgraded the kdoc comments to ordinary comments, but kept the markup
   for easier comparison to Leancrypto.

ver #6)
 - Added a patch to make the jitterentropy RNG use lib/sha3.
 - Added back the crypto/sha3_generic changes.
 - Added ML-DSA implementation (still needs more cleanup).
 - Added kunit test for ML-DSA.
 - Modified PKCS#7 to accommodate ML-DSA.
 - Modified PKCS#7 and X.509 to allow ML-DSA to be specified and used.
 - Modified sign-file to not use CMS_NOATTR with ML-DSA.
 - Allowed SHA3 and SHAKE* algorithms for module signing default.
 - Allowed ML-DSA-{44,65,87} to be selected as the module signing default.

ver #5)
 - Fix gen-hash-testvecs.py to correctly handle algo names that contain a
   dash.
 - Fix gen-hash-testvecs.py to not generate HMAC for SHA3-* or SHAKE* as
   these don't currently have HMAC variants implemented.
 - Fix algo names to be correct.
 - Fix kunit module description as it now tests all SHA3 variants.

ver #4)
 - Fix a couple of arm64 build problems.
 - Doc fixes:
   - Fix the description of the algorithm to be closer to the NIST spec's
     terminology.
   - Don't talk of finalising the context for XOFs.
   - Don't say "Return: None".
   - Declare the "Context" to be "Any context" and make no mention of the
     fact that it might use the FPU.
   - Change "initialise" to "initialize".
   - Don't warn that the context is relatively large for stack use.
 - Use size_t for size parameters/variables.
 - Make the module_exit unconditional.
 - Dropped the crypto/ dir-affecting patches for the moment.

ver #3)
 - Renamed conflicting arm64 functions.
 - Made a separate wrapper API for each algorithm in the family.
 - Removed sha3_init(), sha3_reinit() and sha3_final().
 - Removed sha3_ctx::digest_size.
 - Renamed sha3_ctx::partial to sha3_ctx::absorb_offset.
 - Refer to the output of SHAKE* as "output" not "digest".
 - Moved the Iota transform into the one-round function.
 - Made sha3_update() warn if called after sha3_squeeze().
 - Simplified the module-load test to not do update after squeeze.
 - Added Return: and Context: kdoc statements and expanded the kdoc
   headers.
 - Added an API description document.
 - Overhauled the kunit tests.
   - Only have one kunit test.
   - Only call the general hash tester on one algo.
   - Add separate simple cursory checks for the other algos.
   - Add resqueezing tests.
   - Add some NIST example tests.
 - Changed crypto/sha3_generic to use this.
 - Added SHAKE128/256 to crypto/sha3_generic and crypto/testmgr.
 - Folded struct sha3_state into struct sha3_ctx.

ver #2)
 - Simplify the endianness handling.
 - Rename sha3_final() to sha3_squeeze() and don't clear the context at the
   end as it's permitted to continue calling sha3_final() to extract
   continuations of the digest (needed by ML-DSA).
 - Don't reapply the end marker to the hash state in continuation
   sha3_squeeze() unless sha3_update() gets called again (needed by
   ML-DSA).
 - Give sha3_squeeze() the amount of digest to produce as a parameter
   rather than using ctx->digest_size and don't return the amount digested.
 - Reimplement sha3_final() as a wrapper around sha3_squeeze() that
   extracts ctx->digest_size amount of digest and then zeroes out the
   context.  The latter is necessary to avoid upsetting
   hash-test-template.h.
 - Provide a sha3_reinit() function to clear the state, but to leave the
   parameters that indicate the hash properties unaffected, allowing for
   reuse.
 - Provide a sha3_set_digestsize() function to change the size of the
   digest to be extracted by sha3_final().  sha3_squeeze() takes a
   parameter for this instead.
 - Don't pass the digest size as a parameter to shake128/256_init() but
   rather default to 128/256 bits as per the function name.
 - Provide a sha3_clear() function to zero out the context.

David Howells (7):
  crypto: Add ML-DSA crypto_sig support
  x509: Separately calculate sha256 for blacklist
  pkcs7, x509: Rename ->digest to ->m
  pkcs7: Allow the signing algo to do whatever digestion it wants itself
  pkcs7, x509: Add ML-DSA support
  modsign: Enable ML-DSA module signing
  pkcs7: Allow authenticatedAttributes for ML-DSA

 Documentation/admin-guide/module-signing.rst |  16 +-
 certs/Kconfig                                |  40 ++++
 certs/Makefile                               |   3 +
 crypto/Kconfig                               |   9 +
 crypto/Makefile                              |   2 +
 crypto/asymmetric_keys/Kconfig               |  11 +
 crypto/asymmetric_keys/asymmetric_type.c     |   4 +-
 crypto/asymmetric_keys/pkcs7_parser.c        |  36 +++-
 crypto/asymmetric_keys/pkcs7_parser.h        |   3 +
 crypto/asymmetric_keys/pkcs7_verify.c        |  78 ++++---
 crypto/asymmetric_keys/public_key.c          |  13 +-
 crypto/asymmetric_keys/signature.c           |   3 +-
 crypto/asymmetric_keys/x509_cert_parser.c    |  27 ++-
 crypto/asymmetric_keys/x509_parser.h         |   2 +
 crypto/asymmetric_keys/x509_public_key.c     |  42 ++--
 crypto/mldsa.c                               | 201 +++++++++++++++++++
 include/crypto/public_key.h                  |   6 +-
 include/linux/oid_registry.h                 |   5 +
 scripts/sign-file.c                          |  39 +++-
 security/integrity/digsig_asymmetric.c       |   4 +-
 20 files changed, 473 insertions(+), 71 deletions(-)
 create mode 100644 crypto/mldsa.c
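Point (3) above can be illustrated with a small userspace mock: after the rename, ->m is simply "whatever the verifier consumes" - a hash for RSA/ECDSA, the raw message for a direct-signing algorithm. This is only a sketch; the struct and helper names are invented, echoing but not reproducing the kernel's public_key_signature and asymmetric_verify():

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Userspace mock of the renamed fields of struct public_key_signature. */
struct mock_pk_sig {
	uint8_t	 *s;		/* Signature */
	uint8_t	 *m;		/* Message data to pass to verifier */
	uint32_t  s_size;	/* Number of bytes in signature */
	uint32_t  m_size;	/* Number of bytes in ->m */
};

/* Analogue of what asymmetric_verify() now does with pks.m/pks.m_size:
 * hand the caller's data straight to ->m rather than insisting it be a
 * separately computed digest. */
static void mock_set_verify_input(struct mock_pk_sig *sig,
				  uint8_t *data, uint32_t datalen)
{
	sig->m = data;
	sig->m_size = datalen;
}
```

The point of the rename is exactly this neutrality: the same two fields can carry a 32-byte SHA256 output or a multi-kilobyte message for ML-DSA.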
Allow the data to be verified in a PKCS#7 or CMS message to be passed
directly to an asymmetric cipher algorithm (e.g. ML-DSA) if it wants to do
whatever passes for hashing/digestion itself.  The normal digestion of the
data is then skipped as that would be ignored unless another signed info in
the message has some other algorithm that needs it.

The 'data to be verified' may be the content of the PKCS#7 message or it
will be the authenticatedAttributes (signedAttrs if CMS), modified, if
those are present.

This is done by:

 (1) Make ->m and ->m_size point to the data to be verified rather than
     making public_key_verify_signature() access the data directly.  This
     is so that keyctl(KEYCTL_PKEY_VERIFY) will still work.

 (2) Add a flag, ->algo_takes_data, to indicate that the verification
     algorithm wants to access the data to be verified directly rather
     than having it digested first.

 (3) If the PKCS#7 message has authenticatedAttributes (or CMS
     signedAttrs), then the digest contained therein will be validated as
     now, and the modified attrs blob will either be digested or assigned
     to ->m as appropriate.

 (4) If present, always copy and modify the authenticatedAttributes (or
     signedAttrs) then digest that in one go rather than calling the shash
     update twice (once for the tag and once for the rest).

 (5) For ML-DSA, point ->m to the TBSCertificate instead of digesting it
     and using the digest.

Note that whilst ML-DSA does allow for an "external mu", CMS doesn't yet
have that standardised.
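The dispatch described above boils down to a two-flag decision, which can be condensed into a tiny userspace helper; the enum and function names here are invented for illustration and are not kernel API:

```c
#include <stdbool.h>

/* Where pkcs7_digest()-style logic ends up pointing ->m, per the rules
 * above.  Names are invented for illustration. */
enum m_source {
	M_RAW_CONTENT,	/* ->m = message content (direct-signing, no authattrs) */
	M_SIGNED_ATTRS,	/* ->m = retagged authattrs (direct-signing + authattrs) */
	M_DIGEST,	/* ->m = hash output (RSA/ECDSA and friends) */
};

static enum m_source choose_m_source(bool have_authattrs, bool algo_takes_data)
{
	if (!algo_takes_data)
		return M_DIGEST;	/* conventional algos always prehash */
	return have_authattrs ? M_SIGNED_ATTRS : M_RAW_CONTENT;
}
```

The four combinations cover all the cases in the commit message: only an ->algo_takes_data algorithm ever sees undigested bytes, and which bytes it sees depends on whether authenticatedAttributes are present.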
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Lukas Wunner <lukas@wunner.de>
cc: Ignat Korchagin <ignat@cloudflare.com>
cc: Stephan Mueller <smueller@chronox.de>
cc: Eric Biggers <ebiggers@kernel.org>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: keyrings@vger.kernel.org
cc: linux-crypto@vger.kernel.org
---
 crypto/asymmetric_keys/pkcs7_parser.c    |  4 +-
 crypto/asymmetric_keys/pkcs7_verify.c    | 52 ++++++++++++++++--------
 crypto/asymmetric_keys/signature.c       |  3 +-
 crypto/asymmetric_keys/x509_public_key.c | 10 +++++
 include/crypto/public_key.h              |  2 +
 5 files changed, 51 insertions(+), 20 deletions(-)

diff --git a/crypto/asymmetric_keys/pkcs7_parser.c b/crypto/asymmetric_keys/pkcs7_parser.c
index 423d13c47545..3cdbab3b9f50 100644
--- a/crypto/asymmetric_keys/pkcs7_parser.c
+++ b/crypto/asymmetric_keys/pkcs7_parser.c
@@ -599,8 +599,8 @@ int pkcs7_sig_note_set_of_authattrs(void *context, size_t hdrlen,
 	}
 
 	/* We need to switch the 'CONT 0' to a 'SET OF' when we digest */
-	sinfo->authattrs = value - (hdrlen - 1);
-	sinfo->authattrs_len = vlen + (hdrlen - 1);
+	sinfo->authattrs = value - hdrlen;
+	sinfo->authattrs_len = vlen + hdrlen;
 
 	return 0;
 }
diff --git a/crypto/asymmetric_keys/pkcs7_verify.c b/crypto/asymmetric_keys/pkcs7_verify.c
index aa085ec6fb1c..06abb9838f95 100644
--- a/crypto/asymmetric_keys/pkcs7_verify.c
+++ b/crypto/asymmetric_keys/pkcs7_verify.c
@@ -30,6 +30,16 @@ static int pkcs7_digest(struct pkcs7_message *pkcs7,
 	kenter(",%u,%s", sinfo->index, sinfo->sig->hash_algo);
 
+	if (!sinfo->authattrs && sig->algo_takes_data) {
+		/* There's no intermediate digest and the signature algo
+		 * doesn't want the data prehashing.
+		 */
+		sig->m = (void *)pkcs7->data;
+		sig->m_size = pkcs7->data_len;
+		sig->m_free = false;
+		return 0;
+	}
+
 	/* The digest was calculated already. */
 	if (sig->m)
 		return 0;
@@ -48,9 +58,10 @@ static int pkcs7_digest(struct pkcs7_message *pkcs7,
 	sig->m_size = crypto_shash_digestsize(tfm);
 
 	ret = -ENOMEM;
-	sig->m = kmalloc(sig->m_size, GFP_KERNEL);
+	sig->m = kmalloc(umax(sinfo->authattrs_len, sig->m_size), GFP_KERNEL);
 	if (!sig->m)
 		goto error_no_desc;
+	sig->m_free = true;
 
 	desc = kzalloc(desc_size, GFP_KERNEL);
 	if (!desc)
@@ -69,8 +80,6 @@ static int pkcs7_digest(struct pkcs7_message *pkcs7,
 	 * digest we just calculated.
 	 */
 	if (sinfo->authattrs) {
-		u8 tag;
-
 		if (!sinfo->msgdigest) {
 			pr_warn("Sig %u: No messageDigest\n", sinfo->index);
 			ret = -EKEYREJECTED;
@@ -96,21 +105,25 @@ static int pkcs7_digest(struct pkcs7_message *pkcs7,
 		 * as the contents of the digest instead.  Note that we need to
 		 * convert the attributes from a CONT.0 into a SET before we
 		 * hash it.
+		 *
+		 * However, for certain algorithms, such as ML-DSA, the digest
+		 * is integrated into the signing algorithm.  In such a case,
+		 * we copy the authattrs, modifying the tag type, and set that
+		 * as the digest.
 		 */
-		memset(sig->m, 0, sig->m_size);
-
-
-		ret = crypto_shash_init(desc);
-		if (ret < 0)
-			goto error;
-		tag = ASN1_CONS_BIT | ASN1_SET;
-		ret = crypto_shash_update(desc, &tag, 1);
-		if (ret < 0)
-			goto error;
-		ret = crypto_shash_finup(desc, sinfo->authattrs,
-					 sinfo->authattrs_len, sig->m);
-		if (ret < 0)
-			goto error;
+		memcpy(sig->m, sinfo->authattrs, sinfo->authattrs_len);
+		sig->m[0] = ASN1_CONS_BIT | ASN1_SET;
+
+		if (sig->algo_takes_data) {
+			sig->m_size = sinfo->authattrs_len;
+			ret = 0;
+		} else {
+			ret = crypto_shash_digest(desc, sig->m,
+						  sinfo->authattrs_len,
+						  sig->m);
+			if (ret < 0)
+				goto error;
+		}
 
 		pr_devel("AADigest = [%*ph]\n", 8, sig->m);
 	}
@@ -137,6 +150,11 @@ int pkcs7_get_digest(struct pkcs7_message *pkcs7, const u8 **buf, u32 *len,
 	ret = pkcs7_digest(pkcs7, sinfo);
 	if (ret)
 		return ret;
+	if (!sinfo->sig->m_free) {
+		pr_notice_once("%s: No digest available\n", __func__);
+		return -EINVAL; /* TODO: MLDSA doesn't necessarily calculate an
+				 * intermediate digest. */
+	}
 
 	*buf = sinfo->sig->m;
 	*len = sinfo->sig->m_size;
diff --git a/crypto/asymmetric_keys/signature.c b/crypto/asymmetric_keys/signature.c
index f4ec126121b3..a5ac7a53b670 100644
--- a/crypto/asymmetric_keys/signature.c
+++ b/crypto/asymmetric_keys/signature.c
@@ -28,7 +28,8 @@ void public_key_signature_free(struct public_key_signature *sig)
 		for (i = 0; i < ARRAY_SIZE(sig->auth_ids); i++)
 			kfree(sig->auth_ids[i]);
 		kfree(sig->s);
-		kfree(sig->m);
+		if (sig->m_free)
+			kfree(sig->m);
 		kfree(sig);
 	}
 }
diff --git a/crypto/asymmetric_keys/x509_public_key.c b/crypto/asymmetric_keys/x509_public_key.c
index 3854f7ae4ed0..27b4fea37845 100644
--- a/crypto/asymmetric_keys/x509_public_key.c
+++ b/crypto/asymmetric_keys/x509_public_key.c
@@ -50,6 +50,14 @@ int x509_get_sig_params(struct x509_certificate *cert)
 	sig->s_size = cert->raw_sig_size;
 
+	if (sig->algo_takes_data) {
+		/* The signature algorithm does whatever passes for hashing. */
+		sig->m = (u8 *)cert->tbs;
+		sig->m_size = cert->tbs_size;
+		sig->m_free = false;
+		goto out;
+	}
+
 	/* Allocate the hashing algorithm we're going to need and find out how
 	 * big the hash operational data will be.
 	 */
@@ -69,6 +77,7 @@ int x509_get_sig_params(struct x509_certificate *cert)
 	sig->m = kmalloc(sig->m_size, GFP_KERNEL);
 	if (!sig->m)
 		goto error;
+	sig->m_free = true;
 
 	desc = kzalloc(desc_size, GFP_KERNEL);
 	if (!desc)
@@ -84,6 +93,7 @@ int x509_get_sig_params(struct x509_certificate *cert)
 	kfree(desc);
 error:
 	crypto_free_shash(tfm);
+out:
 	pr_devel("<==%s() = %d\n", __func__, ret);
 	return ret;
 }
diff --git a/include/crypto/public_key.h b/include/crypto/public_key.h
index bd38ba4d217d..4c5199b20338 100644
--- a/include/crypto/public_key.h
+++ b/include/crypto/public_key.h
@@ -46,6 +46,8 @@ struct public_key_signature {
 	u8 *m;			/* Message data to pass to verifier */
 	u32 s_size;		/* Number of bytes in signature */
 	u32 m_size;		/* Number of bytes in ->m */
+	bool m_free;		/* T if ->m needs freeing */
+	bool algo_takes_data;	/* T if public key algo operates on data, not a hash */
 	const char *pkey_algo;
 	const char *hash_algo;
 	const char *encoding;
{ "author": "David Howells <dhowells@redhat.com>", "date": "Mon, 2 Feb 2026 17:02:09 +0000", "thread_id": "20260202170216.2467036-1-dhowells@redhat.com.mbox.gz" }
lkml
[PATCH v16 0/7] x509, pkcs7, crypto: Add ML-DSA signing
Add support for ML-DSA keys and signatures to the CMS/PKCS#7 and X.509
implementations.  ML-DSA-44, -65 and -87 are all supported.

For X.509 certificates, the TBSCertificate is required to be signed
directly; for CMS, direct signing of the data is preferred, though use of
SHA512 (and only that) as an intermediate hash of the content is permitted
with signedAttrs.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Lukas Wunner <lukas@wunner.de>
cc: Ignat Korchagin <ignat@cloudflare.com>
cc: Stephan Mueller <smueller@chronox.de>
cc: Eric Biggers <ebiggers@kernel.org>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: keyrings@vger.kernel.org
cc: linux-crypto@vger.kernel.org
---
 crypto/asymmetric_keys/pkcs7_parser.c     | 24 +++++++++++++++++++-
 crypto/asymmetric_keys/public_key.c       | 10 +++++++++
 crypto/asymmetric_keys/x509_cert_parser.c | 27 ++++++++++++++++++++++-
 include/linux/oid_registry.h              |  5 +++++
 4 files changed, 64 insertions(+), 2 deletions(-)

diff --git a/crypto/asymmetric_keys/pkcs7_parser.c b/crypto/asymmetric_keys/pkcs7_parser.c
index 3cdbab3b9f50..594a8f1d9dfb 100644
--- a/crypto/asymmetric_keys/pkcs7_parser.c
+++ b/crypto/asymmetric_keys/pkcs7_parser.c
@@ -95,11 +95,18 @@ static int pkcs7_check_authattrs(struct pkcs7_message *msg)
 	if (sinfo->authattrs) {
 		want = true;
 		msg->have_authattrs = true;
+	} else if (sinfo->sig->algo_takes_data) {
+		sinfo->sig->hash_algo = "none";
 	}
 
-	for (sinfo = sinfo->next; sinfo; sinfo = sinfo->next)
+	for (sinfo = sinfo->next; sinfo; sinfo = sinfo->next) {
 		if (!!sinfo->authattrs != want)
 			goto inconsistent;
+
+		if (!sinfo->authattrs &&
+		    sinfo->sig->algo_takes_data)
+			sinfo->sig->hash_algo = "none";
+	}
 	return 0;
 
 inconsistent:
@@ -297,6 +304,21 @@ int pkcs7_sig_note_pkey_algo(void *context, size_t hdrlen,
 		ctx->sinfo->sig->pkey_algo = "ecrdsa";
 		ctx->sinfo->sig->encoding = "raw";
 		break;
+	case OID_id_ml_dsa_44:
+		ctx->sinfo->sig->pkey_algo = "mldsa44";
+		ctx->sinfo->sig->encoding = "raw";
+		ctx->sinfo->sig->algo_takes_data = true;
+		break;
+	case OID_id_ml_dsa_65:
+		ctx->sinfo->sig->pkey_algo = "mldsa65";
+		ctx->sinfo->sig->encoding = "raw";
+		ctx->sinfo->sig->algo_takes_data = true;
+		break;
+	case OID_id_ml_dsa_87:
+		ctx->sinfo->sig->pkey_algo = "mldsa87";
+		ctx->sinfo->sig->encoding = "raw";
+		ctx->sinfo->sig->algo_takes_data = true;
+		break;
 	default:
 		printk("Unsupported pkey algo: %u\n", ctx->last_oid);
 		return -ENOPKG;
diff --git a/crypto/asymmetric_keys/public_key.c b/crypto/asymmetric_keys/public_key.c
index a46356e0c08b..09a0b83d5d77 100644
--- a/crypto/asymmetric_keys/public_key.c
+++ b/crypto/asymmetric_keys/public_key.c
@@ -142,6 +142,16 @@ software_key_determine_akcipher(const struct public_key *pkey,
 		if (strcmp(hash_algo, "streebog256") != 0 &&
 		    strcmp(hash_algo, "streebog512") != 0)
 			return -EINVAL;
+	} else if (strcmp(pkey->pkey_algo, "mldsa44") == 0 ||
+		   strcmp(pkey->pkey_algo, "mldsa65") == 0 ||
+		   strcmp(pkey->pkey_algo, "mldsa87") == 0) {
+		if (strcmp(encoding, "raw") != 0)
+			return -EINVAL;
+		if (!hash_algo)
+			return -EINVAL;
+		if (strcmp(hash_algo, "none") != 0 &&
+		    strcmp(hash_algo, "sha512") != 0)
+			return -EINVAL;
 	} else {
 		/* Unknown public key algorithm */
 		return -ENOPKG;
diff --git a/crypto/asymmetric_keys/x509_cert_parser.c b/crypto/asymmetric_keys/x509_cert_parser.c
index b37cae914987..2fe094f5caf3 100644
--- a/crypto/asymmetric_keys/x509_cert_parser.c
+++ b/crypto/asymmetric_keys/x509_cert_parser.c
@@ -257,6 +257,15 @@ int x509_note_sig_algo(void *context, size_t hdrlen, unsigned char tag,
 	case OID_gost2012Signature512:
 		ctx->cert->sig->hash_algo = "streebog512";
 		goto ecrdsa;
+	case OID_id_ml_dsa_44:
+		ctx->cert->sig->pkey_algo = "mldsa44";
+		goto ml_dsa;
+	case OID_id_ml_dsa_65:
+		ctx->cert->sig->pkey_algo = "mldsa65";
+		goto ml_dsa;
+	case OID_id_ml_dsa_87:
+		ctx->cert->sig->pkey_algo = "mldsa87";
+		goto ml_dsa;
 	}
 
 rsa_pkcs1:
@@ -274,6 +283,12 @@ int x509_note_sig_algo(void *context, size_t hdrlen, unsigned char tag,
 	ctx->cert->sig->encoding = "x962";
 	ctx->sig_algo = ctx->last_oid;
 	return 0;
+ml_dsa:
+	ctx->cert->sig->algo_takes_data = true;
+	ctx->cert->sig->hash_algo = "none";
+	ctx->cert->sig->encoding = "raw";
+	ctx->sig_algo = ctx->last_oid;
+	return 0;
 }
 
 /*
@@ -300,7 +315,8 @@ int x509_note_signature(void *context, size_t hdrlen,
 	if (strcmp(ctx->cert->sig->pkey_algo, "rsa") == 0 ||
 	    strcmp(ctx->cert->sig->pkey_algo, "ecrdsa") == 0 ||
-	    strcmp(ctx->cert->sig->pkey_algo, "ecdsa") == 0) {
+	    strcmp(ctx->cert->sig->pkey_algo, "ecdsa") == 0 ||
+	    strncmp(ctx->cert->sig->pkey_algo, "mldsa", 5) == 0) {
 		/* Discard the BIT STRING metadata */
 		if (vlen < 1 || *(const u8 *)value != 0)
 			return -EBADMSG;
@@ -524,6 +540,15 @@ int x509_extract_key_data(void *context, size_t hdrlen,
 			return -ENOPKG;
 		}
 		break;
+	case OID_id_ml_dsa_44:
+		ctx->cert->pub->pkey_algo = "mldsa44";
+		break;
+	case OID_id_ml_dsa_65:
+		ctx->cert->pub->pkey_algo = "mldsa65";
+		break;
+	case OID_id_ml_dsa_87:
+		ctx->cert->pub->pkey_algo = "mldsa87";
+		break;
 	default:
 		return -ENOPKG;
 	}
diff --git a/include/linux/oid_registry.h b/include/linux/oid_registry.h
index 6de479ebbe5d..ebce402854de 100644
--- a/include/linux/oid_registry.h
+++ b/include/linux/oid_registry.h
@@ -145,6 +145,11 @@ enum OID {
 	OID_id_rsassa_pkcs1_v1_5_with_sha3_384,	/* 2.16.840.1.101.3.4.3.15 */
 	OID_id_rsassa_pkcs1_v1_5_with_sha3_512,	/* 2.16.840.1.101.3.4.3.16 */
 
+	/* NIST FIPS-204 ML-DSA */
+	OID_id_ml_dsa_44,	/* 2.16.840.1.101.3.4.3.17 */
+	OID_id_ml_dsa_65,	/* 2.16.840.1.101.3.4.3.18 */
+	OID_id_ml_dsa_87,	/* 2.16.840.1.101.3.4.3.19 */
+
 	OID__NR
 };
From: David Howells <dhowells@redhat.com>
Date: Mon, 2 Feb 2026 17:02:10 +0000
List: lkml
Subject: [PATCH v16 0/7] x509, pkcs7, crypto: Add ML-DSA signing
Hi Lukas, Ignat, [Note this is based on Eric Biggers' libcrypto-next branch]. These patches add ML-DSA module signing: (1) Add a crypto_sig interface for ML-DSA, verification only. (2) Generate a SHA256 hash of the X.509 TBSCertificate and check that in the blacklist. Direct-sign ML-DSA doesn't generate an easily accessible hash. Note that this changes behaviour as we no longer use whatever hash is specified in the certificate for this. (3) Rename the public_key_signature struct's "digest" and "digest_size" members to "m" and "m_size" to reflect that it's not necessarily a digest, but it is an input to the public key algorithm. (4) Modify PKCS#7 support to allow kernel module signatures to carry authenticatedAttributes as OpenSSL refuses to let them be opted out of for ML-DSA (CMS_NOATTR). This adds an extra digest calculation to the process. Modify PKCS#7 to pass the authenticatedAttributes directly to the ML-DSA algorithm rather than passing over a digest as is done with RSA as ML-DSA wants to do its own hashing and will add other stuff into the hash. We could use hashML-DSA or an external mu instead, but they aren't standardised for CMS yet. (5) Add support to the PKCS#7 and X.509 parsers for ML-DSA. (6) Modify sign-file to handle OpenSSL not permitting CMS_NOATTR with ML-DSA and add ML-DSA to the choice of algorithm with which to sign modules. Note that this might need some more 'select' lines in the Kconfig to select the lib stuff as well. (7) Add a config option to allow authenticatedAttributes to be used with ML-DSA for module signing. Ordinarily, authenticatedAttributes are not permitted for this purpose, however direct signing with ML-DSA will not be supported by OpenSSL until v4 is released.
The patches can also be found here: https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=keys-pqc David Changes ======= ver #16) - Make the selection of ML-DSA for module signing when configuring contingent on openssl saying it supports ML-DSA (fix from Arnd Bergmann). - Make ML-DSA-related bits of sign-file contingent on openssl >= 3.0.0. ver #15) - Undo a removed blank line to simplify the X.509 patch. - Split the rename of ->digest to ->m into its own patch. - In pkcs7_digest(), always copy the signedAttrs and modify rather than passing the replacement tag byte in a separate shash update call to the rest of the data. That way the ->m buffer is very likely to be optimally aligned for the crypto. - Only allow authenticatedAttributes with ML-DSA for module signing and only if permission is given in the kernel config. ver #14) - public_key: - Rename public_key::digest to public_key::m. - X.509: - Independently calculate the SHA256 hash for the blacklist check as an ML-DSA-signed X.509 cert doesn't generate a digest we can use. - Point public_key::m at the TBS data for ML-DSA. - PKCS#7: - Allocate a big enough digest buffer rather than reallocating in order to store the authattrs/signedattrs instead. - Merge the two patches that add direct signing support. - ML-DSA: - Use bool instead of u8. - Remove references to SHAKE in Kconfig and mention OpenSSL requirements there. - Limit ML-DSA with an intermediate hash (e.g. signedAttrs) to using SHA512 only. - Don't select CRYPTO_LIB_SHA3 for CRYPTO_MLDSA. - RSASSA-PSS: - Allow use with SHA256 and SHA384. - Fix calculation of emBits to be number of bits in the RSA modulus 'n'. - Use strncmp() not memcmp() to avoid reading beyond end of string. - Use correct destructor in rsassa_params_parse(). - Drop this algo for the moment. - Drop the pefile_context::digest_free for now - it's only set to true and is unrelated to public_key::digest_free. ver #13) - Allow a zero-length salt in RSASSA-PSS. 
- Don't reject ECDSA/ECRDSA with SHA256 and SHA384 otherwise the FIPS selftest panics when used. - Add a FIPS test for RSASSA-PSS (from NIST's SigVerPSS_186-3.rsp). - Add a FIPS test for ML-DSA (from NIST's FIPS204 JSON set). ver #12) - Rebased on Eric's libcrypto-next branch. - Delete references to Dilithium (ML-DSA derived from this). - Made sign-file supply CMS_NOATTR for ML-DSA if openssl >= v4. - Made it possible to do ML-DSA over the data without signedAttrs. - Made RSASSA-PSS info parser use strsep() and match_token(). - Cleaned the RSASSA-PSS param parsing. - Added limitation on what hashes can be used with what algos. - Moved __free()-marked variables to the point of setting. ver #11) - Rebased on Eric's libcrypto-next branch. - Added RSASSA-PSS support patches. ver #10) - Replaced the Leancrypto ML-DSA implementation with Eric's. - Fixed Eric's implementation to have MODULE_* info. - Added a patch to drive Eric's ML-DSA implementation from crypto_sig. - Removed SHAKE256 from the list of available module hash algorithms. - Changed some more ML_DSA to MLDSA in config symbols. ver #9) - ML-DSA changes: - Separate output into four modules (1 common, 3 strength-specific). - Solves Kconfig issue with needing to select at least one strength. - Separate the strength-specific crypto-lib APIs. - This is now generated by preprocessor-templating. - Remove the multiplexor code. - Multiplex the crypto-lib APIs by C type. - Fix the PKCS#7/X.509 code to have the correct algo names. ver #8) - Moved the ML-DSA code to lib/crypto/mldsa/. - Renamed some bits from ml-dsa to mldsa. - Created a simplified API and placed that in include/crypto/mldsa.h. - Made the testing code use the simplified API. - Fixed a warning about implicitly casting between uint16_t and __le16. ver #7) - Rebased on Eric's tree as that now contains all the necessary SHA-3 infrastructure and drop the SHA-3 patches from here. - Added a minimal patch to provide shake256 support for crypto_sig.
- Got rid of the memory allocation wrappers. - Removed the ML-DSA keypair generation code and the signing code, leaving only the signature verification code. - Removed the secret key handling code. - Removed the secret keys from the kunit tests and the signing testing. - Removed some unused bits from the ML-DSA code. - Downgraded the kdoc comments to ordinary comments, but keep the markup for easier comparison to Leancrypto. ver #6) - Added a patch to make the jitterentropy RNG use lib/sha3. - Added back the crypto/sha3_generic changes. - Added ML-DSA implementation (still needs more cleanup). - Added kunit test for ML-DSA. - Modified PKCS#7 to accommodate ML-DSA. - Modified PKCS#7 and X.509 to allow ML-DSA to be specified and used. - Modified sign-file to not use CMS_NOATTR with ML-DSA. - Allowed SHA3 and SHAKE* algorithms for module signing default. - Allowed ML-DSA-{44,65,87} to be selected as the module signing default. ver #5) - Fix gen-hash-testvecs.py to correctly handle algo names that contain a dash. - Fix gen-hash-testvecs.py to not generate HMAC for SHA3-* or SHAKE* as these don't currently have HMAC variants implemented. - Fix algo names to be correct. - Fix kunit module description as it now tests all SHA3 variants. ver #4) - Fix a couple of arm64 build problems. - Doc fixes: - Fix the description of the algorithm to be closer to the NIST spec's terminology. - Don't talk of finalising the context for XOFs. - Don't say "Return: None". - Declare the "Context" to be "Any context" and make no mention of the fact that it might use the FPU. - Change "initialise" to "initialize". - Don't warn that the context is relatively large for stack use. - Use size_t for size parameters/variables. - Make the module_exit unconditional. - Dropped the crypto/ dir-affecting patches for the moment. ver #3) - Renamed conflicting arm64 functions. - Made a separate wrapper API for each algorithm in the family. - Removed sha3_init(), sha3_reinit() and sha3_final().
- Removed sha3_ctx::digest_size. - Renamed sha3_ctx::partial to sha3_ctx::absorb_offset. - Refer to the output of SHAKE* as "output" not "digest". - Moved the Iota transform into the one-round function. - Made sha3_update() warn if called after sha3_squeeze(). - Simplified the module-load test to not do update after squeeze. - Added Return: and Context: kdoc statements and expanded the kdoc headers. - Added an API description document. - Overhauled the kunit tests. - Only have one kunit test. - Only call the general hash tester on one algo. - Add separate simple cursory checks for the other algos. - Add resqueezing tests. - Add some NIST example tests. - Changed crypto/sha3_generic to use this - Added SHAKE128/256 to crypto/sha3_generic and crypto/testmgr - Folded struct sha3_state into struct sha3_ctx. ver #2) - Simplify the endianness handling. - Rename sha3_final() to sha3_squeeze() and don't clear the context at the end as it's permitted to continue calling sha3_final() to extract continuations of the digest (needed by ML-DSA). - Don't reapply the end marker to the hash state in continuation sha3_squeeze() unless sha3_update() gets called again (needed by ML-DSA). - Give sha3_squeeze() the amount of digest to produce as a parameter rather than using ctx->digest_size and don't return the amount digested. - Reimplement sha3_final() as a wrapper around sha3_squeeze() that extracts ctx->digest_size amount of digest and then zeroes out the context. The latter is necessary to avoid upsetting hash-test-template.h. - Provide a sha3_reinit() function to clear the state, but to leave the parameters that indicate the hash properties unaffected, allowing for reuse. - Provide a sha3_set_digestsize() function to change the size of the digest to be extracted by sha3_final(). sha3_squeeze() takes a parameter for this instead. - Don't pass the digest size as a parameter to shake128/256_init() but rather default to 128/256 bits as per the function name. 
- Provide a sha3_clear() function to zero out the context. David Howells (7): crypto: Add ML-DSA crypto_sig support x509: Separately calculate sha256 for blacklist pkcs7, x509: Rename ->digest to ->m pkcs7: Allow the signing algo to do whatever digestion it wants itself pkcs7, x509: Add ML-DSA support modsign: Enable ML-DSA module signing pkcs7: Allow authenticatedAttributes for ML-DSA Documentation/admin-guide/module-signing.rst | 16 +- certs/Kconfig | 40 ++++ certs/Makefile | 3 + crypto/Kconfig | 9 + crypto/Makefile | 2 + crypto/asymmetric_keys/Kconfig | 11 + crypto/asymmetric_keys/asymmetric_type.c | 4 +- crypto/asymmetric_keys/pkcs7_parser.c | 36 +++- crypto/asymmetric_keys/pkcs7_parser.h | 3 + crypto/asymmetric_keys/pkcs7_verify.c | 78 ++++--- crypto/asymmetric_keys/public_key.c | 13 +- crypto/asymmetric_keys/signature.c | 3 +- crypto/asymmetric_keys/x509_cert_parser.c | 27 ++- crypto/asymmetric_keys/x509_parser.h | 2 + crypto/asymmetric_keys/x509_public_key.c | 42 ++-- crypto/mldsa.c | 201 +++++++++++++++++++ include/crypto/public_key.h | 6 +- include/linux/oid_registry.h | 5 + scripts/sign-file.c | 39 +++- security/integrity/digsig_asymmetric.c | 4 +- 20 files changed, 473 insertions(+), 71 deletions(-) create mode 100644 crypto/mldsa.c
Allow ML-DSA module signing to be enabled. Note that OpenSSL's CMS_*() function suite does not, as of OpenSSL-3.6, support the use of CMS_NOATTR with ML-DSA, so the prohibition against using signedAttrs with module signing has to be removed. The selected digest then applies only to the algorithm used to calculate the digest stored in the messageDigest attribute. The OpenSSL development branch has patches applied that fix this[1], but it appears that that will only be available in OpenSSL-4. [1] https://github.com/openssl/openssl/pull/28923 sign-file won't set CMS_NOATTR if openssl is earlier than v4, resulting in the use of signed attributes. The ML-DSA algorithm takes the raw data to be signed without regard to what digest algorithm is specified in the CMS message. The CMS specified digest algorithm is ignored unless signedAttrs are used; in such a case, only SHA512 is permitted. Signed-off-by: David Howells <dhowells@redhat.com> cc: Jarkko Sakkinen <jarkko@kernel.org> cc: Eric Biggers <ebiggers@kernel.org> cc: Lukas Wunner <lukas@wunner.de> cc: Ignat Korchagin <ignat@cloudflare.com> cc: Stephan Mueller <smueller@chronox.de> cc: Herbert Xu <herbert@gondor.apana.org.au> cc: keyrings@vger.kernel.org cc: linux-crypto@vger.kernel.org --- Documentation/admin-guide/module-signing.rst | 16 ++++---- certs/Kconfig | 40 ++++++++++++++++++++ certs/Makefile | 3 ++ scripts/sign-file.c | 39 ++++++++++++++----- 4 files changed, 82 insertions(+), 16 deletions(-) diff --git a/Documentation/admin-guide/module-signing.rst b/Documentation/admin-guide/module-signing.rst index a8667a777490..7f2f127dc76f 100644 --- a/Documentation/admin-guide/module-signing.rst +++ b/Documentation/admin-guide/module-signing.rst @@ -28,10 +28,12 @@ trusted userspace bits. This facility uses X.509 ITU-T standard certificates to encode the public keys involved. The signatures are not themselves encoded in any industrial standard -type. 
The built-in facility currently only supports the RSA & NIST P-384 ECDSA -public key signing standard (though it is pluggable and permits others to be -used). The possible hash algorithms that can be used are SHA-2 and SHA-3 of -sizes 256, 384, and 512 (the algorithm is selected by data in the signature). +type. The built-in facility currently only supports the RSA, NIST P-384 ECDSA +and NIST FIPS-204 ML-DSA public key signing standards (though it is pluggable +and permits others to be used). For RSA and ECDSA, the possible hash +algorithms that can be used are SHA-2 and SHA-3 of sizes 256, 384, and 512 (the +algorithm is selected by data in the signature); ML-DSA does its own hashing, +but is allowed to be used with a SHA512 hash for signed attributes. ========================== @@ -146,9 +148,9 @@ into vmlinux) using parameters in the:: file (which is also generated if it does not already exist). -One can select between RSA (``MODULE_SIG_KEY_TYPE_RSA``) and ECDSA -(``MODULE_SIG_KEY_TYPE_ECDSA``) to generate either RSA 4k or NIST -P-384 keypair. +One can select between RSA (``MODULE_SIG_KEY_TYPE_RSA``), ECDSA +(``MODULE_SIG_KEY_TYPE_ECDSA``) and ML-DSA (``MODULE_SIG_KEY_TYPE_MLDSA_*``) to +generate an RSA 4k, a NIST P-384 keypair or an ML-DSA 44, 65 or 87 keypair. It is strongly recommended that you provide your own x509.genkey file. diff --git a/certs/Kconfig b/certs/Kconfig index 78307dc25559..8e39a80c7abe 100644 --- a/certs/Kconfig +++ b/certs/Kconfig @@ -39,6 +39,39 @@ config MODULE_SIG_KEY_TYPE_ECDSA Note: Remove all ECDSA signing keys, e.g. certs/signing_key.pem, when falling back to building Linux 5.14 and older kernels. +config MODULE_SIG_KEY_TYPE_MLDSA_44 + bool "ML-DSA-44" + select CRYPTO_MLDSA + depends on OPENSSL_SUPPORTS_ML_DSA + help + Use an ML-DSA-44 key (NIST FIPS 204) for module signing. ML-DSA + support requires OpenSSL-3.5 minimum; preferably OpenSSL-4+. 
With + the latter, the entire module body will be signed; with the former, + signedAttrs will be used as it lacks support for CMS_NOATTR with + ML-DSA. + +config MODULE_SIG_KEY_TYPE_MLDSA_65 + bool "ML-DSA-65" + select CRYPTO_MLDSA + depends on OPENSSL_SUPPORTS_ML_DSA + help + Use an ML-DSA-65 key (NIST FIPS 204) for module signing. ML-DSA + support requires OpenSSL-3.5 minimum; preferably OpenSSL-4+. With + the latter, the entire module body will be signed; with the former, + signedAttrs will be used as it lacks support for CMS_NOATTR with + ML-DSA. + +config MODULE_SIG_KEY_TYPE_MLDSA_87 + bool "ML-DSA-87" + select CRYPTO_MLDSA + depends on OPENSSL_SUPPORTS_ML_DSA + help + Use an ML-DSA-87 key (NIST FIPS 204) for module signing. ML-DSA + support requires OpenSSL-3.5 minimum; preferably OpenSSL-4+. With + the latter, the entire module body will be signed; with the former, + signedAttrs will be used as it lacks support for CMS_NOATTR with + ML-DSA. + endchoice config SYSTEM_TRUSTED_KEYRING @@ -154,4 +187,11 @@ config SYSTEM_BLACKLIST_AUTH_UPDATE keyring. The PKCS#7 signature of the description is set in the key payload. Blacklist keys cannot be removed. +config OPENSSL_SUPPORTS_ML_DSA + def_bool $(success, openssl list -key-managers | grep -q ML-DSA-87) + help + Support for ML-DSA-44/65/87 was added in openssl-3.5, so as long + as older versions are supported, the key types may only be + set after testing the installed binary for support. 
+ endmenu diff --git a/certs/Makefile b/certs/Makefile index f6fa4d8d75e0..3ee1960f9f4a 100644 --- a/certs/Makefile +++ b/certs/Makefile @@ -43,6 +43,9 @@ targets += x509_certificate_list ifeq ($(CONFIG_MODULE_SIG_KEY),certs/signing_key.pem) keytype-$(CONFIG_MODULE_SIG_KEY_TYPE_ECDSA) := -newkey ec -pkeyopt ec_paramgen_curve:secp384r1 +keytype-$(CONFIG_MODULE_SIG_KEY_TYPE_MLDSA_44) := -newkey ml-dsa-44 +keytype-$(CONFIG_MODULE_SIG_KEY_TYPE_MLDSA_65) := -newkey ml-dsa-65 +keytype-$(CONFIG_MODULE_SIG_KEY_TYPE_MLDSA_87) := -newkey ml-dsa-87 quiet_cmd_gen_key = GENKEY $@ cmd_gen_key = openssl req -new -nodes -utf8 -$(CONFIG_MODULE_SIG_HASH) -days 36500 \ diff --git a/scripts/sign-file.c b/scripts/sign-file.c index 7070245edfc1..78276b15ab23 100644 --- a/scripts/sign-file.c +++ b/scripts/sign-file.c @@ -27,7 +27,7 @@ #include <openssl/evp.h> #include <openssl/pem.h> #include <openssl/err.h> -#if OPENSSL_VERSION_MAJOR >= 3 +#if OPENSSL_VERSION_NUMBER >= 0x30000000L # define USE_PKCS11_PROVIDER # include <openssl/provider.h> # include <openssl/store.h> @@ -315,18 +315,39 @@ int main(int argc, char **argv) ERR(!digest_algo, "EVP_get_digestbyname"); #ifndef USE_PKCS7 + + unsigned int flags = + CMS_NOCERTS | + CMS_PARTIAL | + CMS_BINARY | + CMS_DETACHED | + CMS_STREAM | + CMS_NOSMIMECAP | +#ifdef CMS_NO_SIGNING_TIME + CMS_NO_SIGNING_TIME | +#endif + use_keyid; + +#if OPENSSL_VERSION_NUMBER >= 0x30000000L && OPENSSL_VERSION_NUMBER < 0x40000000L + if (EVP_PKEY_is_a(private_key, "ML-DSA-44") || + EVP_PKEY_is_a(private_key, "ML-DSA-65") || + EVP_PKEY_is_a(private_key, "ML-DSA-87")) { + /* ML-DSA + CMS_NOATTR is not supported in openssl-3.5 + * and before. + */ + use_signed_attrs = 0; + } +#endif + + flags |= use_signed_attrs; + /* Load the signature message from the digest buffer. 
*/ - cms = CMS_sign(NULL, NULL, NULL, NULL, - CMS_NOCERTS | CMS_PARTIAL | CMS_BINARY | - CMS_DETACHED | CMS_STREAM); + cms = CMS_sign(NULL, NULL, NULL, NULL, flags); ERR(!cms, "CMS_sign"); - ERR(!CMS_add1_signer(cms, x509, private_key, digest_algo, - CMS_NOCERTS | CMS_BINARY | - CMS_NOSMIMECAP | use_keyid | - use_signed_attrs), + ERR(!CMS_add1_signer(cms, x509, private_key, digest_algo, flags), "CMS_add1_signer"); - ERR(CMS_final(cms, bm, NULL, CMS_NOCERTS | CMS_BINARY) != 1, + ERR(CMS_final(cms, bm, NULL, flags) != 1, "CMS_final"); #else
From: David Howells <dhowells@redhat.com>
Date: Mon, 2 Feb 2026 17:02:11 +0000
List: lkml
Subject: [PATCH v16 0/7] x509, pkcs7, crypto: Add ML-DSA signing
Allow the rejection of authenticatedAttributes in PKCS#7 (signedAttrs in CMS) to be waived in the kernel config for ML-DSA when used for module signing. This reflects the issue that openssl < 4.0 cannot do this and openssl-4 has not yet been released. This does not permit RSA, ECDSA or ECRDSA to be so waived (behaviour unchanged). Signed-off-by: David Howells <dhowells@redhat.com> cc: Lukas Wunner <lukas@wunner.de> cc: Ignat Korchagin <ignat@cloudflare.com> cc: Jarkko Sakkinen <jarkko@kernel.org> cc: Stephan Mueller <smueller@chronox.de> cc: Eric Biggers <ebiggers@kernel.org> cc: Herbert Xu <herbert@gondor.apana.org.au> cc: keyrings@vger.kernel.org cc: linux-crypto@vger.kernel.org --- crypto/asymmetric_keys/Kconfig | 11 +++++++++++ crypto/asymmetric_keys/pkcs7_parser.c | 8 ++++++++ crypto/asymmetric_keys/pkcs7_parser.h | 3 +++ crypto/asymmetric_keys/pkcs7_verify.c | 6 ++++++ 4 files changed, 28 insertions(+) diff --git a/crypto/asymmetric_keys/Kconfig b/crypto/asymmetric_keys/Kconfig index e1345b8f39f1..1dae2232fe9a 100644 --- a/crypto/asymmetric_keys/Kconfig +++ b/crypto/asymmetric_keys/Kconfig @@ -53,6 +53,17 @@ config PKCS7_MESSAGE_PARSER This option provides support for parsing PKCS#7 format messages for signature data and provides the ability to verify the signature. +config PKCS7_WAIVE_AUTHATTRS_REJECTION_FOR_MLDSA + bool "Waive rejection of authenticatedAttributes for ML-DSA" + depends on PKCS7_MESSAGE_PARSER + depends on CRYPTO_MLDSA + help + Due to use of CMS_NOATTR with ML-DSA not being supported in + OpenSSL < 4.0 (and thus any released version), enabling this + allows authenticatedAttributes to be used with ML-DSA for + module signing. Use of authenticatedAttributes in this + context is normally rejected. 
+ config PKCS7_TEST_KEY tristate "PKCS#7 testing key type" depends on SYSTEM_DATA_VERIFICATION diff --git a/crypto/asymmetric_keys/pkcs7_parser.c b/crypto/asymmetric_keys/pkcs7_parser.c index 594a8f1d9dfb..db1c90ca6fc1 100644 --- a/crypto/asymmetric_keys/pkcs7_parser.c +++ b/crypto/asymmetric_keys/pkcs7_parser.c @@ -92,9 +92,17 @@ static int pkcs7_check_authattrs(struct pkcs7_message *msg) if (!sinfo) goto inconsistent; +#ifdef CONFIG_PKCS7_WAIVE_AUTHATTRS_REJECTION_FOR_MLDSA + msg->authattrs_rej_waivable = true; +#endif + if (sinfo->authattrs) { want = true; msg->have_authattrs = true; +#ifdef CONFIG_PKCS7_WAIVE_AUTHATTRS_REJECTION_FOR_MLDSA + if (strncmp(sinfo->sig->pkey_algo, "mldsa", 5) != 0) + msg->authattrs_rej_waivable = false; +#endif } else if (sinfo->sig->algo_takes_data) { sinfo->sig->hash_algo = "none"; } diff --git a/crypto/asymmetric_keys/pkcs7_parser.h b/crypto/asymmetric_keys/pkcs7_parser.h index e17f7ce4fb43..6ef9f335bb17 100644 --- a/crypto/asymmetric_keys/pkcs7_parser.h +++ b/crypto/asymmetric_keys/pkcs7_parser.h @@ -55,6 +55,9 @@ struct pkcs7_message { struct pkcs7_signed_info *signed_infos; u8 version; /* Version of cert (1 -> PKCS#7 or CMS; 3 -> CMS) */ bool have_authattrs; /* T if have authattrs */ +#ifdef CONFIG_PKCS7_WAIVE_AUTHATTRS_REJECTION_FOR_MLDSA + bool authattrs_rej_waivable; /* T if authatts rejection can be waived */ +#endif /* Content Data (or NULL) */ enum OID data_type; /* Type of Data */ diff --git a/crypto/asymmetric_keys/pkcs7_verify.c b/crypto/asymmetric_keys/pkcs7_verify.c index 06abb9838f95..519eecfe6778 100644 --- a/crypto/asymmetric_keys/pkcs7_verify.c +++ b/crypto/asymmetric_keys/pkcs7_verify.c @@ -425,6 +425,12 @@ int pkcs7_verify(struct pkcs7_message *pkcs7, return -EKEYREJECTED; } if (pkcs7->have_authattrs) { +#ifdef CONFIG_PKCS7_WAIVE_AUTHATTRS_REJECTION_FOR_MLDSA + if (pkcs7->authattrs_rej_waivable) { + pr_warn("Waived invalid module sig (has authattrs)\n"); + break; + } +#endif pr_warn("Invalid module sig (has 
authattrs)\n"); return -EKEYREJECTED; }
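For readers skimming the diff, the decision split across pkcs7_check_authattrs() and pkcs7_verify() can be sketched in user-space Python. This is a model, not kernel API: the dict-based signed_infos and the config_waive flag (standing in for CONFIG_PKCS7_WAIVE_AUTHATTRS_REJECTION_FOR_MLDSA) are assumptions of the sketch.

```python
def authattrs_rej_waivable(signed_infos, config_waive):
    """Rejection is waivable only when the config option is set and every
    signer that carries authenticatedAttributes uses an mldsa* algorithm
    (the kernel checks this with strncmp(pkey_algo, "mldsa", 5))."""
    if not config_waive:
        return False
    for sinfo in signed_infos:
        if sinfo["authattrs"] and not sinfo["pkey_algo"].startswith("mldsa"):
            return False
    return True

def module_sig_allowed(signed_infos, config_waive):
    """Module signatures carrying authattrs are rejected (-EKEYREJECTED)
    unless the ML-DSA waiver applies; signatures without authattrs pass."""
    have_authattrs = any(s["authattrs"] for s in signed_infos)
    if not have_authattrs:
        return True
    return authattrs_rej_waivable(signed_infos, config_waive)
```

Note that the waiver only relaxes the module-signing path; RSA, ECDSA and ECRDSA signers with authattrs remain rejected regardless of the config option.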
{ "author": "David Howells <dhowells@redhat.com>", "date": "Mon, 2 Feb 2026 17:02:12 +0000", "thread_id": "20260202170216.2467036-1-dhowells@redhat.com.mbox.gz" }
lkml
[PATCH v2 0/4] Improve Hyper-V memory deposit error handling
This series extends the MSHV driver to properly handle additional memory-related error codes from the Microsoft Hypervisor by depositing memory pages when needed. Currently, when the hypervisor returns HV_STATUS_INSUFFICIENT_MEMORY during partition creation, the driver calls hv_call_deposit_pages() to provide the necessary memory. However, there are other memory-related error codes that indicate the hypervisor needs additional memory resources, but the driver does not attempt to deposit pages for these cases. This series introduces a dedicated helper function to identify all memory-related error codes (HV_STATUS_INSUFFICIENT_MEMORY, HV_STATUS_INSUFFICIENT_BUFFERS, HV_STATUS_INSUFFICIENT_DEVICE_DOMAINS, and HV_STATUS_INSUFFICIENT_ROOT_MEMORY) and ensures the driver attempts to deposit pages for all of them via the new hv_deposit_memory() helper. With these changes, partition creation becomes more robust by handling all scenarios where the hypervisor requires additional memory deposits. v2: - Rename hv_result_oom() to hv_result_needs_memory() --- Stanislav Kinsburskii (4): mshv: Introduce hv_result_needs_memory() helper function mshv: Introduce hv_deposit_memory helper functions mshv: Handle insufficient contiguous memory hypervisor status mshv: Handle insufficient root memory hypervisor statuses drivers/hv/hv_common.c | 3 ++ drivers/hv/hv_proc.c | 54 +++++++++++++++++++++++++++++++++++--- drivers/hv/mshv_root_hv_call.c | 45 +++++++++++++------------------- drivers/hv/mshv_root_main.c | 5 +--- include/asm-generic/mshyperv.h | 13 +++++++++ include/hyperv/hvgdk_mini.h | 57 +++++++++++++++++++++------------------- include/hyperv/hvhdk_mini.h | 2 + 7 files changed, 119 insertions(+), 60 deletions(-)
Replace direct comparisons of hv_result(status) against HV_STATUS_INSUFFICIENT_MEMORY with a new hv_result_needs_memory() helper function. This improves code readability and provides a consistent and extendable interface for checking out-of-memory conditions in hypercall results. No functional changes intended. Signed-off-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com> --- drivers/hv/hv_proc.c | 14 ++++++++++++-- drivers/hv/mshv_root_hv_call.c | 20 ++++++++++---------- drivers/hv/mshv_root_main.c | 2 +- include/asm-generic/mshyperv.h | 3 +++ 4 files changed, 26 insertions(+), 13 deletions(-) diff --git a/drivers/hv/hv_proc.c b/drivers/hv/hv_proc.c index fbb4eb3901bb..e53204b9e05d 100644 --- a/drivers/hv/hv_proc.c +++ b/drivers/hv/hv_proc.c @@ -110,6 +110,16 @@ int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages) } EXPORT_SYMBOL_GPL(hv_call_deposit_pages); +bool hv_result_needs_memory(u64 status) +{ + switch (hv_result(status)) { + case HV_STATUS_INSUFFICIENT_MEMORY: + return true; + } + return false; +} +EXPORT_SYMBOL_GPL(hv_result_needs_memory); + int hv_call_add_logical_proc(int node, u32 lp_index, u32 apic_id) { struct hv_input_add_logical_processor *input; @@ -137,7 +147,7 @@ int hv_call_add_logical_proc(int node, u32 lp_index, u32 apic_id) input, output); local_irq_restore(flags); - if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) { + if (!hv_result_needs_memory(status)) { if (!hv_result_success(status)) { hv_status_err(status, "cpu %u apic ID: %u\n", lp_index, apic_id); @@ -179,7 +189,7 @@ int hv_call_create_vp(int node, u64 partition_id, u32 vp_index, u32 flags) status = hv_do_hypercall(HVCALL_CREATE_VP, input, NULL); local_irq_restore(irq_flags); - if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) { + if (!hv_result_needs_memory(status)) { if (!hv_result_success(status)) { hv_status_err(status, "vcpu: %u, lp: %u\n", vp_index, flags); diff --git a/drivers/hv/mshv_root_hv_call.c b/drivers/hv/mshv_root_hv_call.c index 
598eaff4ff29..89afeeda21dd 100644 --- a/drivers/hv/mshv_root_hv_call.c +++ b/drivers/hv/mshv_root_hv_call.c @@ -115,7 +115,7 @@ int hv_call_create_partition(u64 flags, status = hv_do_hypercall(HVCALL_CREATE_PARTITION, input, output); - if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) { + if (!hv_result_needs_memory(status)) { if (hv_result_success(status)) *partition_id = output->partition_id; local_irq_restore(irq_flags); @@ -147,7 +147,7 @@ int hv_call_initialize_partition(u64 partition_id) status = hv_do_fast_hypercall8(HVCALL_INITIALIZE_PARTITION, *(u64 *)&input); - if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) { + if (!hv_result_needs_memory(status)) { ret = hv_result_to_errno(status); break; } @@ -239,7 +239,7 @@ static int hv_do_map_gpa_hcall(u64 partition_id, u64 gfn, u64 page_struct_count, completed = hv_repcomp(status); - if (hv_result(status) == HV_STATUS_INSUFFICIENT_MEMORY) { + if (hv_result_needs_memory(status)) { ret = hv_call_deposit_pages(NUMA_NO_NODE, partition_id, HV_MAP_GPA_DEPOSIT_PAGES); if (ret) @@ -455,7 +455,7 @@ int hv_call_get_vp_state(u32 vp_index, u64 partition_id, status = hv_do_hypercall(control, input, output); - if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) { + if (!hv_result_needs_memory(status)) { if (hv_result_success(status) && ret_output) memcpy(ret_output, output, sizeof(*output)); @@ -518,7 +518,7 @@ int hv_call_set_vp_state(u32 vp_index, u64 partition_id, status = hv_do_hypercall(control, input, NULL); - if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) { + if (!hv_result_needs_memory(status)) { local_irq_restore(flags); ret = hv_result_to_errno(status); break; @@ -563,7 +563,7 @@ static int hv_call_map_vp_state_page(u64 partition_id, u32 vp_index, u32 type, status = hv_do_hypercall(HVCALL_MAP_VP_STATE_PAGE, input, output); - if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) { + if (!hv_result_needs_memory(status)) { if (hv_result_success(status)) *state_page = 
pfn_to_page(output->map_location); local_irq_restore(flags); @@ -718,7 +718,7 @@ hv_call_create_port(u64 port_partition_id, union hv_port_id port_id, if (hv_result_success(status)) break; - if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) { + if (!hv_result_needs_memory(status)) { ret = hv_result_to_errno(status); break; } @@ -772,7 +772,7 @@ hv_call_connect_port(u64 port_partition_id, union hv_port_id port_id, if (hv_result_success(status)) break; - if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) { + if (!hv_result_needs_memory(status)) { ret = hv_result_to_errno(status); break; } @@ -843,7 +843,7 @@ static int hv_call_map_stats_page2(enum hv_stats_object_type type, if (!ret) break; - if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) { + if (!hv_result_needs_memory(status)) { hv_status_debug(status, "\n"); break; } @@ -878,7 +878,7 @@ static int hv_call_map_stats_page(enum hv_stats_object_type type, pfn = output->map_location; local_irq_restore(flags); - if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) { + if (!hv_result_needs_memory(status)) { ret = hv_result_to_errno(status); if (hv_result_success(status)) break; diff --git a/drivers/hv/mshv_root_main.c b/drivers/hv/mshv_root_main.c index 6a6bf641b352..ee30bfa6bb2e 100644 --- a/drivers/hv/mshv_root_main.c +++ b/drivers/hv/mshv_root_main.c @@ -261,7 +261,7 @@ static int mshv_ioctl_passthru_hvcall(struct mshv_partition *partition, if (hv_result_success(status)) break; - if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) + if (!hv_result_needs_memory(status)) ret = hv_result_to_errno(status); else ret = hv_call_deposit_pages(NUMA_NO_NODE, diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h index ecedab554c80..452426d5b2ab 100644 --- a/include/asm-generic/mshyperv.h +++ b/include/asm-generic/mshyperv.h @@ -342,6 +342,8 @@ static inline bool hv_parent_partition(void) { return hv_root_partition() || hv_l1vh_partition(); } + +bool hv_result_needs_memory(u64 
status); int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages); int hv_call_add_logical_proc(int node, u32 lp_index, u32 acpi_id); int hv_call_create_vp(int node, u64 partition_id, u32 vp_index, u32 flags); @@ -350,6 +352,7 @@ int hv_call_create_vp(int node, u64 partition_id, u32 vp_index, u32 flags); static inline bool hv_root_partition(void) { return false; } static inline bool hv_l1vh_partition(void) { return false; } static inline bool hv_parent_partition(void) { return false; } +static inline bool hv_result_needs_memory(u64 status) { return false; } static inline int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages) { return -EOPNOTSUPP;
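The hypercall wrappers touched by this patch all share one deposit-and-retry shape, which can be modelled in user-space Python as follows. Assumptions of this sketch: hv_result() is modelled as the low 16 status bits, and call_with_deposit()/deposit_pages are illustrative names, not kernel functions; 0xB is HV_STATUS_INSUFFICIENT_MEMORY per the hvgdk_mini.h table.

```python
HV_STATUS_SUCCESS = 0x0
HV_STATUS_INSUFFICIENT_MEMORY = 0xB

def hv_result(status):
    # Assumed behaviour: the result code lives in the low 16 bits.
    return status & 0xFFFF

def hv_result_needs_memory(status):
    # Patch 1 only folds INSUFFICIENT_MEMORY in; later patches in the
    # series add further "please deposit" statuses to this one predicate.
    return hv_result(status) in (HV_STATUS_INSUFFICIENT_MEMORY,)

def call_with_deposit(hypercall, deposit_pages):
    """Retry a hypercall, depositing pages while the hypervisor keeps
    reporting a memory-related status; any other status ends the loop."""
    while True:
        status = hypercall()
        if not hv_result_needs_memory(status):
            return hv_result(status)  # success or a hard error
        if deposit_pages(1) != 0:     # deposit itself failed: give up
            return HV_STATUS_INSUFFICIENT_MEMORY
```

The value of the helper is exactly that this predicate is now the single place to grow when new insufficient-* statuses are added.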
{ "author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>", "date": "Mon, 02 Feb 2026 17:58:57 +0000", "thread_id": "177005515446.120041.8169777750859263202.stgit@skinsburskii-cloud-desktop.internal.cloudapp.net.mbox.gz" }
lkml
[PATCH v2 0/4] Improve Hyper-V memory deposit error handling
Introduce hv_deposit_memory_node() and hv_deposit_memory() helper functions to handle memory deposition with proper error handling. The new hv_deposit_memory_node() function takes the hypervisor status as a parameter and validates it before depositing pages. It checks for HV_STATUS_INSUFFICIENT_MEMORY specifically and returns an error for unexpected status codes. This is a precursor patch to new out-of-memory error codes support. No functional changes intended. Signed-off-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com> --- drivers/hv/hv_proc.c | 22 ++++++++++++++++++++-- drivers/hv/mshv_root_hv_call.c | 25 +++++++++---------------- drivers/hv/mshv_root_main.c | 3 +-- include/asm-generic/mshyperv.h | 10 ++++++++++ 4 files changed, 40 insertions(+), 20 deletions(-) diff --git a/drivers/hv/hv_proc.c b/drivers/hv/hv_proc.c index e53204b9e05d..ffa25cd6e4e9 100644 --- a/drivers/hv/hv_proc.c +++ b/drivers/hv/hv_proc.c @@ -110,6 +110,23 @@ int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages) } EXPORT_SYMBOL_GPL(hv_call_deposit_pages); +int hv_deposit_memory_node(int node, u64 partition_id, + u64 hv_status) +{ + u32 num_pages; + + switch (hv_result(hv_status)) { + case HV_STATUS_INSUFFICIENT_MEMORY: + num_pages = 1; + break; + default: + hv_status_err(hv_status, "Unexpected!\n"); + return -ENOMEM; + } + return hv_call_deposit_pages(node, partition_id, num_pages); +} +EXPORT_SYMBOL_GPL(hv_deposit_memory_node); + bool hv_result_needs_memory(u64 status) { switch (hv_result(status)) { @@ -155,7 +172,8 @@ int hv_call_add_logical_proc(int node, u32 lp_index, u32 apic_id) } break; } - ret = hv_call_deposit_pages(node, hv_current_partition_id, 1); + ret = hv_deposit_memory_node(node, hv_current_partition_id, + status); } while (!ret); return ret; @@ -197,7 +215,7 @@ int hv_call_create_vp(int node, u64 partition_id, u32 vp_index, u32 flags) } break; } - ret = hv_call_deposit_pages(node, partition_id, 1); + ret = hv_deposit_memory_node(node, partition_id, 
status); } while (!ret); diff --git a/drivers/hv/mshv_root_hv_call.c b/drivers/hv/mshv_root_hv_call.c index 89afeeda21dd..174431cb5e0e 100644 --- a/drivers/hv/mshv_root_hv_call.c +++ b/drivers/hv/mshv_root_hv_call.c @@ -123,8 +123,7 @@ int hv_call_create_partition(u64 flags, break; } local_irq_restore(irq_flags); - ret = hv_call_deposit_pages(NUMA_NO_NODE, - hv_current_partition_id, 1); + ret = hv_deposit_memory(hv_current_partition_id, status); } while (!ret); return ret; @@ -151,7 +150,7 @@ int hv_call_initialize_partition(u64 partition_id) ret = hv_result_to_errno(status); break; } - ret = hv_call_deposit_pages(NUMA_NO_NODE, partition_id, 1); + ret = hv_deposit_memory(partition_id, status); } while (!ret); return ret; @@ -465,8 +464,7 @@ int hv_call_get_vp_state(u32 vp_index, u64 partition_id, } local_irq_restore(flags); - ret = hv_call_deposit_pages(NUMA_NO_NODE, - partition_id, 1); + ret = hv_deposit_memory(partition_id, status); } while (!ret); return ret; @@ -525,8 +523,7 @@ int hv_call_set_vp_state(u32 vp_index, u64 partition_id, } local_irq_restore(flags); - ret = hv_call_deposit_pages(NUMA_NO_NODE, - partition_id, 1); + ret = hv_deposit_memory(partition_id, status); } while (!ret); return ret; @@ -573,7 +570,7 @@ static int hv_call_map_vp_state_page(u64 partition_id, u32 vp_index, u32 type, local_irq_restore(flags); - ret = hv_call_deposit_pages(NUMA_NO_NODE, partition_id, 1); + ret = hv_deposit_memory(partition_id, status); } while (!ret); return ret; @@ -722,8 +719,7 @@ hv_call_create_port(u64 port_partition_id, union hv_port_id port_id, ret = hv_result_to_errno(status); break; } - ret = hv_call_deposit_pages(NUMA_NO_NODE, port_partition_id, 1); - + ret = hv_deposit_memory(port_partition_id, status); } while (!ret); return ret; @@ -776,8 +772,7 @@ hv_call_connect_port(u64 port_partition_id, union hv_port_id port_id, ret = hv_result_to_errno(status); break; } - ret = hv_call_deposit_pages(NUMA_NO_NODE, - connection_partition_id, 1); + ret = 
hv_deposit_memory(connection_partition_id, status); } while (!ret); return ret; @@ -848,8 +843,7 @@ static int hv_call_map_stats_page2(enum hv_stats_object_type type, break; } - ret = hv_call_deposit_pages(NUMA_NO_NODE, - hv_current_partition_id, 1); + ret = hv_deposit_memory(hv_current_partition_id, status); } while (!ret); return ret; @@ -885,8 +879,7 @@ static int hv_call_map_stats_page(enum hv_stats_object_type type, return ret; } - ret = hv_call_deposit_pages(NUMA_NO_NODE, - hv_current_partition_id, 1); + ret = hv_deposit_memory(hv_current_partition_id, status); if (ret) return ret; } while (!ret); diff --git a/drivers/hv/mshv_root_main.c b/drivers/hv/mshv_root_main.c index ee30bfa6bb2e..dce255c94f9e 100644 --- a/drivers/hv/mshv_root_main.c +++ b/drivers/hv/mshv_root_main.c @@ -264,8 +264,7 @@ static int mshv_ioctl_passthru_hvcall(struct mshv_partition *partition, if (!hv_result_needs_memory(status)) ret = hv_result_to_errno(status); else - ret = hv_call_deposit_pages(NUMA_NO_NODE, - pt_id, 1); + ret = hv_deposit_memory(pt_id, status); } while (!ret); args.status = hv_result(status); diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h index 452426d5b2ab..d37b68238c97 100644 --- a/include/asm-generic/mshyperv.h +++ b/include/asm-generic/mshyperv.h @@ -344,6 +344,7 @@ static inline bool hv_parent_partition(void) } bool hv_result_needs_memory(u64 status); +int hv_deposit_memory_node(int node, u64 partition_id, u64 status); int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages); int hv_call_add_logical_proc(int node, u32 lp_index, u32 acpi_id); int hv_call_create_vp(int node, u64 partition_id, u32 vp_index, u32 flags); @@ -353,6 +354,10 @@ static inline bool hv_root_partition(void) { return false; } static inline bool hv_l1vh_partition(void) { return false; } static inline bool hv_parent_partition(void) { return false; } static inline bool hv_result_needs_memory(u64 status) { return false; } +static inline int 
hv_deposit_memory_node(int node, u64 partition_id, u64 status) +{ + return -EOPNOTSUPP; +} static inline int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages) { return -EOPNOTSUPP; @@ -367,6 +372,11 @@ static inline int hv_call_create_vp(int node, u64 partition_id, u32 vp_index, u3 } #endif /* CONFIG_MSHV_ROOT */ +static inline int hv_deposit_memory(u64 partition_id, u64 status) +{ + return hv_deposit_memory_node(NUMA_NO_NODE, partition_id, status); +} + #if IS_ENABLED(CONFIG_HYPERV_VTL_MODE) u8 __init get_vtl(void); #else
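The node/no-node split this patch introduces can be sketched in user-space Python. The deposit callback, the 0xFFFF masking for hv_result(), and the ENOMEM constant are assumptions of the sketch; the status-to-page-count dispatch mirrors hv_deposit_memory_node() as posted.

```python
NUMA_NO_NODE = -1
HV_STATUS_INSUFFICIENT_MEMORY = 0xB
ENOMEM = 12

def hv_deposit_memory_node(node, partition_id, hv_status, deposit):
    """Validate the status, pick a deposit size, then deposit pages."""
    status = hv_status & 0xFFFF  # assumed hv_result() masking
    if status == HV_STATUS_INSUFFICIENT_MEMORY:
        num_pages = 1
    else:
        return -ENOMEM  # the "Unexpected!" branch in the patch
    return deposit(node, partition_id, num_pages)

def hv_deposit_memory(partition_id, hv_status, deposit):
    """Convenience wrapper that fixes the node to NUMA_NO_NODE, matching
    the call sites converted in mshv_root_hv_call.c."""
    return hv_deposit_memory_node(NUMA_NO_NODE, partition_id, hv_status,
                                  deposit)
```

Callers that know a NUMA node (hv_call_add_logical_proc, hv_call_create_vp) keep passing it; everything else goes through the NUMA_NO_NODE wrapper.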
{ "author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>", "date": "Mon, 02 Feb 2026 17:59:03 +0000", "thread_id": "177005515446.120041.8169777750859263202.stgit@skinsburskii-cloud-desktop.internal.cloudapp.net.mbox.gz" }
lkml
[PATCH v2 0/4] Improve Hyper-V memory deposit error handling
The HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY status indicates that the hypervisor lacks sufficient contiguous memory for its internal allocations. When this status is encountered, allocate and deposit HV_MAX_CONTIGUOUS_ALLOCATION_PAGES contiguous pages to the hypervisor. HV_MAX_CONTIGUOUS_ALLOCATION_PAGES is defined in the hypervisor headers, a deposit of this size will always satisfy the hypervisor's requirements. Signed-off-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com> --- drivers/hv/hv_common.c | 1 + drivers/hv/hv_proc.c | 4 ++++ include/hyperv/hvgdk_mini.h | 1 + include/hyperv/hvhdk_mini.h | 2 ++ 4 files changed, 8 insertions(+) diff --git a/drivers/hv/hv_common.c b/drivers/hv/hv_common.c index 0a3ab7efed46..c7f63c9de503 100644 --- a/drivers/hv/hv_common.c +++ b/drivers/hv/hv_common.c @@ -791,6 +791,7 @@ static const struct hv_status_info hv_status_infos[] = { _STATUS_INFO(HV_STATUS_UNKNOWN_PROPERTY, -EIO), _STATUS_INFO(HV_STATUS_PROPERTY_VALUE_OUT_OF_RANGE, -EIO), _STATUS_INFO(HV_STATUS_INSUFFICIENT_MEMORY, -ENOMEM), + _STATUS_INFO(HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY, -ENOMEM), _STATUS_INFO(HV_STATUS_INVALID_PARTITION_ID, -EINVAL), _STATUS_INFO(HV_STATUS_INVALID_VP_INDEX, -EINVAL), _STATUS_INFO(HV_STATUS_NOT_FOUND, -EIO), diff --git a/drivers/hv/hv_proc.c b/drivers/hv/hv_proc.c index ffa25cd6e4e9..dfa27be66ff7 100644 --- a/drivers/hv/hv_proc.c +++ b/drivers/hv/hv_proc.c @@ -119,6 +119,9 @@ int hv_deposit_memory_node(int node, u64 partition_id, case HV_STATUS_INSUFFICIENT_MEMORY: num_pages = 1; break; + case HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY: + num_pages = HV_MAX_CONTIGUOUS_ALLOCATION_PAGES; + break; default: hv_status_err(hv_status, "Unexpected!\n"); return -ENOMEM; @@ -131,6 +134,7 @@ bool hv_result_needs_memory(u64 status) { switch (hv_result(status)) { case HV_STATUS_INSUFFICIENT_MEMORY: + case HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY: return true; } return false; diff --git a/include/hyperv/hvgdk_mini.h 
b/include/hyperv/hvgdk_mini.h index 04b18d0e37af..70f22ef44948 100644 --- a/include/hyperv/hvgdk_mini.h +++ b/include/hyperv/hvgdk_mini.h @@ -38,6 +38,7 @@ struct hv_u128 { #define HV_STATUS_INVALID_LP_INDEX 0x41 #define HV_STATUS_INVALID_REGISTER_VALUE 0x50 #define HV_STATUS_OPERATION_FAILED 0x71 +#define HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY 0x75 #define HV_STATUS_TIME_OUT 0x78 #define HV_STATUS_CALL_PENDING 0x79 #define HV_STATUS_VTL_ALREADY_ENABLED 0x86 diff --git a/include/hyperv/hvhdk_mini.h b/include/hyperv/hvhdk_mini.h index c0300910808b..091c03e26046 100644 --- a/include/hyperv/hvhdk_mini.h +++ b/include/hyperv/hvhdk_mini.h @@ -7,6 +7,8 @@ #include "hvgdk_mini.h" +#define HV_MAX_CONTIGUOUS_ALLOCATION_PAGES 8 + /* * Doorbell connection_info flags. */
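After this patch the deposit size depends on which status came back, which can be summarised in a small user-space sketch (the 0xFFFF masking for hv_result() is an assumption; the constants come from the diff above):

```python
HV_STATUS_INSUFFICIENT_MEMORY = 0xB
HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY = 0x75
HV_MAX_CONTIGUOUS_ALLOCATION_PAGES = 8

def deposit_size(hv_status):
    """Pages to deposit for a given status, or None for the
    'Unexpected!' branch (which returns -ENOMEM in the kernel code)."""
    status = hv_status & 0xFFFF  # assumed hv_result() masking
    if status == HV_STATUS_INSUFFICIENT_MEMORY:
        return 1
    if status == HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY:
        return HV_MAX_CONTIGUOUS_ALLOCATION_PAGES
    return None
```

Because HV_MAX_CONTIGUOUS_ALLOCATION_PAGES bounds the hypervisor's contiguous allocations, an 8-page deposit is always enough for the contiguous case.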
{ "author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>", "date": "Mon, 02 Feb 2026 17:59:09 +0000", "thread_id": "177005515446.120041.8169777750859263202.stgit@skinsburskii-cloud-desktop.internal.cloudapp.net.mbox.gz" }
lkml
[PATCH v2 0/4] Improve Hyper-V memory deposit error handling
When creating guest partition objects, the hypervisor may fail to allocate root partition pages and return an insufficient memory status. In this case, deposit memory using the root partition ID instead. Note: This error should never occur in a guest of L1VH partition context. Signed-off-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com> --- drivers/hv/hv_common.c | 2 + drivers/hv/hv_proc.c | 14 ++++++++++ include/hyperv/hvgdk_mini.h | 58 ++++++++++++++++++++++--------------------- 3 files changed, 46 insertions(+), 28 deletions(-) diff --git a/drivers/hv/hv_common.c b/drivers/hv/hv_common.c index c7f63c9de503..cab0d1733607 100644 --- a/drivers/hv/hv_common.c +++ b/drivers/hv/hv_common.c @@ -792,6 +792,8 @@ static const struct hv_status_info hv_status_infos[] = { _STATUS_INFO(HV_STATUS_PROPERTY_VALUE_OUT_OF_RANGE, -EIO), _STATUS_INFO(HV_STATUS_INSUFFICIENT_MEMORY, -ENOMEM), _STATUS_INFO(HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY, -ENOMEM), + _STATUS_INFO(HV_STATUS_INSUFFICIENT_ROOT_MEMORY, -ENOMEM), + _STATUS_INFO(HV_STATUS_INSUFFICIENT_CONTIGUOUS_ROOT_MEMORY, -ENOMEM), _STATUS_INFO(HV_STATUS_INVALID_PARTITION_ID, -EINVAL), _STATUS_INFO(HV_STATUS_INVALID_VP_INDEX, -EINVAL), _STATUS_INFO(HV_STATUS_NOT_FOUND, -EIO), diff --git a/drivers/hv/hv_proc.c b/drivers/hv/hv_proc.c index dfa27be66ff7..935129e0b39d 100644 --- a/drivers/hv/hv_proc.c +++ b/drivers/hv/hv_proc.c @@ -122,6 +122,18 @@ int hv_deposit_memory_node(int node, u64 partition_id, case HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY: num_pages = HV_MAX_CONTIGUOUS_ALLOCATION_PAGES; break; + + case HV_STATUS_INSUFFICIENT_CONTIGUOUS_ROOT_MEMORY: + num_pages = HV_MAX_CONTIGUOUS_ALLOCATION_PAGES; + fallthrough; + case HV_STATUS_INSUFFICIENT_ROOT_MEMORY: + if (!hv_root_partition()) { + hv_status_err(hv_status, "Unexpected root memory deposit\n"); + return -ENOMEM; + } + partition_id = HV_PARTITION_ID_SELF; + break; + default: hv_status_err(hv_status, "Unexpected!\n"); return -ENOMEM; @@ -135,6 +147,8 @@ bool 
hv_result_needs_memory(u64 status) switch (hv_result(status)) { case HV_STATUS_INSUFFICIENT_MEMORY: case HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY: + case HV_STATUS_INSUFFICIENT_ROOT_MEMORY: + case HV_STATUS_INSUFFICIENT_CONTIGUOUS_ROOT_MEMORY: return true; } return false; diff --git a/include/hyperv/hvgdk_mini.h b/include/hyperv/hvgdk_mini.h index 70f22ef44948..5b74a857ef43 100644 --- a/include/hyperv/hvgdk_mini.h +++ b/include/hyperv/hvgdk_mini.h @@ -14,34 +14,36 @@ struct hv_u128 { } __packed; /* NOTE: when adding below, update hv_result_to_string() */ -#define HV_STATUS_SUCCESS 0x0 -#define HV_STATUS_INVALID_HYPERCALL_CODE 0x2 -#define HV_STATUS_INVALID_HYPERCALL_INPUT 0x3 -#define HV_STATUS_INVALID_ALIGNMENT 0x4 -#define HV_STATUS_INVALID_PARAMETER 0x5 -#define HV_STATUS_ACCESS_DENIED 0x6 -#define HV_STATUS_INVALID_PARTITION_STATE 0x7 -#define HV_STATUS_OPERATION_DENIED 0x8 -#define HV_STATUS_UNKNOWN_PROPERTY 0x9 -#define HV_STATUS_PROPERTY_VALUE_OUT_OF_RANGE 0xA -#define HV_STATUS_INSUFFICIENT_MEMORY 0xB -#define HV_STATUS_INVALID_PARTITION_ID 0xD -#define HV_STATUS_INVALID_VP_INDEX 0xE -#define HV_STATUS_NOT_FOUND 0x10 -#define HV_STATUS_INVALID_PORT_ID 0x11 -#define HV_STATUS_INVALID_CONNECTION_ID 0x12 -#define HV_STATUS_INSUFFICIENT_BUFFERS 0x13 -#define HV_STATUS_NOT_ACKNOWLEDGED 0x14 -#define HV_STATUS_INVALID_VP_STATE 0x15 -#define HV_STATUS_NO_RESOURCES 0x1D -#define HV_STATUS_PROCESSOR_FEATURE_NOT_SUPPORTED 0x20 -#define HV_STATUS_INVALID_LP_INDEX 0x41 -#define HV_STATUS_INVALID_REGISTER_VALUE 0x50 -#define HV_STATUS_OPERATION_FAILED 0x71 -#define HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY 0x75 -#define HV_STATUS_TIME_OUT 0x78 -#define HV_STATUS_CALL_PENDING 0x79 -#define HV_STATUS_VTL_ALREADY_ENABLED 0x86 +#define HV_STATUS_SUCCESS 0x0 +#define HV_STATUS_INVALID_HYPERCALL_CODE 0x2 +#define HV_STATUS_INVALID_HYPERCALL_INPUT 0x3 +#define HV_STATUS_INVALID_ALIGNMENT 0x4 +#define HV_STATUS_INVALID_PARAMETER 0x5 +#define HV_STATUS_ACCESS_DENIED 0x6 +#define 
HV_STATUS_INVALID_PARTITION_STATE 0x7 +#define HV_STATUS_OPERATION_DENIED 0x8 +#define HV_STATUS_UNKNOWN_PROPERTY 0x9 +#define HV_STATUS_PROPERTY_VALUE_OUT_OF_RANGE 0xA +#define HV_STATUS_INSUFFICIENT_MEMORY 0xB +#define HV_STATUS_INVALID_PARTITION_ID 0xD +#define HV_STATUS_INVALID_VP_INDEX 0xE +#define HV_STATUS_NOT_FOUND 0x10 +#define HV_STATUS_INVALID_PORT_ID 0x11 +#define HV_STATUS_INVALID_CONNECTION_ID 0x12 +#define HV_STATUS_INSUFFICIENT_BUFFERS 0x13 +#define HV_STATUS_NOT_ACKNOWLEDGED 0x14 +#define HV_STATUS_INVALID_VP_STATE 0x15 +#define HV_STATUS_NO_RESOURCES 0x1D +#define HV_STATUS_PROCESSOR_FEATURE_NOT_SUPPORTED 0x20 +#define HV_STATUS_INVALID_LP_INDEX 0x41 +#define HV_STATUS_INVALID_REGISTER_VALUE 0x50 +#define HV_STATUS_OPERATION_FAILED 0x71 +#define HV_STATUS_INSUFFICIENT_ROOT_MEMORY 0x73 +#define HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY 0x75 +#define HV_STATUS_TIME_OUT 0x78 +#define HV_STATUS_CALL_PENDING 0x79 +#define HV_STATUS_INSUFFICIENT_CONTIGUOUS_ROOT_MEMORY 0x83 +#define HV_STATUS_VTL_ALREADY_ENABLED 0x86 /* * The Hyper-V TimeRefCount register and the TSC
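With the final patch applied, the full v2 dispatch looks like the following user-space sketch. Assumptions: the 0xFFFF masking for hv_result(), the sentinel object standing in for HV_PARTITION_ID_SELF, and modelling the plain root-memory case as a single-page deposit.

```python
HV_STATUS_INSUFFICIENT_MEMORY = 0xB
HV_STATUS_INSUFFICIENT_ROOT_MEMORY = 0x73
HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY = 0x75
HV_STATUS_INSUFFICIENT_CONTIGUOUS_ROOT_MEMORY = 0x83
HV_MAX_CONTIGUOUS_ALLOCATION_PAGES = 8
HV_PARTITION_ID_SELF = object()  # stand-in for the real sentinel value

def deposit_target(hv_status, partition_id, is_root):
    """Return (partition_id, num_pages) to deposit, or None for the
    'Unexpected!' branch (-ENOMEM in the kernel code)."""
    status = hv_status & 0xFFFF  # assumed hv_result() masking
    if status == HV_STATUS_INSUFFICIENT_MEMORY:
        return (partition_id, 1)
    if status == HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY:
        return (partition_id, HV_MAX_CONTIGUOUS_ALLOCATION_PAGES)
    if status in (HV_STATUS_INSUFFICIENT_ROOT_MEMORY,
                  HV_STATUS_INSUFFICIENT_CONTIGUOUS_ROOT_MEMORY):
        if not is_root:
            return None  # never expected in a guest/L1VH context
        # The *_ROOT_MEMORY statuses redirect the deposit to the root
        # partition's own pool via HV_PARTITION_ID_SELF.
        pages = (HV_MAX_CONTIGUOUS_ALLOCATION_PAGES
                 if status == HV_STATUS_INSUFFICIENT_CONTIGUOUS_ROOT_MEMORY
                 else 1)  # modelled here as one page
        return (HV_PARTITION_ID_SELF, pages)
    return None
```

The key point the sketch shows: only the two *_ROOT_MEMORY statuses override the caller-supplied partition id, and they are rejected outright when not running as the root partition.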
{ "author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>", "date": "Mon, 02 Feb 2026 17:59:14 +0000", "thread_id": "177005515446.120041.8169777750859263202.stgit@skinsburskii-cloud-desktop.internal.cloudapp.net.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
SHA-1 is considered deprecated and insecure due to vulnerabilities that can lead to hash collisions. Most distributions have already been using SHA-2 for module signing because of this. The default was also changed last year from SHA-1 to SHA-512 in f3b93547b91a ("module: sign with sha512 instead of sha1 by default"). This was not reported to cause any issues. Therefore, it now seems to be a good time to remove SHA-1 support for module signing. Looking at the configs of several distributions [1], it seems only Android still uses SHA-1 for module signing. @Sami, is this correct and is there a specific reason for using SHA-1? Note: The second patch has a minor conflict with the sign-file update in the series "lib/crypto: Add ML-DSA signing" [2]. [1] https://oracle.github.io/kconfigs/?config=UTS_RELEASE&config=MODULE_SIG_SHA1&version=be8f5f6abf0b0979be20ee8d9afa2a49a13500b8 [2] https://lore.kernel.org/linux-crypto/61637.1762509938@warthog.procyon.org.uk/ Petr Pavlu (2): module: Remove SHA-1 support for module signing sign-file: Remove support for signing with PKCS#7 kernel/module/Kconfig | 5 ---- scripts/sign-file.c | 66 ++----------------------------------------- 2 files changed, 3 insertions(+), 68 deletions(-) base-commit: 4427259cc7f7571a157fbc9b5011e1ef6fe0a4a8 -- 2.51.1
SHA-1 is considered deprecated and insecure due to vulnerabilities that can
lead to hash collisions. Most distributions have already been using SHA-2 for
module signing because of this. The default was also changed last year from
SHA-1 to SHA-512 in commit f3b93547b91a ("module: sign with sha512 instead of
sha1 by default"). This was not reported to cause any issues. Therefore, it
now seems to be a good time to remove SHA-1 support for module signing.

Commit 16ab7cb5825f ("crypto: pkcs7 - remove sha1 support") previously
removed support for reading PKCS#7/CMS signed with SHA-1, along with the
ability to use SHA-1 for module signing. This change broke iwd and was
subsequently completely reverted in commit 203a6763ab69 ("Revert "crypto:
pkcs7 - remove sha1 support""). However, dropping only the support for using
SHA-1 for module signing is unrelated and can still be done separately.

Note that this change only removes support for new modules to be SHA-1
signed, but already signed modules can still be loaded.

Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
---
 kernel/module/Kconfig | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/kernel/module/Kconfig b/kernel/module/Kconfig
index 2a1beebf1d37..be74917802ad 100644
--- a/kernel/module/Kconfig
+++ b/kernel/module/Kconfig
@@ -299,10 +299,6 @@ choice
 	  possible to load a signed module containing the algorithm to check
 	  the signature on that module.
 
-config MODULE_SIG_SHA1
-	bool "SHA-1"
-	select CRYPTO_SHA1
-
 config MODULE_SIG_SHA256
 	bool "SHA-256"
 	select CRYPTO_SHA256
@@ -332,7 +328,6 @@ endchoice
 config MODULE_SIG_HASH
 	string
 	depends on MODULE_SIG || IMA_APPRAISE_MODSIG
-	default "sha1" if MODULE_SIG_SHA1
 	default "sha256" if MODULE_SIG_SHA256
 	default "sha384" if MODULE_SIG_SHA384
 	default "sha512" if MODULE_SIG_SHA512
-- 
2.51.1
{ "author": "Petr Pavlu <petr.pavlu@suse.com>", "date": "Tue, 11 Nov 2025 16:48:31 +0100", "thread_id": "CABCJKucAkZa10TYRQ+NxPPw3KaTq4QVk5+XZWyCPpSrpMR_GEg@mail.gmail.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
The PKCS#7 code in sign-file allows for signing only with SHA-1. Since SHA-1
support for module signing has been removed, drop PKCS#7 support in favor of
using only CMS.

The use of the PKCS#7 code is selected by the following:

  #if defined(LIBRESSL_VERSION_NUMBER) || \
      OPENSSL_VERSION_NUMBER < 0x10000000L || \
      defined(OPENSSL_NO_CMS)
  #define USE_PKCS7
  #endif

Looking at the individual ifdefs:

* LIBRESSL_VERSION_NUMBER: LibreSSL added the CMS implementation from OpenSSL
  in 3.1.0, making the ifdef no longer relevant. This version was released on
  April 8, 2020.

* OPENSSL_VERSION_NUMBER < 0x10000000L: OpenSSL 1.0.0 was released on
  March 29, 2010. Supporting earlier versions should no longer be necessary.
  The file Documentation/process/changes.rst already states that at least
  version 1.0.0 is required to build the kernel.

* OPENSSL_NO_CMS: OpenSSL can be configured with "no-cms" to disable the CMS
  support. In this case, sign-file will no longer be usable. The CMS support
  is now required.

In practice, since distributions now typically sign modules with SHA-2, for
which sign-file already required CMS support, removing PKCS#7 shouldn't cause
any issues.

Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
---
 scripts/sign-file.c | 66 +++------------------------------------------
 1 file changed, 3 insertions(+), 63 deletions(-)

diff --git a/scripts/sign-file.c b/scripts/sign-file.c
index 7070245edfc1..16f2bf2e1e3c 100644
--- a/scripts/sign-file.c
+++ b/scripts/sign-file.c
@@ -24,6 +24,7 @@
 #include <arpa/inet.h>
 #include <openssl/opensslv.h>
 #include <openssl/bio.h>
+#include <openssl/cms.h>
 #include <openssl/evp.h>
 #include <openssl/pem.h>
 #include <openssl/err.h>
@@ -39,29 +40,6 @@
 #endif
 #include "ssl-common.h"
 
-/*
- * Use CMS if we have openssl-1.0.0 or newer available - otherwise we have to
- * assume that it's not available and its header file is missing and that we
- * should use PKCS#7 instead.  Switching to the older PKCS#7 format restricts
- * the options we have on specifying the X.509 certificate we want.
- *
- * Further, older versions of OpenSSL don't support manually adding signers to
- * the PKCS#7 message so have to accept that we get a certificate included in
- * the signature message.  Nor do such older versions of OpenSSL support
- * signing with anything other than SHA1 - so we're stuck with that if such is
- * the case.
- */
-#if defined(LIBRESSL_VERSION_NUMBER) || \
-    OPENSSL_VERSION_NUMBER < 0x10000000L || \
-    defined(OPENSSL_NO_CMS)
-#define USE_PKCS7
-#endif
-#ifndef USE_PKCS7
-#include <openssl/cms.h>
-#else
-#include <openssl/pkcs7.h>
-#endif
-
 struct module_signature {
 	uint8_t		algo;	/* Public-key crypto algorithm [0] */
 	uint8_t		hash;	/* Digest algorithm [0] */
@@ -228,15 +206,10 @@ int main(int argc, char **argv)
 	bool raw_sig = false;
 	unsigned char buf[4096];
 	unsigned long module_size, sig_size;
-	unsigned int use_signed_attrs;
 	const EVP_MD *digest_algo;
 	EVP_PKEY *private_key;
-#ifndef USE_PKCS7
 	CMS_ContentInfo *cms = NULL;
 	unsigned int use_keyid = 0;
-#else
-	PKCS7 *pkcs7 = NULL;
-#endif
 	X509 *x509;
 	BIO *bd, *bm;
 	int opt, n;
@@ -246,21 +219,13 @@ int main(int argc, char **argv)
 
 	key_pass = getenv("KBUILD_SIGN_PIN");
 
-#ifndef USE_PKCS7
-	use_signed_attrs = CMS_NOATTR;
-#else
-	use_signed_attrs = PKCS7_NOATTR;
-#endif
-
 	do {
 		opt = getopt(argc, argv, "sdpk");
 		switch (opt) {
 		case 's': raw_sig = true; break;
 		case 'p': save_sig = true; break;
 		case 'd': sign_only = true; save_sig = true; break;
-#ifndef USE_PKCS7
 		case 'k': use_keyid = CMS_USE_KEYID; break;
-#endif
 		case -1: break;
 		default: format();
 		}
@@ -289,14 +254,6 @@ int main(int argc, char **argv)
 		replace_orig = true;
 	}
 
-#ifdef USE_PKCS7
-	if (strcmp(hash_algo, "sha1") != 0) {
-		fprintf(stderr, "sign-file: %s only supports SHA1 signing\n",
-			OPENSSL_VERSION_TEXT);
-		exit(3);
-	}
-#endif
-
 	/* Open the module file */
 	bm = BIO_new_file(module_name, "rb");
 	ERR(!bm, "%s", module_name);
@@ -314,7 +271,6 @@ int main(int argc, char **argv)
 	digest_algo = EVP_get_digestbyname(hash_algo);
 	ERR(!digest_algo, "EVP_get_digestbyname");
 
-#ifndef USE_PKCS7
 	/* Load the signature message from the digest buffer. */
 	cms = CMS_sign(NULL, NULL, NULL, NULL,
 		       CMS_NOCERTS | CMS_PARTIAL | CMS_BINARY |
@@ -323,19 +279,12 @@ int main(int argc, char **argv)
 	ERR(!CMS_add1_signer(cms, x509, private_key, digest_algo,
 			     CMS_NOCERTS | CMS_BINARY |
-			     CMS_NOSMIMECAP | use_keyid |
-			     use_signed_attrs),
+			     CMS_NOSMIMECAP | CMS_NOATTR |
+			     use_keyid),
 	    "CMS_add1_signer");
 	ERR(CMS_final(cms, bm, NULL, CMS_NOCERTS | CMS_BINARY) != 1,
 	    "CMS_final");
 
-#else
-	pkcs7 = PKCS7_sign(x509, private_key, NULL, bm,
-			   PKCS7_NOCERTS | PKCS7_BINARY |
-			   PKCS7_DETACHED | use_signed_attrs);
-	ERR(!pkcs7, "PKCS7_sign");
-#endif
-
 	if (save_sig) {
 		char *sig_file_name;
 		BIO *b;
@@ -344,13 +293,8 @@ int main(int argc, char **argv)
 		    "asprintf");
 		b = BIO_new_file(sig_file_name, "wb");
 		ERR(!b, "%s", sig_file_name);
-#ifndef USE_PKCS7
 		ERR(i2d_CMS_bio_stream(b, cms, NULL, 0) != 1,
 		    "%s", sig_file_name);
-#else
-		ERR(i2d_PKCS7_bio(b, pkcs7) != 1,
-		    "%s", sig_file_name);
-#endif
 		BIO_free(b);
 	}
@@ -377,11 +321,7 @@ int main(int argc, char **argv)
 	module_size = BIO_number_written(bd);
 
 	if (!raw_sig) {
-#ifndef USE_PKCS7
 		ERR(i2d_CMS_bio_stream(bd, cms, NULL, 0) != 1, "%s", dest_name);
-#else
-		ERR(i2d_PKCS7_bio(bd, pkcs7) != 1, "%s", dest_name);
-#endif
 	} else {
 		BIO *b;
-- 
2.51.1
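The CMS signing path that sign-file keeps can be approximated from the command line with OpenSSL's `cms` subcommand. The following is a hedged sketch, not the kernel's actual build step: it assumes the `openssl` CLI is available and substitutes a throwaway key and a dummy payload for the kernel's signing key and a real `.ko` file, while using the same detached, binary, attribute-less, certificate-less style as the CMS flags in sign-file.

```shell
# Generate a throwaway signing key and certificate (stand-ins for the
# kernel's module signing key pair).
openssl req -new -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=test-modsign" -keyout key.pem -out cert.pem 2>/dev/null

# A dummy payload standing in for a module; real modules are ELF files.
printf 'dummy module payload' > module.ko

# Detached CMS signature: binary mode, no signed attributes, no embedded
# certificate, no SMIMECapabilities, SHA-512 digest -- mirroring the
# CMS_NOCERTS | CMS_BINARY | CMS_NOSMIMECAP | CMS_NOATTR flags above.
openssl cms -sign -binary -noattr -nocerts -nosmimecap -md sha512 \
    -in module.ko -signer cert.pem -inkey key.pem \
    -outform DER -out module.p7s

# The result is a DER-encoded CMS structure; show the first few ASN.1 nodes.
openssl asn1parse -inform DER -in module.p7s | head -n 5
```

sign-file itself goes further than this sketch: it appends the DER blob, a `struct module_signature` trailer, and a magic string to the module file, whereas the sketch stops at the raw CMS structure.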
{ "author": "Petr Pavlu <petr.pavlu@suse.com>", "date": "Tue, 11 Nov 2025 16:48:32 +0100", "thread_id": "CABCJKucAkZa10TYRQ+NxPPw3KaTq4QVk5+XZWyCPpSrpMR_GEg@mail.gmail.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
Hi Petr,

On Tue, Nov 11, 2025 at 7:49 AM Petr Pavlu <petr.pavlu@suse.com> wrote:

It looks like GKI just uses the defaults here. Overall, Android doesn't rely
on module signing for security, it's only used to differentiate between
module types. Dropping SHA-1 support sounds like a good idea to me.

For the series:

Reviewed-by: Sami Tolvanen <samitolvanen@google.com>

Sami
{ "author": "Sami Tolvanen <samitolvanen@google.com>", "date": "Tue, 11 Nov 2025 08:22:34 -0800", "thread_id": "CABCJKucAkZa10TYRQ+NxPPw3KaTq4QVk5+XZWyCPpSrpMR_GEg@mail.gmail.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
On Tue, 2025-11-11 at 16:48 +0100, Petr Pavlu wrote:

The change log is a bit alarmist. CMS really *is* PKCS7 and most literature
will refer to CMS as PKCS7. What you're really deprecating is the use of the
PKCS7_sign() API, which can only produce SHA-1 signatures... openssl is fully
capable of producing PKCS7 signatures with any hash using a different
PKCS7_... API set, but the CMS_... API is newer.

The point being the module signature type is still set to PKEY_ID_PKCS7, so
it doesn't square with the commit log saying "drop PKCS#7 support". What you
really mean is only use the openssl CMS_... API for producing PKCS7
signatures.

Regards,
James
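James's point that the PKCS#7 *format* is not limited to SHA-1 can be illustrated with `openssl smime`, which emits PKCS#7 structures and accepts an arbitrary digest via `-md`. A hedged sketch with a throwaway key (assuming the `openssl` CLI is available; all file names here are arbitrary):

```shell
# Throwaway key and certificate for illustration only.
openssl req -new -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=pkcs7-demo" -keyout key7.pem -out cert7.pem 2>/dev/null
printf 'payload' > data.bin

# 'openssl smime' produces a PKCS#7 structure; -md selects the digest, so
# the format itself happily carries SHA-256 -- it was only the old one-shot
# PKCS7_sign() call path in sign-file that was pinned to SHA-1.
openssl smime -sign -binary -noattr -md sha256 \
    -in data.bin -signer cert7.pem -inkey key7.pem \
    -outform DER -out data.p7b

# The digest algorithm OID recorded in the structure is SHA-256.
openssl asn1parse -inform DER -in data.p7b | grep sha256
```

This matches David's later observation that the one-shot PKCS7_sign() effectively ignores the caller's hash choice; the two-step PKCS7_sign_add_signer() path in his follow-up patch is what lets sign-file honour the hash parameter.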
{ "author": "James Bottomley <James.Bottomley@HansenPartnership.com>", "date": "Tue, 11 Nov 2025 11:53:34 -0500", "thread_id": "CABCJKucAkZa10TYRQ+NxPPw3KaTq4QVk5+XZWyCPpSrpMR_GEg@mail.gmail.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
On Tue, Nov 11, 2025 at 04:48:31PM +0100, Petr Pavlu wrote:

Agreed.

Reviewed-by: Aaron Tomlin <atomlin@atomlin.com>

-- 
Aaron Tomlin
{ "author": "Aaron Tomlin <atomlin@atomlin.com>", "date": "Tue, 11 Nov 2025 17:37:28 -0500", "thread_id": "CABCJKucAkZa10TYRQ+NxPPw3KaTq4QVk5+XZWyCPpSrpMR_GEg@mail.gmail.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
On 11/11/25 5:53 PM, James Bottomley wrote:

Ok, I plan to update the description to the following in v2:

  sign-file: Use only the OpenSSL CMS API for signing

  The USE_PKCS7 code in sign-file utilizes PKCS7_sign(), which allows
  signing only with SHA-1. Since SHA-1 support for module signing has been
  removed, drop the use of the OpenSSL PKCS7 API by the tool in favor of
  using only the newer CMS API.

  The use of the PKCS7 API is selected by the following:

    #if defined(LIBRESSL_VERSION_NUMBER) || \
        OPENSSL_VERSION_NUMBER < 0x10000000L || \
        defined(OPENSSL_NO_CMS)
    #define USE_PKCS7
    #endif

  Looking at the individual ifdefs:

  * LIBRESSL_VERSION_NUMBER: LibreSSL added the CMS API implementation from
    OpenSSL in 3.1.0, making the ifdef no longer relevant. This version was
    released on April 8, 2020.

  * OPENSSL_VERSION_NUMBER < 0x10000000L: OpenSSL 1.0.0 was released on
    March 29, 2010. Supporting earlier versions should no longer be
    necessary. The file Documentation/process/changes.rst already states
    that at least version 1.0.0 is required to build the kernel.

  * OPENSSL_NO_CMS: OpenSSL can be configured with "no-cms" to disable CMS
    support. In this case, sign-file will no longer be usable. The CMS API
    support is now required.

  In practice, since distributions now typically sign modules with SHA-2,
  for which sign-file already required CMS API support, removing the
  USE_PKCS7 code shouldn't cause any issues.

-- 
Thanks,
Petr
{ "author": "Petr Pavlu <petr.pavlu@suse.com>", "date": "Wed, 12 Nov 2025 14:51:24 +0100", "thread_id": "CABCJKucAkZa10TYRQ+NxPPw3KaTq4QVk5+XZWyCPpSrpMR_GEg@mail.gmail.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
On Wed, 2025-11-12 at 14:51 +0100, Petr Pavlu wrote:

Much better, thanks!

Regards,
James
{ "author": "James Bottomley <James.Bottomley@HansenPartnership.com>", "date": "Wed, 12 Nov 2025 10:05:57 -0500", "thread_id": "CABCJKucAkZa10TYRQ+NxPPw3KaTq4QVk5+XZWyCPpSrpMR_GEg@mail.gmail.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
Petr Pavlu <petr.pavlu@suse.com> wrote:

We're looking at moving to ML-DSA, and the CMS support there is slightly
dodgy at the moment, so we need to hold off a bit on this change.

Patch 1, removing the option to sign with SHA-1 from the kernel is fine, but
doesn't stop things that are signed with SHA-1 from being verified.

David
{ "author": "David Howells <dhowells@redhat.com>", "date": "Wed, 12 Nov 2025 15:36:57 +0000", "thread_id": "CABCJKucAkZa10TYRQ+NxPPw3KaTq4QVk5+XZWyCPpSrpMR_GEg@mail.gmail.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
On Wed, 2025-11-12 at 15:36 +0000, David Howells wrote:

How will removing PKCS7_sign, which can only do sha1 signatures, affect
that? Is the dodginess that the PKCS7_... API is better than CMS_... for PQC
at the moment? In which case we could pretty much do a rip and replace of
the CMS_ API if necessary, but that would be a completely separate patch.

Regards,
James
{ "author": "James Bottomley <James.Bottomley@HansenPartnership.com>", "date": "Wed, 12 Nov 2025 10:47:23 -0500", "thread_id": "CABCJKucAkZa10TYRQ+NxPPw3KaTq4QVk5+XZWyCPpSrpMR_GEg@mail.gmail.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
James Bottomley <James.Bottomley@HansenPartnership.com> wrote:

OpenSSL-3.5.1's ML-DSA support isn't completely right - in particular
CMS_NOATTR is not currently supported. I believe there is a fix in the works
there, but I doubt it has made it to all the distributions yet.

I'm only asking that we hold off a cycle; that will probably suffice.

David
{ "author": "David Howells <dhowells@redhat.com>", "date": "Wed, 12 Nov 2025 15:52:40 +0000", "thread_id": "CABCJKucAkZa10TYRQ+NxPPw3KaTq4QVk5+XZWyCPpSrpMR_GEg@mail.gmail.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
On Wed, 2025-11-12 at 15:52 +0000, David Howells wrote:

I get that PQC in openssl-3.5 is highly experimental, but that merely means
we tell people not to use it for a while. However, what I don't see is how
this impacts PKCS7_sign removal. The CMS API can do a sha1 signature if
that's what people want and keeping the PKCS7_sign API won't prevent anyone
with openssl-3.5 installed from trying a PQ signature.

Right but why? Is your thought that we'll have to change the CMS_ code
slightly and this might conflict?

Regards,
James
{ "author": "James Bottomley <James.Bottomley@HansenPartnership.com>", "date": "Wed, 12 Nov 2025 10:58:31 -0500", "thread_id": "CABCJKucAkZa10TYRQ+NxPPw3KaTq4QVk5+XZWyCPpSrpMR_GEg@mail.gmail.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
On Tue, 11 Nov 2025 16:48:30 +0100, Petr Pavlu wrote:

Applied to modules-next, thanks!

[1/2] module: Remove SHA-1 support for module signing
      commit: 148519a06304af4e6fbb82f20e1a4480e2c1b126
[2/2] sign-file: Use only the OpenSSL CMS API for signing
      commit: d7afd65b4acc775df872af30948dd7c196587169

Best regards,
Sami
{ "author": "Sami Tolvanen <samitolvanen@google.com>", "date": "Mon, 22 Dec 2025 20:24:17 +0000", "thread_id": "CABCJKucAkZa10TYRQ+NxPPw3KaTq4QVk5+XZWyCPpSrpMR_GEg@mail.gmail.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
Here's an alternative patch that will allow PKCS#7 with the hash specified on the command line, removing the SHA1 restriction. David --- sign-file, pkcs7: Honour the hash parameter to sign-file Currently, the sign-file program rejects anything other than "sha1" as the hash parameter if it is going to produce a PKCS#7 message-based signature rather than a CMS message-based signature (though it then ignores this argument and uses whatever is selected as the default which might not be SHA1 and may actually reflect whatever is used to sign the X.509 certificate). Fix sign-file to actually use the specified hash when producing a PKCS#7 message rather than just accepting the default. Fixes: 283e8ba2dfde ("MODSIGN: Change from CMS to PKCS#7 signing if the openssl is too old") Signed-off-by: David Howells <dhowells@redhat.com> cc: Lukas Wunner <lukas@wunner.de> cc: Ignat Korchagin <ignat@cloudflare.com> cc: Jarkko Sakkinen <jarkko@kernel.org> cc: Stephan Mueller <smueller@chronox.de> cc: Herbert Xu <herbert@gondor.apana.org.au> cc: Eric Biggers <ebiggers@kernel.org> cc: keyrings@vger.kernel.org cc: linux-crypto@vger.kernel.org diff --git a/scripts/sign-file.c b/scripts/sign-file.c index 547b97097230..f0b7e5616b9a 100644 --- a/scripts/sign-file.c +++ b/scripts/sign-file.c @@ -56,6 +56,7 @@ defined(OPENSSL_NO_CMS) #define USE_PKCS7 #endif +#define USE_PKCS7 #ifndef USE_PKCS7 #include <openssl/cms.h> #else @@ -289,14 +290,6 @@ int main(int argc, char **argv) replace_orig = true; } -#ifdef USE_PKCS7 - if (strcmp(hash_algo, "sha1") != 0) { - fprintf(stderr, "sign-file: %s only supports SHA1 signing\n", - OPENSSL_VERSION_TEXT); - exit(3); - } -#endif - /* Open the module file */ bm = BIO_new_file(module_name, "rb"); ERR(!bm, "%s", module_name); @@ -348,10 +341,17 @@ int main(int argc, char **argv) "CMS_final"); #else - pkcs7 = PKCS7_sign(x509, private_key, NULL, bm, - PKCS7_NOCERTS | PKCS7_BINARY | - PKCS7_DETACHED | use_signed_attrs); + unsigned int flags = + PKCS7_NOCERTS | + 
PKCS7_BINARY | + PKCS7_DETACHED | + use_signed_attrs; + pkcs7 = PKCS7_sign(NULL, NULL, NULL, bm, flags); ERR(!pkcs7, "PKCS7_sign"); + + ERR(!PKCS7_sign_add_signer(pkcs7, x509, private_key, digest_algo, flags), + "PKCS7_sign_add_signer"); + ERR(PKCS7_final(pkcs7, bm, flags) != 1, "PKCS7_final"); #endif if (save_sig) {
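The hash argument matters because the digest embedded in the PKCS#7 SignerInfo is computed with the chosen algorithm, so a signature produced with the default algorithm cannot verify against a kernel expecting the one named on the command line. A minimal illustration of how the digest depends on the algorithm, using Python's hashlib rather than the OpenSSL code path sign-file actually uses (payload bytes are made up):

```python
import hashlib

# Stand-in for the raw module contents that sign-file hashes.
module_bytes = b"\x7fELF fake module payload"

# Each algorithm yields a digest of a different size and value, so a
# SignerInfo built with one hash cannot verify under another.
digests = {name: hashlib.new(name, module_bytes).digest()
           for name in ("sha1", "sha256", "sha512")}

for name, digest in digests.items():
    print(name, len(digest))
```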
{ "author": "David Howells <dhowells@redhat.com>", "date": "Mon, 02 Feb 2026 11:24:22 +0000", "thread_id": "CABCJKucAkZa10TYRQ+NxPPw3KaTq4QVk5+XZWyCPpSrpMR_GEg@mail.gmail.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
SHA-1 is considered deprecated and insecure due to vulnerabilities that can lead to hash collisions. Most distributions have already been using SHA-2 for module signing because of this. The default was also changed last year from SHA-1 to SHA-512 in f3b93547b91a ("module: sign with sha512 instead of sha1 by default"). This was not reported to cause any issues. Therefore, it now seems to be a good time to remove SHA-1 support for module signing. Looking at the configs of several distributions [1], it seems only Android still uses SHA-1 for module signing. @Sami, is this correct and is there a specific reason for using SHA-1? Note: The second patch has a minor conflict with the sign-file update in the series "lib/crypto: Add ML-DSA signing" [2]. [1] https://oracle.github.io/kconfigs/?config=UTS_RELEASE&config=MODULE_SIG_SHA1&version=be8f5f6abf0b0979be20ee8d9afa2a49a13500b8 [2] https://lore.kernel.org/linux-crypto/61637.1762509938@warthog.procyon.org.uk/ Petr Pavlu (2): module: Remove SHA-1 support for module signing sign-file: Remove support for signing with PKCS#7 kernel/module/Kconfig | 5 ---- scripts/sign-file.c | 66 ++----------------------------------------- 2 files changed, 3 insertions(+), 68 deletions(-) base-commit: 4427259cc7f7571a157fbc9b5011e1ef6fe0a4a8 -- 2.51.1
David Howells <dhowells@redhat.com> wrote: Apologies, that line was so I could debug it and should've been removed. David
{ "author": "David Howells <dhowells@redhat.com>", "date": "Mon, 02 Feb 2026 11:27:39 +0000", "thread_id": "CABCJKucAkZa10TYRQ+NxPPw3KaTq4QVk5+XZWyCPpSrpMR_GEg@mail.gmail.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
On 2/2/26 12:24 PM, David Howells wrote: Is it worth keeping this sign-file code that uses the OpenSSL PKCS7 API instead of having only one variant that uses the newer CMS API? -- Thanks, Petr
{ "author": "Petr Pavlu <petr.pavlu@suse.com>", "date": "Mon, 2 Feb 2026 13:25:06 +0100", "thread_id": "CABCJKucAkZa10TYRQ+NxPPw3KaTq4QVk5+XZWyCPpSrpMR_GEg@mail.gmail.com.mbox.gz" }
lkml
[PATCH 0/2] module: Remove SHA-1 support for module signing
On Mon, Feb 2, 2026 at 4:25 AM Petr Pavlu <petr.pavlu@suse.com> wrote: I agree that keeping only the CMS variant makes more sense. However, David, please let me know if you'd prefer that I drop the patch removing PKCS7 support from sign-file for now. I assumed you had no further objections since the discussion in the other sub-thread tapered off, but perhaps I misread that. Sami
{ "author": "Sami Tolvanen <samitolvanen@google.com>", "date": "Mon, 2 Feb 2026 09:01:19 -0800", "thread_id": "CABCJKucAkZa10TYRQ+NxPPw3KaTq4QVk5+XZWyCPpSrpMR_GEg@mail.gmail.com.mbox.gz" }
lkml
[PATCH 5.15.y 1/3] wifi: cfg80211: add a work abstraction with special semantics
From: Johannes Berg <johannes.berg@intel.com> [ Upstream commit a3ee4dc84c4e9d14cb34dad095fd678127aca5b6 ] Add a work abstraction at the cfg80211 level that will always hold the wiphy_lock() for any work executed and therefore also can be canceled safely (without waiting) while holding that. This improves on what we do now as with the new wiphy works we don't have to worry about locking while cancelling them safely. Also, don't let such works run while the device is suspended, since they'll likely need to interact with the device. Flush them before suspend though. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Hanne-Lotta Mäenpää <hannelotta@gmail.com> --- include/net/cfg80211.h | 95 ++++++++++++++++++++++++++++++-- net/wireless/core.c | 121 +++++++++++++++++++++++++++++++++++++++++ net/wireless/core.h | 7 +++ net/wireless/sysfs.c | 8 ++- 4 files changed, 226 insertions(+), 5 deletions(-) diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h index 66a75723f559..392576342661 100644 --- a/include/net/cfg80211.h +++ b/include/net/cfg80211.h @@ -5301,12 +5301,17 @@ struct cfg80211_cqm_config; * wiphy_lock - lock the wiphy * @wiphy: the wiphy to lock * - * This is mostly exposed so it can be done around registering and - * unregistering netdevs that aren't created through cfg80211 calls, - * since that requires locking in cfg80211 when the notifiers is - * called, but that cannot differentiate which way it's called. + * This is needed around registering and unregistering netdevs that + * aren't created through cfg80211 calls, since that requires locking + * in cfg80211 when the notifiers is called, but that cannot + * differentiate which way it's called. + * + * It can also be used by drivers for their own purposes. * * When cfg80211 ops are called, the wiphy is already locked. + * + * Note that this makes sure that no workers that have been queued + * with wiphy_queue_work() are running. 
*/ static inline void wiphy_lock(struct wiphy *wiphy) __acquires(&wiphy->mtx) @@ -5326,6 +5331,88 @@ static inline void wiphy_unlock(struct wiphy *wiphy) mutex_unlock(&wiphy->mtx); } +struct wiphy_work; +typedef void (*wiphy_work_func_t)(struct wiphy *, struct wiphy_work *); + +struct wiphy_work { + struct list_head entry; + wiphy_work_func_t func; +}; + +static inline void wiphy_work_init(struct wiphy_work *work, + wiphy_work_func_t func) +{ + INIT_LIST_HEAD(&work->entry); + work->func = func; +} + +/** + * wiphy_work_queue - queue work for the wiphy + * @wiphy: the wiphy to queue for + * @work: the work item + * + * This is useful for work that must be done asynchronously, and work + * queued here has the special property that the wiphy mutex will be + * held as if wiphy_lock() was called, and that it cannot be running + * after wiphy_lock() was called. Therefore, wiphy_cancel_work() can + * use just cancel_work() instead of cancel_work_sync(), it requires + * being in a section protected by wiphy_lock(). + */ +void wiphy_work_queue(struct wiphy *wiphy, struct wiphy_work *work); + +/** + * wiphy_work_cancel - cancel previously queued work + * @wiphy: the wiphy, for debug purposes + * @work: the work to cancel + * + * Cancel the work *without* waiting for it, this assumes being + * called under the wiphy mutex acquired by wiphy_lock(). 
+ */ +void wiphy_work_cancel(struct wiphy *wiphy, struct wiphy_work *work); + +struct wiphy_delayed_work { + struct wiphy_work work; + struct wiphy *wiphy; + struct timer_list timer; +}; + +void wiphy_delayed_work_timer(struct timer_list *t); + +static inline void wiphy_delayed_work_init(struct wiphy_delayed_work *dwork, + wiphy_work_func_t func) +{ + timer_setup(&dwork->timer, wiphy_delayed_work_timer, 0); + wiphy_work_init(&dwork->work, func); +} + +/** + * wiphy_delayed_work_queue - queue delayed work for the wiphy + * @wiphy: the wiphy to queue for + * @dwork: the delayable worker + * @delay: number of jiffies to wait before queueing + * + * This is useful for work that must be done asynchronously, and work + * queued here has the special property that the wiphy mutex will be + * held as if wiphy_lock() was called, and that it cannot be running + * after wiphy_lock() was called. Therefore, wiphy_cancel_work() can + * use just cancel_work() instead of cancel_work_sync(), it requires + * being in a section protected by wiphy_lock(). + */ +void wiphy_delayed_work_queue(struct wiphy *wiphy, + struct wiphy_delayed_work *dwork, + unsigned long delay); + +/** + * wiphy_delayed_work_cancel - cancel previously queued delayed work + * @wiphy: the wiphy, for debug purposes + * @dwork: the delayed work to cancel + * + * Cancel the work *without* waiting for it, this assumes being + * called under the wiphy mutex acquired by wiphy_lock(). 
+ */ +void wiphy_delayed_work_cancel(struct wiphy *wiphy, + struct wiphy_delayed_work *dwork); + /** * struct wireless_dev - wireless device state * diff --git a/net/wireless/core.c b/net/wireless/core.c index d51d27ff3729..788ca1055d6a 100644 --- a/net/wireless/core.c +++ b/net/wireless/core.c @@ -410,6 +410,34 @@ static void cfg80211_propagate_cac_done_wk(struct work_struct *work) rtnl_unlock(); } +static void cfg80211_wiphy_work(struct work_struct *work) +{ + struct cfg80211_registered_device *rdev; + struct wiphy_work *wk; + + rdev = container_of(work, struct cfg80211_registered_device, wiphy_work); + + wiphy_lock(&rdev->wiphy); + if (rdev->suspended) + goto out; + + spin_lock_irq(&rdev->wiphy_work_lock); + wk = list_first_entry_or_null(&rdev->wiphy_work_list, + struct wiphy_work, entry); + if (wk) { + list_del_init(&wk->entry); + if (!list_empty(&rdev->wiphy_work_list)) + schedule_work(work); + spin_unlock_irq(&rdev->wiphy_work_lock); + + wk->func(&rdev->wiphy, wk); + } else { + spin_unlock_irq(&rdev->wiphy_work_lock); + } +out: + wiphy_unlock(&rdev->wiphy); +} + /* exported functions */ struct wiphy *wiphy_new_nm(const struct cfg80211_ops *ops, int sizeof_priv, @@ -535,6 +563,9 @@ struct wiphy *wiphy_new_nm(const struct cfg80211_ops *ops, int sizeof_priv, return NULL; } + INIT_WORK(&rdev->wiphy_work, cfg80211_wiphy_work); + INIT_LIST_HEAD(&rdev->wiphy_work_list); + spin_lock_init(&rdev->wiphy_work_lock); INIT_WORK(&rdev->rfkill_block, cfg80211_rfkill_block_work); INIT_WORK(&rdev->conn_work, cfg80211_conn_work); INIT_WORK(&rdev->event_work, cfg80211_event_work); @@ -1002,6 +1033,31 @@ void wiphy_rfkill_start_polling(struct wiphy *wiphy) } EXPORT_SYMBOL(wiphy_rfkill_start_polling); +void cfg80211_process_wiphy_works(struct cfg80211_registered_device *rdev) +{ + unsigned int runaway_limit = 100; + unsigned long flags; + + lockdep_assert_held(&rdev->wiphy.mtx); + + spin_lock_irqsave(&rdev->wiphy_work_lock, flags); + while (!list_empty(&rdev->wiphy_work_list)) { + 
struct wiphy_work *wk; + + wk = list_first_entry(&rdev->wiphy_work_list, + struct wiphy_work, entry); + list_del_init(&wk->entry); + spin_unlock_irqrestore(&rdev->wiphy_work_lock, flags); + + wk->func(&rdev->wiphy, wk); + + spin_lock_irqsave(&rdev->wiphy_work_lock, flags); + if (WARN_ON(--runaway_limit == 0)) + INIT_LIST_HEAD(&rdev->wiphy_work_list); + } + spin_unlock_irqrestore(&rdev->wiphy_work_lock, flags); +} + void wiphy_unregister(struct wiphy *wiphy) { struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy); @@ -1040,9 +1096,14 @@ void wiphy_unregister(struct wiphy *wiphy) cfg80211_rdev_list_generation++; device_del(&rdev->wiphy.dev); + /* surely nothing is reachable now, clean up work */ + cfg80211_process_wiphy_works(rdev); wiphy_unlock(&rdev->wiphy); rtnl_unlock(); + /* this has nothing to do now but make sure it's gone */ + cancel_work_sync(&rdev->wiphy_work); + flush_work(&rdev->scan_done_wk); cancel_work_sync(&rdev->conn_work); flush_work(&rdev->event_work); @@ -1522,6 +1583,66 @@ static struct pernet_operations cfg80211_pernet_ops = { .exit = cfg80211_pernet_exit, }; +void wiphy_work_queue(struct wiphy *wiphy, struct wiphy_work *work) +{ + struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy); + unsigned long flags; + + spin_lock_irqsave(&rdev->wiphy_work_lock, flags); + if (list_empty(&work->entry)) + list_add_tail(&work->entry, &rdev->wiphy_work_list); + spin_unlock_irqrestore(&rdev->wiphy_work_lock, flags); + + schedule_work(&rdev->wiphy_work); +} +EXPORT_SYMBOL_GPL(wiphy_work_queue); + +void wiphy_work_cancel(struct wiphy *wiphy, struct wiphy_work *work) +{ + struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy); + unsigned long flags; + + lockdep_assert_held(&wiphy->mtx); + + spin_lock_irqsave(&rdev->wiphy_work_lock, flags); + if (!list_empty(&work->entry)) + list_del_init(&work->entry); + spin_unlock_irqrestore(&rdev->wiphy_work_lock, flags); +} +EXPORT_SYMBOL_GPL(wiphy_work_cancel); + +void 
wiphy_delayed_work_timer(struct timer_list *t) +{ + struct wiphy_delayed_work *dwork = from_timer(dwork, t, timer); + + wiphy_work_queue(dwork->wiphy, &dwork->work); +} +EXPORT_SYMBOL(wiphy_delayed_work_timer); + +void wiphy_delayed_work_queue(struct wiphy *wiphy, + struct wiphy_delayed_work *dwork, + unsigned long delay) +{ + if (!delay) { + wiphy_work_queue(wiphy, &dwork->work); + return; + } + + dwork->wiphy = wiphy; + mod_timer(&dwork->timer, jiffies + delay); +} +EXPORT_SYMBOL_GPL(wiphy_delayed_work_queue); + +void wiphy_delayed_work_cancel(struct wiphy *wiphy, + struct wiphy_delayed_work *dwork) +{ + lockdep_assert_held(&wiphy->mtx); + + del_timer_sync(&dwork->timer); + wiphy_work_cancel(wiphy, &dwork->work); +} +EXPORT_SYMBOL_GPL(wiphy_delayed_work_cancel); + static int __init cfg80211_init(void) { int err; diff --git a/net/wireless/core.h b/net/wireless/core.h index 1720abf36f92..18d30f6fa7ca 100644 --- a/net/wireless/core.h +++ b/net/wireless/core.h @@ -103,6 +103,12 @@ struct cfg80211_registered_device { /* lock for all wdev lists */ spinlock_t mgmt_registrations_lock; + struct work_struct wiphy_work; + struct list_head wiphy_work_list; + /* protects the list above */ + spinlock_t wiphy_work_lock; + bool suspended; + /* must be last because of the way we do wiphy_priv(), * and it should at least be aligned to NETDEV_ALIGN */ struct wiphy wiphy __aligned(NETDEV_ALIGN); @@ -457,6 +463,7 @@ int cfg80211_change_iface(struct cfg80211_registered_device *rdev, struct net_device *dev, enum nl80211_iftype ntype, struct vif_params *params); void cfg80211_process_rdev_events(struct cfg80211_registered_device *rdev); +void cfg80211_process_wiphy_works(struct cfg80211_registered_device *rdev); void cfg80211_process_wdev_events(struct wireless_dev *wdev); bool cfg80211_does_bw_fit_range(const struct ieee80211_freq_range *freq_range, diff --git a/net/wireless/sysfs.c b/net/wireless/sysfs.c index 0c3f05c9be27..4d3b65803010 100644 --- a/net/wireless/sysfs.c +++ 
b/net/wireless/sysfs.c @@ -5,7 +5,7 @@ * * Copyright 2005-2006 Jiri Benc <jbenc@suse.cz> * Copyright 2006 Johannes Berg <johannes@sipsolutions.net> - * Copyright (C) 2020-2021 Intel Corporation + * Copyright (C) 2020-2021, 2023 Intel Corporation */ #include <linux/device.h> @@ -105,14 +105,18 @@ static int wiphy_suspend(struct device *dev) cfg80211_leave_all(rdev); cfg80211_process_rdev_events(rdev); } + cfg80211_process_wiphy_works(rdev); if (rdev->ops->suspend) ret = rdev_suspend(rdev, rdev->wiphy.wowlan_config); if (ret == 1) { /* Driver refuse to configure wowlan */ cfg80211_leave_all(rdev); cfg80211_process_rdev_events(rdev); + cfg80211_process_wiphy_works(rdev); ret = rdev_suspend(rdev, NULL); } + if (ret == 0) + rdev->suspended = true; } wiphy_unlock(&rdev->wiphy); rtnl_unlock(); @@ -132,6 +136,8 @@ static int wiphy_resume(struct device *dev) wiphy_lock(&rdev->wiphy); if (rdev->wiphy.registered && rdev->ops->resume) ret = rdev_resume(rdev); + rdev->suspended = false; + schedule_work(&rdev->wiphy_work); wiphy_unlock(&rdev->wiphy); if (ret) -- 2.53.0.rc2.2.g2258446484
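The locking idea in the patch above — every queued item runs with the wiphy mutex held, so cancellation under that same mutex only has to unlink the item and never has to wait — can be sketched outside the kernel. This is a toy Python model, not cfg80211 code; all names are illustrative:

```python
import threading

class WiphyLikeWorkQueue:
    """Toy model of the wiphy_work scheme: a big mutex is held while
    items run, and a second lock protects only the pending list."""
    def __init__(self):
        self.mtx = threading.Lock()        # plays the role of wiphy->mtx
        self.list_lock = threading.Lock()  # plays wiphy_work_lock
        self.pending = []

    def queue(self, fn):
        with self.list_lock:
            if fn not in self.pending:     # like the list_empty() check
                self.pending.append(fn)

    def cancel(self, fn):
        # Safe without waiting: either fn is still on the list (unlink
        # it here), or it already ran under self.mtx — and since the
        # caller holds self.mtx, fn cannot be running right now.
        assert self.mtx.locked(), "cancel must run under the big mutex"
        with self.list_lock:
            if fn in self.pending:
                self.pending.remove(fn)

    def run_one(self):
        with self.mtx:                     # like wiphy_lock() in the worker
            with self.list_lock:
                fn = self.pending.pop(0) if self.pending else None
            if fn:
                fn()

ran = []
q = WiphyLikeWorkQueue()
q.queue(lambda: ran.append("a"))
b = lambda: ran.append("b")
q.queue(b)
with q.mtx:
    q.cancel(b)    # unlinked under the mutex, no waiting needed
q.run_one()
q.run_one()
print(ran)         # only the first item executed
```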
From: Johannes Berg <johannes.berg@intel.com> [ Upstream commit 16114496d684a3df4ce09f7c6b7557a8b2922795 ] We'll need this later to convert other works that might be cancelled from here, so convert this one first. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Hanne-Lotta Mäenpää <hannelotta@gmail.com> --- net/mac80211/ibss.c | 8 ++++---- net/mac80211/ieee80211_i.h | 2 +- net/mac80211/iface.c | 10 +++++----- net/mac80211/mesh.c | 10 +++++----- net/mac80211/mesh_hwmp.c | 6 +++--- net/mac80211/mlme.c | 6 +++--- net/mac80211/ocb.c | 6 +++--- net/mac80211/rx.c | 2 +- net/mac80211/scan.c | 2 +- net/mac80211/status.c | 5 +++-- net/mac80211/util.c | 2 +- 11 files changed, 30 insertions(+), 29 deletions(-) diff --git a/net/mac80211/ibss.c b/net/mac80211/ibss.c index 48e0260f3424..ce927c16a915 100644 --- a/net/mac80211/ibss.c +++ b/net/mac80211/ibss.c @@ -746,7 +746,7 @@ static void ieee80211_csa_connection_drop_work(struct work_struct *work) skb_queue_purge(&sdata->skb_queue); /* trigger a scan to find another IBSS network to join */ - ieee80211_queue_work(&sdata->local->hw, &sdata->work); + wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work); sdata_unlock(sdata); } @@ -1245,7 +1245,7 @@ void ieee80211_ibss_rx_no_sta(struct ieee80211_sub_if_data *sdata, spin_lock(&ifibss->incomplete_lock); list_add(&sta->list, &ifibss->incomplete_stations); spin_unlock(&ifibss->incomplete_lock); - ieee80211_queue_work(&local->hw, &sdata->work); + wiphy_work_queue(local->hw.wiphy, &sdata->work); } static void ieee80211_ibss_sta_expire(struct ieee80211_sub_if_data *sdata) @@ -1726,7 +1726,7 @@ static void ieee80211_ibss_timer(struct timer_list *t) struct ieee80211_sub_if_data *sdata = from_timer(sdata, t, u.ibss.timer); - ieee80211_queue_work(&sdata->local->hw, &sdata->work); + wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work); } void ieee80211_ibss_setup_sdata(struct ieee80211_sub_if_data *sdata) @@ -1861,7 +1861,7 @@ int ieee80211_ibss_join(struct 
ieee80211_sub_if_data *sdata, sdata->needed_rx_chains = local->rx_chains; sdata->control_port_over_nl80211 = params->control_port_over_nl80211; - ieee80211_queue_work(&local->hw, &sdata->work); + wiphy_work_queue(local->hw.wiphy, &sdata->work); return 0; } diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h index 3b5350cfc0ee..8d6616f646e7 100644 --- a/net/mac80211/ieee80211_i.h +++ b/net/mac80211/ieee80211_i.h @@ -966,7 +966,7 @@ struct ieee80211_sub_if_data { /* used to reconfigure hardware SM PS */ struct work_struct recalc_smps; - struct work_struct work; + struct wiphy_work work; struct sk_buff_head skb_queue; struct sk_buff_head status_queue; diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c index e437bcadf4a2..eb7de2d455e1 100644 --- a/net/mac80211/iface.c +++ b/net/mac80211/iface.c @@ -43,7 +43,7 @@ * by either the RTNL, the iflist_mtx or RCU. */ -static void ieee80211_iface_work(struct work_struct *work); +static void ieee80211_iface_work(struct wiphy *wiphy, struct wiphy_work *work); bool __ieee80211_recalc_txpower(struct ieee80211_sub_if_data *sdata) { @@ -539,7 +539,7 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata, bool going_do RCU_INIT_POINTER(local->p2p_sdata, NULL); fallthrough; default: - cancel_work_sync(&sdata->work); + wiphy_work_cancel(sdata->local->hw.wiphy, &sdata->work); /* * When we get here, the interface is marked down. 
* Free the remaining keys, if there are any @@ -1005,7 +1005,7 @@ int ieee80211_add_virtual_monitor(struct ieee80211_local *local) skb_queue_head_init(&sdata->skb_queue); skb_queue_head_init(&sdata->status_queue); - INIT_WORK(&sdata->work, ieee80211_iface_work); + wiphy_work_init(&sdata->work, ieee80211_iface_work); return 0; } @@ -1487,7 +1487,7 @@ static void ieee80211_iface_process_status(struct ieee80211_sub_if_data *sdata, } } -static void ieee80211_iface_work(struct work_struct *work) +static void ieee80211_iface_work(struct wiphy *wiphy, struct wiphy_work *work) { struct ieee80211_sub_if_data *sdata = container_of(work, struct ieee80211_sub_if_data, work); @@ -1590,7 +1590,7 @@ static void ieee80211_setup_sdata(struct ieee80211_sub_if_data *sdata, skb_queue_head_init(&sdata->skb_queue); skb_queue_head_init(&sdata->status_queue); - INIT_WORK(&sdata->work, ieee80211_iface_work); + wiphy_work_init(&sdata->work, ieee80211_iface_work); INIT_WORK(&sdata->recalc_smps, ieee80211_recalc_smps_work); INIT_WORK(&sdata->csa_finalize_work, ieee80211_csa_finalize_work); INIT_WORK(&sdata->color_change_finalize_work, ieee80211_color_change_finalize_work); diff --git a/net/mac80211/mesh.c b/net/mac80211/mesh.c index 6202157f467b..2f888cbe6e2b 100644 --- a/net/mac80211/mesh.c +++ b/net/mac80211/mesh.c @@ -44,7 +44,7 @@ static void ieee80211_mesh_housekeeping_timer(struct timer_list *t) set_bit(MESH_WORK_HOUSEKEEPING, &ifmsh->wrkq_flags); - ieee80211_queue_work(&local->hw, &sdata->work); + wiphy_work_queue(local->hw.wiphy, &sdata->work); } /** @@ -642,7 +642,7 @@ static void ieee80211_mesh_path_timer(struct timer_list *t) struct ieee80211_sub_if_data *sdata = from_timer(sdata, t, u.mesh.mesh_path_timer); - ieee80211_queue_work(&sdata->local->hw, &sdata->work); + wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work); } static void ieee80211_mesh_path_root_timer(struct timer_list *t) @@ -653,7 +653,7 @@ static void ieee80211_mesh_path_root_timer(struct timer_list *t) 
set_bit(MESH_WORK_ROOT, &ifmsh->wrkq_flags); - ieee80211_queue_work(&sdata->local->hw, &sdata->work); + wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work); } void ieee80211_mesh_root_setup(struct ieee80211_if_mesh *ifmsh) @@ -1018,7 +1018,7 @@ void ieee80211_mbss_info_change_notify(struct ieee80211_sub_if_data *sdata, for_each_set_bit(bit, &bits, sizeof(changed) * BITS_PER_BYTE) set_bit(bit, &ifmsh->mbss_changed); set_bit(MESH_WORK_MBSS_CHANGED, &ifmsh->wrkq_flags); - ieee80211_queue_work(&sdata->local->hw, &sdata->work); + wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work); } int ieee80211_start_mesh(struct ieee80211_sub_if_data *sdata) @@ -1043,7 +1043,7 @@ int ieee80211_start_mesh(struct ieee80211_sub_if_data *sdata) ifmsh->sync_offset_clockdrift_max = 0; set_bit(MESH_WORK_HOUSEKEEPING, &ifmsh->wrkq_flags); ieee80211_mesh_root_setup(ifmsh); - ieee80211_queue_work(&local->hw, &sdata->work); + wiphy_work_queue(local->hw.wiphy, &sdata->work); sdata->vif.bss_conf.ht_operation_mode = ifmsh->mshcfg.ht_opmode; sdata->vif.bss_conf.enable_beacon = true; diff --git a/net/mac80211/mesh_hwmp.c b/net/mac80211/mesh_hwmp.c index 8bf238afb544..a3522b21803f 100644 --- a/net/mac80211/mesh_hwmp.c +++ b/net/mac80211/mesh_hwmp.c @@ -1,7 +1,7 @@ // SPDX-License-Identifier: GPL-2.0-only /* * Copyright (c) 2008, 2009 open80211s Ltd. 
- * Copyright (C) 2019, 2021 Intel Corporation + * Copyright (C) 2019, 2021-2023 Intel Corporation * Author: Luis Carlos Cobo <luisca@cozybit.com> */ @@ -1020,14 +1020,14 @@ static void mesh_queue_preq(struct mesh_path *mpath, u8 flags) spin_unlock_bh(&ifmsh->mesh_preq_queue_lock); if (time_after(jiffies, ifmsh->last_preq + min_preq_int_jiff(sdata))) - ieee80211_queue_work(&sdata->local->hw, &sdata->work); + wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work); else if (time_before(jiffies, ifmsh->last_preq)) { /* avoid long wait if did not send preqs for a long time * and jiffies wrapped around */ ifmsh->last_preq = jiffies - min_preq_int_jiff(sdata) - 1; - ieee80211_queue_work(&sdata->local->hw, &sdata->work); + wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work); } else mod_timer(&ifmsh->mesh_path_timer, ifmsh->last_preq + min_preq_int_jiff(sdata)); diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c index 6e86a23c647d..d147760e8389 100644 --- a/net/mac80211/mlme.c +++ b/net/mac80211/mlme.c @@ -2509,7 +2509,7 @@ void ieee80211_sta_tx_notify(struct ieee80211_sub_if_data *sdata, sdata->u.mgd.probe_send_count = 0; else sdata->u.mgd.nullfunc_failed = true; - ieee80211_queue_work(&sdata->local->hw, &sdata->work); + wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work); } static void ieee80211_mlme_send_probe_req(struct ieee80211_sub_if_data *sdata, @@ -4415,7 +4415,7 @@ static void ieee80211_sta_timer(struct timer_list *t) struct ieee80211_sub_if_data *sdata = from_timer(sdata, t, u.mgd.timer); - ieee80211_queue_work(&sdata->local->hw, &sdata->work); + wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work); } void ieee80211_sta_connection_lost(struct ieee80211_sub_if_data *sdata, @@ -4559,7 +4559,7 @@ void ieee80211_mgd_conn_tx_status(struct ieee80211_sub_if_data *sdata, sdata->u.mgd.status_acked = acked; sdata->u.mgd.status_received = true; - ieee80211_queue_work(&local->hw, &sdata->work); + wiphy_work_queue(local->hw.wiphy, &sdata->work); } void 
ieee80211_sta_work(struct ieee80211_sub_if_data *sdata) diff --git a/net/mac80211/ocb.c b/net/mac80211/ocb.c index 7c1a735b9eee..9713e53f11b1 100644 --- a/net/mac80211/ocb.c +++ b/net/mac80211/ocb.c @@ -80,7 +80,7 @@ void ieee80211_ocb_rx_no_sta(struct ieee80211_sub_if_data *sdata, spin_lock(&ifocb->incomplete_lock); list_add(&sta->list, &ifocb->incomplete_stations); spin_unlock(&ifocb->incomplete_lock); - ieee80211_queue_work(&local->hw, &sdata->work); + wiphy_work_queue(local->hw.wiphy, &sdata->work); } static struct sta_info *ieee80211_ocb_finish_sta(struct sta_info *sta) @@ -156,7 +156,7 @@ static void ieee80211_ocb_housekeeping_timer(struct timer_list *t) set_bit(OCB_WORK_HOUSEKEEPING, &ifocb->wrkq_flags); - ieee80211_queue_work(&local->hw, &sdata->work); + wiphy_work_queue(local->hw.wiphy, &sdata->work); } void ieee80211_ocb_setup_sdata(struct ieee80211_sub_if_data *sdata) @@ -196,7 +196,7 @@ int ieee80211_ocb_join(struct ieee80211_sub_if_data *sdata, ifocb->joined = true; set_bit(OCB_WORK_HOUSEKEEPING, &ifocb->wrkq_flags); - ieee80211_queue_work(&local->hw, &sdata->work); + wiphy_work_queue(local->hw.wiphy, &sdata->work); netif_carrier_on(sdata->dev); return 0; diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c index 1c1660160787..15933e9abc9b 100644 --- a/net/mac80211/rx.c +++ b/net/mac80211/rx.c @@ -219,7 +219,7 @@ static void __ieee80211_queue_skb_to_iface(struct ieee80211_sub_if_data *sdata, struct sk_buff *skb) { skb_queue_tail(&sdata->skb_queue, skb); - ieee80211_queue_work(&sdata->local->hw, &sdata->work); + wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work); if (sta) sta->rx_stats.packets++; } diff --git a/net/mac80211/scan.c b/net/mac80211/scan.c index 3bf3dd4bafa5..fd77c707e65c 100644 --- a/net/mac80211/scan.c +++ b/net/mac80211/scan.c @@ -498,7 +498,7 @@ static void __ieee80211_scan_completed(struct ieee80211_hw *hw, bool aborted) */ list_for_each_entry_rcu(sdata, &local->interfaces, list) { if (ieee80211_sdata_running(sdata)) - 
ieee80211_queue_work(&sdata->local->hw, &sdata->work); + wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work); } if (was_scanning) diff --git a/net/mac80211/status.c b/net/mac80211/status.c index f6f63a0b1b72..017ea2d2f36f 100644 --- a/net/mac80211/status.c +++ b/net/mac80211/status.c @@ -5,6 +5,7 @@ * Copyright 2006-2007 Jiri Benc <jbenc@suse.cz> * Copyright 2008-2010 Johannes Berg <johannes@sipsolutions.net> * Copyright 2013-2014 Intel Mobile Communications GmbH + * Copyright 2021-2023 Intel Corporation */ #include <linux/export.h> @@ -716,8 +717,8 @@ static void ieee80211_report_used_skb(struct ieee80211_local *local, if (qskb) { skb_queue_tail(&sdata->status_queue, qskb); - ieee80211_queue_work(&local->hw, - &sdata->work); + wiphy_work_queue(local->hw.wiphy, + &sdata->work); } } } else { diff --git a/net/mac80211/util.c b/net/mac80211/util.c index 07512f0d5576..5b1799dfa675 100644 --- a/net/mac80211/util.c +++ b/net/mac80211/util.c @@ -2679,7 +2679,7 @@ int ieee80211_reconfig(struct ieee80211_local *local) /* Requeue all works */ list_for_each_entry(sdata, &local->interfaces, list) - ieee80211_queue_work(&local->hw, &sdata->work); + wiphy_work_queue(local->hw.wiphy, &sdata->work); } ieee80211_wake_queues_by_reason(hw, IEEE80211_MAX_QUEUE_MAP, -- 2.53.0.rc2.2.g2258446484
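The wiphy_delayed_work helpers from patch 1/3 follow the same shape: the timer's only job is to feed the ordinary work queue, so cancelling a delayed item means stopping the timer and unlinking the item, still with no waiting. A rough stand-alone model using Python's threading.Timer (names illustrative, not kernel code):

```python
import threading

pending = []
list_lock = threading.Lock()
queued_event = threading.Event()

def work_queue(fn):
    """Counterpart of wiphy_work_queue(): just link the item."""
    with list_lock:
        if fn not in pending:
            pending.append(fn)
    queued_event.set()

def delayed_work_queue(fn, delay):
    """Counterpart of wiphy_delayed_work_queue(): a zero delay queues
    immediately; otherwise a timer queues the item later."""
    if not delay:
        work_queue(fn)
        return None
    timer = threading.Timer(delay, work_queue, args=(fn,))
    timer.start()
    return timer       # cancel = timer.cancel() plus unlinking fn

def job():
    pass

timer = delayed_work_queue(job, 0.01)
queued_event.wait(timeout=2)   # wait for the timer to queue the item
with list_lock:
    queued = list(pending)
print(job in queued)
```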
{ "author": "=?UTF-8?q?Hanne-Lotta=20M=C3=A4enp=C3=A4=C3=A4?= <hannelotta@gmail.com>", "date": "Mon, 2 Feb 2026 18:50:37 +0200", "thread_id": "20260202165038.215693-1-hannelotta@gmail.com.mbox.gz" }
lkml
[PATCH 5.15.y 1/3] wifi: cfg80211: add a work abstraction with special semantics
struct wiphy_work *wk; + + wk = list_first_entry(&rdev->wiphy_work_list, + struct wiphy_work, entry); + list_del_init(&wk->entry); + spin_unlock_irqrestore(&rdev->wiphy_work_lock, flags); + + wk->func(&rdev->wiphy, wk); + + spin_lock_irqsave(&rdev->wiphy_work_lock, flags); + if (WARN_ON(--runaway_limit == 0)) + INIT_LIST_HEAD(&rdev->wiphy_work_list); + } + spin_unlock_irqrestore(&rdev->wiphy_work_lock, flags); +} + void wiphy_unregister(struct wiphy *wiphy) { struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy); @@ -1040,9 +1096,14 @@ void wiphy_unregister(struct wiphy *wiphy) cfg80211_rdev_list_generation++; device_del(&rdev->wiphy.dev); + /* surely nothing is reachable now, clean up work */ + cfg80211_process_wiphy_works(rdev); wiphy_unlock(&rdev->wiphy); rtnl_unlock(); + /* this has nothing to do now but make sure it's gone */ + cancel_work_sync(&rdev->wiphy_work); + flush_work(&rdev->scan_done_wk); cancel_work_sync(&rdev->conn_work); flush_work(&rdev->event_work); @@ -1522,6 +1583,66 @@ static struct pernet_operations cfg80211_pernet_ops = { .exit = cfg80211_pernet_exit, }; +void wiphy_work_queue(struct wiphy *wiphy, struct wiphy_work *work) +{ + struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy); + unsigned long flags; + + spin_lock_irqsave(&rdev->wiphy_work_lock, flags); + if (list_empty(&work->entry)) + list_add_tail(&work->entry, &rdev->wiphy_work_list); + spin_unlock_irqrestore(&rdev->wiphy_work_lock, flags); + + schedule_work(&rdev->wiphy_work); +} +EXPORT_SYMBOL_GPL(wiphy_work_queue); + +void wiphy_work_cancel(struct wiphy *wiphy, struct wiphy_work *work) +{ + struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy); + unsigned long flags; + + lockdep_assert_held(&wiphy->mtx); + + spin_lock_irqsave(&rdev->wiphy_work_lock, flags); + if (!list_empty(&work->entry)) + list_del_init(&work->entry); + spin_unlock_irqrestore(&rdev->wiphy_work_lock, flags); +} +EXPORT_SYMBOL_GPL(wiphy_work_cancel); + +void 
wiphy_delayed_work_timer(struct timer_list *t) +{ + struct wiphy_delayed_work *dwork = from_timer(dwork, t, timer); + + wiphy_work_queue(dwork->wiphy, &dwork->work); +} +EXPORT_SYMBOL(wiphy_delayed_work_timer); + +void wiphy_delayed_work_queue(struct wiphy *wiphy, + struct wiphy_delayed_work *dwork, + unsigned long delay) +{ + if (!delay) { + wiphy_work_queue(wiphy, &dwork->work); + return; + } + + dwork->wiphy = wiphy; + mod_timer(&dwork->timer, jiffies + delay); +} +EXPORT_SYMBOL_GPL(wiphy_delayed_work_queue); + +void wiphy_delayed_work_cancel(struct wiphy *wiphy, + struct wiphy_delayed_work *dwork) +{ + lockdep_assert_held(&wiphy->mtx); + + del_timer_sync(&dwork->timer); + wiphy_work_cancel(wiphy, &dwork->work); +} +EXPORT_SYMBOL_GPL(wiphy_delayed_work_cancel); + static int __init cfg80211_init(void) { int err; diff --git a/net/wireless/core.h b/net/wireless/core.h index 1720abf36f92..18d30f6fa7ca 100644 --- a/net/wireless/core.h +++ b/net/wireless/core.h @@ -103,6 +103,12 @@ struct cfg80211_registered_device { /* lock for all wdev lists */ spinlock_t mgmt_registrations_lock; + struct work_struct wiphy_work; + struct list_head wiphy_work_list; + /* protects the list above */ + spinlock_t wiphy_work_lock; + bool suspended; + /* must be last because of the way we do wiphy_priv(), * and it should at least be aligned to NETDEV_ALIGN */ struct wiphy wiphy __aligned(NETDEV_ALIGN); @@ -457,6 +463,7 @@ int cfg80211_change_iface(struct cfg80211_registered_device *rdev, struct net_device *dev, enum nl80211_iftype ntype, struct vif_params *params); void cfg80211_process_rdev_events(struct cfg80211_registered_device *rdev); +void cfg80211_process_wiphy_works(struct cfg80211_registered_device *rdev); void cfg80211_process_wdev_events(struct wireless_dev *wdev); bool cfg80211_does_bw_fit_range(const struct ieee80211_freq_range *freq_range, diff --git a/net/wireless/sysfs.c b/net/wireless/sysfs.c index 0c3f05c9be27..4d3b65803010 100644 --- a/net/wireless/sysfs.c +++ 
b/net/wireless/sysfs.c @@ -5,7 +5,7 @@ * * Copyright 2005-2006 Jiri Benc <jbenc@suse.cz> * Copyright 2006 Johannes Berg <johannes@sipsolutions.net> - * Copyright (C) 2020-2021 Intel Corporation + * Copyright (C) 2020-2021, 2023 Intel Corporation */ #include <linux/device.h> @@ -105,14 +105,18 @@ static int wiphy_suspend(struct device *dev) cfg80211_leave_all(rdev); cfg80211_process_rdev_events(rdev); } + cfg80211_process_wiphy_works(rdev); if (rdev->ops->suspend) ret = rdev_suspend(rdev, rdev->wiphy.wowlan_config); if (ret == 1) { /* Driver refuse to configure wowlan */ cfg80211_leave_all(rdev); cfg80211_process_rdev_events(rdev); + cfg80211_process_wiphy_works(rdev); ret = rdev_suspend(rdev, NULL); } + if (ret == 0) + rdev->suspended = true; } wiphy_unlock(&rdev->wiphy); rtnl_unlock(); @@ -132,6 +136,8 @@ static int wiphy_resume(struct device *dev) wiphy_lock(&rdev->wiphy); if (rdev->wiphy.registered && rdev->ops->resume) ret = rdev_resume(rdev); + rdev->suspended = false; + schedule_work(&rdev->wiphy_work); wiphy_unlock(&rdev->wiphy); if (ret) -- 2.53.0.rc2.2.g2258446484
From: Johannes Berg <johannes.berg@intel.com> [ Upstream commit 777b26002b73127e81643d9286fadf3d41e0e477 ] Again, to have the wiphy locked for it. Reviewed-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com> [ Summary of conflict resolutions: - In mlme.c, move only tdls_peer_del_work to wiphy work, and none of the other works ] Signed-off-by: Hanne-Lotta Mäenpää <hannelotta@gmail.com> --- net/mac80211/ieee80211_i.h | 4 ++-- net/mac80211/mlme.c | 7 ++++--- net/mac80211/tdls.c | 11 ++++++----- 3 files changed, 12 insertions(+), 10 deletions(-) diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h index 8d6616f646e7..306359d43571 100644 --- a/net/mac80211/ieee80211_i.h +++ b/net/mac80211/ieee80211_i.h @@ -542,7 +542,7 @@ struct ieee80211_if_managed { /* TDLS support */ u8 tdls_peer[ETH_ALEN] __aligned(2); - struct delayed_work tdls_peer_del_work; + struct wiphy_delayed_work tdls_peer_del_work; struct sk_buff *orig_teardown_skb; /* The original teardown skb */ struct sk_buff *teardown_skb; /* A copy to send through the AP */ spinlock_t teardown_lock; /* To lock changing teardown_skb */ @@ -2494,7 +2494,7 @@ int ieee80211_tdls_mgmt(struct wiphy *wiphy, struct net_device *dev, size_t extra_ies_len); int ieee80211_tdls_oper(struct wiphy *wiphy, struct net_device *dev, const u8 *peer, enum nl80211_tdls_operation oper); -void ieee80211_tdls_peer_del_work(struct work_struct *wk); +void ieee80211_tdls_peer_del_work(struct wiphy *wiphy, struct wiphy_work *wk); int ieee80211_tdls_channel_switch(struct wiphy *wiphy, struct net_device *dev, const u8 *addr, u8 oper_class, struct cfg80211_chan_def *chandef); diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c index d147760e8389..25468d5e874a 100644 --- a/net/mac80211/mlme.c +++ b/net/mac80211/mlme.c @@ -4890,8 +4890,8 @@ void ieee80211_sta_setup_sdata(struct ieee80211_sub_if_data *sdata) INIT_WORK(&ifmgd->csa_connection_drop_work, 
ieee80211_csa_connection_drop_work); INIT_WORK(&ifmgd->request_smps_work, ieee80211_request_smps_mgd_work); - INIT_DELAYED_WORK(&ifmgd->tdls_peer_del_work, - ieee80211_tdls_peer_del_work); + wiphy_delayed_work_init(&ifmgd->tdls_peer_del_work, + ieee80211_tdls_peer_del_work); timer_setup(&ifmgd->timer, ieee80211_sta_timer, 0); timer_setup(&ifmgd->bcn_mon_timer, ieee80211_sta_bcn_mon_timer, 0); timer_setup(&ifmgd->conn_mon_timer, ieee80211_sta_conn_mon_timer, 0); @@ -6010,7 +6010,8 @@ void ieee80211_mgd_stop(struct ieee80211_sub_if_data *sdata) cancel_work_sync(&ifmgd->request_smps_work); cancel_work_sync(&ifmgd->csa_connection_drop_work); cancel_work_sync(&ifmgd->chswitch_work); - cancel_delayed_work_sync(&ifmgd->tdls_peer_del_work); + wiphy_delayed_work_cancel(sdata->local->hw.wiphy, + &ifmgd->tdls_peer_del_work); sdata_lock(sdata); if (ifmgd->assoc_data) { diff --git a/net/mac80211/tdls.c b/net/mac80211/tdls.c index 137be9ec94af..c2d7479c119a 100644 --- a/net/mac80211/tdls.c +++ b/net/mac80211/tdls.c @@ -21,7 +21,7 @@ /* give usermode some time for retries in setting up the TDLS session */ #define TDLS_PEER_SETUP_TIMEOUT (15 * HZ) -void ieee80211_tdls_peer_del_work(struct work_struct *wk) +void ieee80211_tdls_peer_del_work(struct wiphy *wiphy, struct wiphy_work *wk) { struct ieee80211_sub_if_data *sdata; struct ieee80211_local *local; @@ -1126,9 +1126,9 @@ ieee80211_tdls_mgmt_setup(struct wiphy *wiphy, struct net_device *dev, return ret; } - ieee80211_queue_delayed_work(&sdata->local->hw, - &sdata->u.mgd.tdls_peer_del_work, - TDLS_PEER_SETUP_TIMEOUT); + wiphy_delayed_work_queue(sdata->local->hw.wiphy, + &sdata->u.mgd.tdls_peer_del_work, + TDLS_PEER_SETUP_TIMEOUT); return 0; out_unlock: @@ -1425,7 +1425,8 @@ int ieee80211_tdls_oper(struct wiphy *wiphy, struct net_device *dev, } if (ret == 0 && ether_addr_equal(sdata->u.mgd.tdls_peer, peer)) { - cancel_delayed_work(&sdata->u.mgd.tdls_peer_del_work); + wiphy_delayed_work_cancel(sdata->local->hw.wiphy, + 
&sdata->u.mgd.tdls_peer_del_work); eth_zero_addr(sdata->u.mgd.tdls_peer); } -- 2.53.0.rc2.2.g2258446484
{ "author": "=?UTF-8?q?Hanne-Lotta=20M=C3=A4enp=C3=A4=C3=A4?= <hannelotta@gmail.com>", "date": "Mon, 2 Feb 2026 18:50:38 +0200", "thread_id": "20260202165038.215693-1-hannelotta@gmail.com.mbox.gz" }
lkml
[PATCH v2 2/6] hrtimer: Optimize __hrtimer_start_range_ns()
Much like hrtimer_reprogram(), skip programming if the cpu_base is running the hrtimer interrupt. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> --- kernel/time/hrtimer.c | 8 ++++++++ 1 file changed, 8 insertions(+) --- a/kernel/time/hrtimer.c +++ b/kernel/time/hrtimer.c @@ -1261,6 +1261,14 @@ static int __hrtimer_start_range_ns(stru } first = enqueue_hrtimer(timer, new_base, mode); + + /* + * If the hrtimer interrupt is running, then it will reevaluate the + * clock bases and reprogram the clock event device. + */ + if (new_base->cpu_base->in_hrtirq) + return 0; + if (!force_local) { /* * If the current CPU base is online, then the timer is
Currently hrtimer_interrupt() runs expired timers, which can re-arm themselves, after which it computes the next expiration time and re-programs the hardware. However, something like HRTICK, a highres timer driving preemption, cannot re-arm itself at the point of running, since the next task has not been determined yet. The schedule() in the interrupt return path will switch to the next task, which then causes a new hrtimer to be programmed. This then results in reprogramming the hardware at least twice, once after running the timers, and once upon selecting the new task. Notably, *both* events happen in the interrupt. By pushing the hrtimer reprogram all the way into the interrupt return path, it runs after schedule() and this double reprogram can be avoided. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> --- include/asm-generic/thread_info_tif.h | 5 ++++- include/linux/hrtimer.h | 17 +++++++++++++++++ include/linux/irq-entry-common.h | 2 ++ kernel/entry/common.c | 13 +++++++++++++ kernel/sched/core.c | 10 ++++++++++ kernel/time/hrtimer.c | 28 ++++++++++++++++++++++++---- 6 files changed, 70 insertions(+), 5 deletions(-) --- a/include/asm-generic/thread_info_tif.h +++ b/include/asm-generic/thread_info_tif.h @@ -41,11 +41,14 @@ #define _TIF_PATCH_PENDING BIT(TIF_PATCH_PENDING) #ifdef HAVE_TIF_RESTORE_SIGMASK -# define TIF_RESTORE_SIGMASK 10 // Restore signal mask in do_signal() */ +# define TIF_RESTORE_SIGMASK 10 // Restore signal mask in do_signal() # define _TIF_RESTORE_SIGMASK BIT(TIF_RESTORE_SIGMASK) #endif #define TIF_RSEQ 11 // Run RSEQ fast path #define _TIF_RSEQ BIT(TIF_RSEQ) +#define TIF_HRTIMER_REARM 12 // re-arm the timer +#define _TIF_HRTIMER_REARM BIT(TIF_HRTIMER_REARM) + #endif /* _ASM_GENERIC_THREAD_INFO_TIF_H_ */ --- a/include/linux/hrtimer.h +++ b/include/linux/hrtimer.h @@ -175,10 +175,27 @@ extern void hrtimer_interrupt(struct clo extern unsigned int hrtimer_resolution; +#ifdef TIF_HRTIMER_REARM +extern void _hrtimer_rearm(void); +/* + 
* This is to be called on all irqentry_exit() paths that will enable + * interrupts; as well as in the context switch path before switch_to(). + */ +static inline void hrtimer_rearm(void) +{ + if (test_thread_flag(TIF_HRTIMER_REARM)) + _hrtimer_rearm(); +} +#else +static inline void hrtimer_rearm(void) { } +#endif /* TIF_HRTIMER_REARM */ + #else #define hrtimer_resolution (unsigned int)LOW_RES_NSEC +static inline void hrtimer_rearm(void) { } + #endif static inline ktime_t --- a/include/linux/irq-entry-common.h +++ b/include/linux/irq-entry-common.h @@ -224,6 +224,8 @@ static __always_inline void __exit_to_us ti_work = read_thread_flags(); if (unlikely(ti_work & EXIT_TO_USER_MODE_WORK)) ti_work = exit_to_user_mode_loop(regs, ti_work); + else + hrtimer_rearm(); arch_exit_to_user_mode_prepare(regs, ti_work); } --- a/kernel/entry/common.c +++ b/kernel/entry/common.c @@ -7,6 +7,7 @@ #include <linux/kmsan.h> #include <linux/livepatch.h> #include <linux/tick.h> +#include <linux/hrtimer.h> /* Workaround to allow gradual conversion of architecture code */ void __weak arch_do_signal_or_restart(struct pt_regs *regs) { } @@ -26,6 +27,16 @@ static __always_inline unsigned long __e */ while (ti_work & EXIT_TO_USER_MODE_WORK_LOOP) { + /* + * If hrtimers need re-arming, do so before enabling IRQs, + * except when a reschedule is needed, in that case schedule() + * will do this.
+ */ + if ((ti_work & (_TIF_NEED_RESCHED | + _TIF_NEED_RESCHED_LAZY | + _TIF_HRTIMER_REARM)) == _TIF_HRTIMER_REARM) + hrtimer_rearm(); + local_irq_enable_exit_to_user(ti_work); if (ti_work & (_TIF_NEED_RESCHED | _TIF_NEED_RESCHED_LAZY)) @@ -202,6 +213,7 @@ noinstr void irqentry_exit(struct pt_reg */ if (state.exit_rcu) { instrumentation_begin(); + hrtimer_rearm(); /* Tell the tracer that IRET will enable interrupts */ trace_hardirqs_on_prepare(); lockdep_hardirqs_on_prepare(); @@ -215,6 +227,7 @@ noinstr void irqentry_exit(struct pt_reg if (IS_ENABLED(CONFIG_PREEMPTION)) irqentry_exit_cond_resched(); + hrtimer_rearm(); /* Covers both tracing and lockdep */ trace_hardirqs_on(); instrumentation_end(); --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -6814,6 +6814,16 @@ static void __sched notrace __schedule(i keep_resched: rq->last_seen_need_resched_ns = 0; + /* + * Notably, this must be called after pick_next_task() but before + * switch_to(), since the new task need not be on the return from + * interrupt path. Additionally, exit_to_user_mode_loop() relies on + * any schedule() call to imply this call, so do it unconditionally. + * + * We've just cleared TIF_NEED_RESCHED, TIF word should be in cache. + */ + hrtimer_rearm(); + is_switch = prev != next; if (likely(is_switch)) { rq->nr_switches++; --- a/kernel/time/hrtimer.c +++ b/kernel/time/hrtimer.c @@ -1892,10 +1892,9 @@ static __latent_entropy void hrtimer_run * Very similar to hrtimer_force_reprogram(), except it deals with * in_hrirq and hang_detected. 
*/ -static void __hrtimer_rearm(struct hrtimer_cpu_base *cpu_base, ktime_t now) +static void __hrtimer_rearm(struct hrtimer_cpu_base *cpu_base, + ktime_t now, ktime_t expires_next) { - ktime_t expires_next = hrtimer_update_next_event(cpu_base); - cpu_base->expires_next = expires_next; cpu_base->in_hrtirq = 0; @@ -1970,9 +1969,30 @@ void hrtimer_interrupt(struct clock_even cpu_base->hang_detected = 1; } - __hrtimer_rearm(cpu_base, now); +#ifdef TIF_HRTIMER_REARM + set_thread_flag(TIF_HRTIMER_REARM); +#else + __hrtimer_rearm(cpu_base, now, expires_next); +#endif raw_spin_unlock_irqrestore(&cpu_base->lock, flags); } + +#ifdef TIF_HRTIMER_REARM +void _hrtimer_rearm(void) +{ + struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases); + ktime_t now, expires_next; + + lockdep_assert_irqs_disabled(); + + scoped_guard (raw_spinlock, &cpu_base->lock) { + now = hrtimer_update_base(cpu_base); + expires_next = hrtimer_update_next_event(cpu_base); + __hrtimer_rearm(cpu_base, now, expires_next); + clear_thread_flag(TIF_HRTIMER_REARM); + } +} +#endif /* TIF_HRTIMER_REARM */ #endif /* !CONFIG_HIGH_RES_TIMERS */ /*
{ "author": "Peter Zijlstra <peterz@infradead.org>", "date": "Wed, 21 Jan 2026 17:20:15 +0100", "thread_id": "20260202163355.GI1395266@noisy.programming.kicks-ass.net.mbox.gz" }
Hi! At long last a new version of the hrtick rework! The previous version had a mysterious deadlock which has been resolved. So far (weeks) the 0-day robot has not found more issues with these patches. These patches aim to reduce the hrtick overhead to such an extent that it can be default enabled, decoupling the preemption behaviour from CONFIG_HZ and leaving only load-balancing and timekeeping dependent on HZ. Some (limited) performance runs from 0-day have also not found any regressions from enabling HRTICK, but it has not run the full suite yet (AFAIU). Patches also at: git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git sched/hrtick
{ "author": "Peter Zijlstra <peterz@infradead.org>", "date": "Wed, 21 Jan 2026 17:20:10 +0100", "thread_id": "20260202163355.GI1395266@noisy.programming.kicks-ass.net.mbox.gz" }
Upon schedule() HRTICK will cancel the current timer, pick the next task and reprogram the timer. When schedule() consistently triggers due to blocking conditions instead of the timer, this leads to endless reprogramming without ever firing. Mitigate this with a new hrtimer mode: fuzzy (not really happy with that name); this mode does two things: - skip reprogramming the hardware on timer remove; - skip reprogramming the hardware when the new timer is after cpu_base->expires_next Both things are already possible: - removing a remote timer will leave the hardware programmed and cause a spurious interrupt. - this remote CPU adding a timer can skip the reprogramming when the timer's expiration is after the (spurious) expiration. This new timer mode simply causes more of this 'fuzzy' behaviour; it causes a few spurious interrupts, but similarly avoids endlessly reprogramming the timer. This makes the HRTICK match the NO_HRTICK hackbench runs -- the case where a task never runs until its slice is complete but always goes to sleep early.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> --- include/linux/hrtimer.h | 1 + include/linux/hrtimer_types.h | 1 + kernel/sched/core.c | 3 ++- kernel/time/hrtimer.c | 16 +++++++++++++++- 4 files changed, 19 insertions(+), 2 deletions(-) --- a/include/linux/hrtimer.h +++ b/include/linux/hrtimer.h @@ -38,6 +38,7 @@ enum hrtimer_mode { HRTIMER_MODE_PINNED = 0x02, HRTIMER_MODE_SOFT = 0x04, HRTIMER_MODE_HARD = 0x08, + HRTIMER_MODE_FUZZY = 0x10, HRTIMER_MODE_ABS_PINNED = HRTIMER_MODE_ABS | HRTIMER_MODE_PINNED, HRTIMER_MODE_REL_PINNED = HRTIMER_MODE_REL | HRTIMER_MODE_PINNED, --- a/include/linux/hrtimer_types.h +++ b/include/linux/hrtimer_types.h @@ -45,6 +45,7 @@ struct hrtimer { u8 is_rel; u8 is_soft; u8 is_hard; + u8 is_fuzzy; }; #endif /* _LINUX_HRTIMER_TYPES_H */ --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -928,7 +928,8 @@ void hrtick_start(struct rq *rq, u64 del static void hrtick_rq_init(struct rq *rq) { INIT_CSD(&rq->hrtick_csd, __hrtick_start, rq); - hrtimer_setup(&rq->hrtick_timer, hrtick, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD); + hrtimer_setup(&rq->hrtick_timer, hrtick, CLOCK_MONOTONIC, + HRTIMER_MODE_REL_HARD | HRTIMER_MODE_FUZZY); } #else /* !CONFIG_SCHED_HRTICK: */ static inline void hrtick_clear(struct rq *rq) --- a/kernel/time/hrtimer.c +++ b/kernel/time/hrtimer.c @@ -1122,7 +1122,7 @@ static void __remove_hrtimer(struct hrti * an superfluous call to hrtimer_force_reprogram() on the * remote cpu later on if the same timer gets enqueued again. */ - if (reprogram && timer == cpu_base->next_timer) + if (!timer->is_fuzzy && reprogram && timer == cpu_base->next_timer) hrtimer_force_reprogram(cpu_base, 1); } @@ -1269,6 +1269,19 @@ static int __hrtimer_start_range_ns(stru if (new_base->cpu_base->in_hrtirq) return 0; + if (timer->is_fuzzy) { + /* + * XXX fuzzy implies pinned! not sure how to deal with + * retrigger_next_event() for the !local case. 
+ */ + WARN_ON_ONCE(!(mode & HRTIMER_MODE_PINNED)); + /* + * Notably, by going into hrtimer_reprogram(), it will + * not reprogram if cpu_base->expires_next is earlier. + */ + return first; + } + if (!force_local) { /* * If the current CPU base is online, then the timer is @@ -1645,6 +1658,7 @@ static void __hrtimer_setup(struct hrtim base += hrtimer_clockid_to_base(clock_id); timer->is_soft = softtimer; timer->is_hard = !!(mode & HRTIMER_MODE_HARD); + timer->is_fuzzy = !!(mode & HRTIMER_MODE_FUZZY); timer->base = &cpu_base->clock_base[base]; timerqueue_init(&timer->node);
{ "author": "Peter Zijlstra <peterz@infradead.org>", "date": "Wed, 21 Jan 2026 17:20:13 +0100", "thread_id": "20260202163355.GI1395266@noisy.programming.kicks-ass.net.mbox.gz" }
Rework hrtimer_interrupt() such that reprogramming is split out into an independent function at the end of the interrupt. This prepares for reprogramming getting delayed beyond the end of hrtimer_interrupt(). Notably, this changes the hang handling to always wait 100ms instead of trying to keep it proportional to the actual delay. This simplifies the state, also this really shouldn't be happening. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> --- kernel/time/hrtimer.c | 87 ++++++++++++++++++++++---------------------------- 1 file changed, 39 insertions(+), 48 deletions(-) --- a/kernel/time/hrtimer.c +++ b/kernel/time/hrtimer.c @@ -1889,6 +1889,29 @@ static __latent_entropy void hrtimer_run #ifdef CONFIG_HIGH_RES_TIMERS /* + * Very similar to hrtimer_force_reprogram(), except it deals with + * in_hrirq and hang_detected. + */ +static void __hrtimer_rearm(struct hrtimer_cpu_base *cpu_base, ktime_t now) +{ + ktime_t expires_next = hrtimer_update_next_event(cpu_base); + + cpu_base->expires_next = expires_next; + cpu_base->in_hrtirq = 0; + + if (unlikely(cpu_base->hang_detected)) { + /* + * Give the system a chance to do something else than looping + * on hrtimer interrupts. + */ + expires_next = ktime_add_ns(now, 100 * NSEC_PER_MSEC); + cpu_base->hang_detected = 0; + } + + tick_program_event(expires_next, 1); +} + +/* * High resolution timer interrupt * Called with interrupts disabled */ @@ -1924,63 +1947,31 @@ void hrtimer_interrupt(struct clock_even __hrtimer_run_queues(cpu_base, now, flags, HRTIMER_ACTIVE_HARD); - /* Reevaluate the clock bases for the [soft] next expiry */ - expires_next = hrtimer_update_next_event(cpu_base); - /* - * Store the new expiry value so the migration code can verify - * against it. - */ - cpu_base->expires_next = expires_next; - cpu_base->in_hrtirq = 0; - raw_spin_unlock_irqrestore(&cpu_base->lock, flags); - - /* Reprogramming necessary ? 
*/ - if (!tick_program_event(expires_next, 0)) { - cpu_base->hang_detected = 0; - return; - } - /* * The next timer was already expired due to: * - tracing * - long lasting callbacks * - being scheduled away when running in a VM * - * We need to prevent that we loop forever in the hrtimer - * interrupt routine. We give it 3 attempts to avoid - * overreacting on some spurious event. - * - * Acquire base lock for updating the offsets and retrieving - * the current time. + * We need to prevent that we loop forever in the hrtimer interrupt + * routine. We give it 3 attempts to avoid overreacting on some + * spurious event. */ - raw_spin_lock_irqsave(&cpu_base->lock, flags); + expires_next = hrtimer_update_next_event(cpu_base); now = hrtimer_update_base(cpu_base); - cpu_base->nr_retries++; - if (++retries < 3) - goto retry; - /* - * Give the system a chance to do something else than looping - * here. We stored the entry time, so we know exactly how long - * we spent here. We schedule the next event this amount of - * time away. - */ - cpu_base->nr_hangs++; - cpu_base->hang_detected = 1; - raw_spin_unlock_irqrestore(&cpu_base->lock, flags); + if (expires_next < now) { + if (++retries < 3) + goto retry; + + delta = ktime_sub(now, entry_time); + cpu_base->max_hang_time = max_t(unsigned int, + cpu_base->max_hang_time, delta); + cpu_base->nr_hangs++; + cpu_base->hang_detected = 1; + } - delta = ktime_sub(now, entry_time); - if ((unsigned int)delta > cpu_base->max_hang_time) - cpu_base->max_hang_time = (unsigned int) delta; - /* - * Limit it to a sensible value as we enforce a longer - * delay. Give the CPU at least 100ms to catch up. 
- */ - if (delta > 100 * NSEC_PER_MSEC) - expires_next = ktime_add_ns(now, 100 * NSEC_PER_MSEC); - else - expires_next = ktime_add(now, delta); - tick_program_event(expires_next, 1); - pr_warn_once("hrtimer: interrupt took %llu ns\n", ktime_to_ns(delta)); + __hrtimer_rearm(cpu_base, now); + raw_spin_unlock_irqrestore(&cpu_base->lock, flags); } #endif /* !CONFIG_HIGH_RES_TIMERS */
{ "author": "Peter Zijlstra <peterz@infradead.org>", "date": "Wed, 21 Jan 2026 17:20:14 +0100", "thread_id": "20260202163355.GI1395266@noisy.programming.kicks-ass.net.mbox.gz" }
The nominal duration for an EEVDF task to run is until its deadline, at which point the deadline is moved ahead and a new task selection is done. Try and predict the time 'lost' to higher scheduling classes. Since this is an estimate, the timer can be either early or late. In case it is early, task_tick_fair() will take the !need_resched() path and restart the timer. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> --- kernel/sched/fair.c | 55 +++++++++++++++++++++++++++++----------------------- 1 file changed, 31 insertions(+), 24 deletions(-) --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -5511,7 +5511,7 @@ static void put_prev_entity(struct cfs_r } static void -entity_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr, int queued) +entity_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr) { /* * Update run-time statistics of the 'current'. @@ -5523,17 +5523,6 @@ entity_tick(struct cfs_rq *cfs_rq, struc */ update_load_avg(cfs_rq, curr, UPDATE_TG); update_cfs_group(curr); - -#ifdef CONFIG_SCHED_HRTICK - /* - * queued ticks are scheduled to match the slice, so don't bother - * validating it and just reschedule. 
- */ - if (queued) { - resched_curr_lazy(rq_of(cfs_rq)); - return; - } -#endif } @@ -6735,21 +6724,39 @@ static inline void sched_fair_update_sto static void hrtick_start_fair(struct rq *rq, struct task_struct *p) { struct sched_entity *se = &p->se; + unsigned long scale = 1024; + unsigned long util = 0; + u64 vdelta; + u64 delta; WARN_ON_ONCE(task_rq(p) != rq); - if (rq->cfs.h_nr_queued > 1) { - u64 ran = se->sum_exec_runtime - se->prev_sum_exec_runtime; - u64 slice = se->slice; - s64 delta = slice - ran; - - if (delta < 0) { - if (task_current_donor(rq, p)) - resched_curr(rq); - return; - } - hrtick_start(rq, delta); + if (rq->cfs.h_nr_queued <= 1) + return; + + /* + * Compute time until virtual deadline + */ + vdelta = se->deadline - se->vruntime; + if ((s64)vdelta < 0) { + if (task_current_donor(rq, p)) + resched_curr(rq); + return; + } + delta = (se->load.weight * vdelta) / NICE_0_LOAD; + + /* + * Correct for instantaneous load of other classes. + */ + util += cpu_util_dl(rq); + util += cpu_util_rt(rq); + util += cpu_util_irq(rq); + if (util && util < 1024) { + scale *= 1024; + scale /= (1024 - util); } + + hrtick_start(rq, (scale * delta) / 1024); } /* @@ -13373,7 +13380,7 @@ static void task_tick_fair(struct rq *rq for_each_sched_entity(se) { cfs_rq = cfs_rq_of(se); - entity_tick(cfs_rq, se, queued); + entity_tick(cfs_rq, se); } if (queued) {
{ "author": "Peter Zijlstra <peterz@infradead.org>", "date": "Wed, 21 Jan 2026 17:20:11 +0100", "thread_id": "20260202163355.GI1395266@noisy.programming.kicks-ass.net.mbox.gz" }
lkml
[PATCH v2 2/6] hrtimer: Optimize __hrtimer_start_range_ns()
... for generic entry architectures. This decouples preemption from CONFIG_HZ, leaving only the periodic load-balancer and various accounting things relying on the tick.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/sched/features.h | 5 +++++
 1 file changed, 5 insertions(+)

--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -63,8 +63,13 @@ SCHED_FEAT(DELAY_ZERO, true)
  */
 SCHED_FEAT(WAKEUP_PREEMPTION, true)
 
+#ifdef TIF_HRTIMER_REARM
+SCHED_FEAT(HRTICK, true)
+SCHED_FEAT(HRTICK_DL, true)
+#else
 SCHED_FEAT(HRTICK, false)
 SCHED_FEAT(HRTICK_DL, false)
+#endif
 
 /*
  * Decrement CPU capacity based on time not spent running tasks
{ "author": "Peter Zijlstra <peterz@infradead.org>", "date": "Wed, 21 Jan 2026 17:20:16 +0100", "thread_id": "20260202163355.GI1395266@noisy.programming.kicks-ass.net.mbox.gz" }
lkml
[PATCH v2 2/6] hrtimer: Optimize __hrtimer_start_range_ns()
Hi Peter,

On Wed, Jan 21, 2026 at 05:20:16PM +0100 Peter Zijlstra wrote:

I may be missing something, but the title of this patch and the above code do not seem to match.

Cheers,
Phil
--
{ "author": "Phil Auld <pauld@redhat.com>", "date": "Wed, 21 Jan 2026 17:24:44 -0500", "thread_id": "20260202163355.GI1395266@noisy.programming.kicks-ass.net.mbox.gz" }
lkml
[PATCH v2 2/6] hrtimer: Optimize __hrtimer_start_range_ns()
Hello, On 21/01/26 17:20, Peter Zijlstra wrote: ... Nit.. guess we don't fear overflow since vdelta should be bounded anyway. Reviewed-by: Juri Lelli <juri.lelli@redhat.com> Thanks, Juri
{ "author": "Juri Lelli <juri.lelli@redhat.com>", "date": "Thu, 22 Jan 2026 11:53:34 +0100", "thread_id": "20260202163355.GI1395266@noisy.programming.kicks-ass.net.mbox.gz" }
lkml
[PATCH v2 2/6] hrtimer: Optimize __hrtimer_start_range_ns()
Hello, On 21/01/26 17:20, Peter Zijlstra wrote: Reviewed-by: Juri Lelli <juri.lelli@redhat.com> Thanks, Juri
{ "author": "Juri Lelli <juri.lelli@redhat.com>", "date": "Thu, 22 Jan 2026 12:00:14 +0100", "thread_id": "20260202163355.GI1395266@noisy.programming.kicks-ass.net.mbox.gz" }
lkml
[PATCH v2 2/6] hrtimer: Optimize __hrtimer_start_range_ns()
On Wed, Jan 21, 2026 at 05:24:44PM -0500, Phil Auld wrote:

> Arguably this should be CONFIG_GENERIC_ENTRY I suppose

You mean it only default enables it for a subset of architectures?
{ "author": "Peter Zijlstra <peterz@infradead.org>", "date": "Thu, 22 Jan 2026 12:40:54 +0100", "thread_id": "20260202163355.GI1395266@noisy.programming.kicks-ass.net.mbox.gz" }
lkml
[PATCH v2 2/6] hrtimer: Optimize __hrtimer_start_range_ns()
On Thu, Jan 22, 2026 at 12:40:54PM +0100 Peter Zijlstra wrote: Nope, I mean I can't read... nevermind. Cheers, Phil --
{ "author": "Phil Auld <pauld@redhat.com>", "date": "Thu, 22 Jan 2026 07:31:17 -0500", "thread_id": "20260202163355.GI1395266@noisy.programming.kicks-ass.net.mbox.gz" }
lkml
[PATCH v2 2/6] hrtimer: Optimize __hrtimer_start_range_ns()
Hello, On 21/01/26 17:20, Peter Zijlstra wrote: Does the more common (lazier :) 'lazy' work better? ... Not sure either, but since it's improving things for local already, maybe it's an acceptable first step? Reviewed-by: Juri Lelli <juri.lelli@redhat.com> Thanks, Juri
{ "author": "Juri Lelli <juri.lelli@redhat.com>", "date": "Thu, 22 Jan 2026 14:12:28 +0100", "thread_id": "20260202163355.GI1395266@noisy.programming.kicks-ass.net.mbox.gz" }
lkml
[PATCH v2 2/6] hrtimer: Optimize __hrtimer_start_range_ns()
On Thu, 22 Jan 2026 14:12:28 +0100 Juri Lelli <juri.lelli@redhat.com> wrote: I don't like either fuzzy or lazy. Fuzzy makes me think of just random entries (for fuzz testing and such). Lazy is to postpone things to do things less often. What about "speculative"? Like branch prediction and such. Where a timer is expected to be used at a certain time but it may not be? -- Steve
{ "author": "Steven Rostedt <rostedt@goodmis.org>", "date": "Fri, 23 Jan 2026 15:04:50 -0500", "thread_id": "20260202163355.GI1395266@noisy.programming.kicks-ass.net.mbox.gz" }
lkml
[PATCH v2 2/6] hrtimer: Optimize __hrtimer_start_range_ns()
On Wed, 21 Jan 2026 17:20:15 +0100 Peter Zijlstra <peterz@infradead.org> wrote:

I'm curious as to why you decided to use scoped_guard() here and not just guard(), avoiding the extra indentation? The function is small enough where everything is expected to be protected by the spinlock.

-- Steve
{ "author": "Steven Rostedt <rostedt@goodmis.org>", "date": "Fri, 23 Jan 2026 15:08:43 -0500", "thread_id": "20260202163355.GI1395266@noisy.programming.kicks-ass.net.mbox.gz" }
lkml
[PATCH v2 2/6] hrtimer: Optimize __hrtimer_start_range_ns()
On Fri, Jan 23, 2026 at 03:08:43PM -0500, Steven Rostedt wrote: Yeah, I'm not entirely sure... its been over 6 months since I wrote this code :-/
{ "author": "Peter Zijlstra <peterz@infradead.org>", "date": "Fri, 23 Jan 2026 22:04:33 +0100", "thread_id": "20260202163355.GI1395266@noisy.programming.kicks-ass.net.mbox.gz" }
lkml
[PATCH v2 2/6] hrtimer: Optimize __hrtimer_start_range_ns()
On Wed, Jan 21 2026 at 17:20, Peter Zijlstra wrote: Reviewed-by: Thomas Gleixner <tglx@kernel.org>
{ "author": "Thomas Gleixner <tglx@linutronix.de>", "date": "Mon, 02 Feb 2026 13:28:12 +0100", "thread_id": "20260202163355.GI1395266@noisy.programming.kicks-ass.net.mbox.gz" }
lkml
[PATCH v2 2/6] hrtimer: Optimize __hrtimer_start_range_ns()
On Wed, Jan 21 2026 at 17:20, Peter Zijlstra wrote:

Shouldn't this be HRTIMER_MODE_REL_PINNED_HARD? I know it's set when starting the timer, but I had to double check it. I'd rather say: Fuzzy requires pinned as the lazy reprogramming only works for CPU local timers.

Other than that:

Reviewed-by: Thomas Gleixner <tglx@kernel.org>

Thanks,
tglx
{ "author": "Thomas Gleixner <tglx@linutronix.de>", "date": "Mon, 02 Feb 2026 15:02:26 +0100", "thread_id": "20260202163355.GI1395266@noisy.programming.kicks-ass.net.mbox.gz" }
lkml
[PATCH v2 2/6] hrtimer: Optimize __hrtimer_start_range_ns()
On Wed, Jan 21 2026 at 17:20, Peter Zijlstra wrote: Indeed. in_hrtirq Reviewed-by: Thomas Gleixner <tglx@kernel.org>
{ "author": "Thomas Gleixner <tglx@linutronix.de>", "date": "Mon, 02 Feb 2026 15:05:14 +0100", "thread_id": "20260202163355.GI1395266@noisy.programming.kicks-ass.net.mbox.gz" }
lkml
[PATCH v2 2/6] hrtimer: Optimize __hrtimer_start_range_ns()
On Wed, Jan 21 2026 at 17:20, Peter Zijlstra wrote:

Two things I'm not convinced that they are handled correctly:

 1) Interrupts

    After reenabling interrupts and before reaching schedule() an interrupt comes in and runs soft interrupt processing for a while on the way back, which delays the update until that processing completes.

 2) Time slice extension

    When the time slice is granted this will not rearm the clockevent device unless the slice hrtimer becomes the first expiring timer on that CPU, but even then that misses the full reevaluation of the next timer event.

in hrtimer.h where you already have the #ifdef TIF_HRTIMER_REARM section:

	static inline bool hrtimer_set_rearm_delayed()
	{
		set_thread_flag(TIF_HRTIMER_REARM);
		return true;
	}

and an empty stub returning false for the other case then this becomes:

	if (!hrtimer_set_rearm_delayed())
		hrtimer_rearm(...);

and the ugly ifdef in the code goes away.

Grr. I had to read this five times to figure out that we now have

	hrtimer_rearm()
	_hrtimer_rearm()
	__hrtimer_rearm()

You clearly ran out of characters to make that obvious:

	hrtimer_rearm_delayed()
	hrtimer_rearm()
	hrtimer_do_rearm()

or something like that.

Thanks,
tglx
{ "author": "Thomas Gleixner <tglx@linutronix.de>", "date": "Mon, 02 Feb 2026 15:37:13 +0100", "thread_id": "20260202163355.GI1395266@noisy.programming.kicks-ass.net.mbox.gz" }
lkml
[PATCH v2 2/6] hrtimer: Optimize __hrtimer_start_range_ns()
On Mon, Feb 02, 2026 at 03:37:13PM +0100, Thomas Gleixner wrote:

So the basic thing looks like:

	<USER-MODE>
	irqentry_enter()
	run_irq_on_irqstack_cond()
	if (user_mode() || hardirq_stack_inuse)
	irq_enter_rcu();
	func_c();
	irq_exit_rcu()
	__irq_exit_rcu()
	invoke_softirq()
	irqentry_exit()
	irqentry_exit_to_user_mode()
	irqentry_exit_to_user_mode_prepare()
	__exit_to_user_mode_prepare()
	exit_to_user_mode_loop()
	...here...

So a nested IRQ at this point will have !user_mode(), but I think it can still end up in softirqs due to that hardirq_stack_inuse. Should we perhaps make sure only user_mode() ends up in softirqs?

Oh crud yes, that should be something like:

	if (!rseq_grant_slice_extension(ti_work & TIF_SLICE_EXT_DENY))
		schedule();
	else
		hrtimer_rearm();
{ "author": "Peter Zijlstra <peterz@infradead.org>", "date": "Mon, 2 Feb 2026 17:33:55 +0100", "thread_id": "20260202163355.GI1395266@noisy.programming.kicks-ass.net.mbox.gz" }
lkml
[PATCH 1/2] media: rkvdec: reduce excessive stack usage in assemble_hw_pps()
From: Arnd Bergmann <arnd@arndb.de>

The rkvdec_pps had a large set of bitfields, all of which are misaligned. This causes clang-21 and likely other versions to produce absolutely awful object code and a warning about very large stack usage, on targets without unaligned access:

drivers/media/platform/rockchip/rkvdec/rkvdec-vp9.c:966:12: error: stack frame size (1472) exceeds limit (1280) in 'rkvdec_vp9_start' [-Werror,-Wframe-larger-than]

Part of the problem here is how all the bitfield accesses are inlined into a function that already has large structures on the stack. Mark set_field_order_cnt() as noinline_for_stack, and split out the following accesses in assemble_hw_pps() into another noinline function, both of which now use around 800 bytes of stack in the same configuration.

There is clearly still something wrong with clang here, but splitting it into multiple functions reduces the risk of stack overflow.

Fixes: fde24907570d ("media: rkvdec: Add H264 support for the VDPU383 variant")
Link: https://godbolt.org/z/acP1eKeq9
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 .../rockchip/rkvdec/rkvdec-vdpu383-h264.c | 50 ++++++++++---------
 1 file changed, 27 insertions(+), 23 deletions(-)

diff --git a/drivers/media/platform/rockchip/rkvdec/rkvdec-vdpu383-h264.c b/drivers/media/platform/rockchip/rkvdec/rkvdec-vdpu383-h264.c
index 6ab3167addc8..ef69f2a36478 100644
--- a/drivers/media/platform/rockchip/rkvdec/rkvdec-vdpu383-h264.c
+++ b/drivers/media/platform/rockchip/rkvdec/rkvdec-vdpu383-h264.c
@@ -130,7 +130,7 @@ struct rkvdec_h264_ctx {
 	struct vdpu383_regs_h26x regs;
 };
 
-static void set_field_order_cnt(struct rkvdec_pps *pps, const struct v4l2_h264_dpb_entry *dpb)
+static noinline_for_stack void set_field_order_cnt(struct rkvdec_pps *pps, const struct v4l2_h264_dpb_entry *dpb)
 {
 	pps->top_field_order_cnt0 = dpb[0].top_field_order_cnt;
 	pps->bot_field_order_cnt0 = dpb[0].bottom_field_order_cnt;
@@ -166,6 +166,31 @@ static void set_field_order_cnt(struct rkvdec_pps *pps, const struct v4l2_h264_d
 	pps->bot_field_order_cnt15 = dpb[15].bottom_field_order_cnt;
 }
 
+static noinline_for_stack void set_dec_params(struct rkvdec_pps *pps, const struct v4l2_ctrl_h264_decode_params *dec_params)
+{
+	const struct v4l2_h264_dpb_entry *dpb = dec_params->dpb;
+
+	for (int i = 0; i < ARRAY_SIZE(dec_params->dpb); i++) {
+		if (dpb[i].flags & V4L2_H264_DPB_ENTRY_FLAG_LONG_TERM)
+			pps->is_longterm |= (1 << i);
+		pps->ref_field_flags |=
+			(!!(dpb[i].flags & V4L2_H264_DPB_ENTRY_FLAG_FIELD)) << i;
+		pps->ref_colmv_use_flag |=
+			(!!(dpb[i].flags & V4L2_H264_DPB_ENTRY_FLAG_ACTIVE)) << i;
+		pps->ref_topfield_used |=
+			(!!(dpb[i].fields & V4L2_H264_TOP_FIELD_REF)) << i;
+		pps->ref_botfield_used |=
+			(!!(dpb[i].fields & V4L2_H264_BOTTOM_FIELD_REF)) << i;
+	}
+	pps->pic_field_flag =
+		!!(dec_params->flags & V4L2_H264_DECODE_PARAM_FLAG_FIELD_PIC);
+	pps->pic_associated_flag =
+		!!(dec_params->flags & V4L2_H264_DECODE_PARAM_FLAG_BOTTOM_FIELD);
+
+	pps->cur_top_field = dec_params->top_field_order_cnt;
+	pps->cur_bot_field = dec_params->bottom_field_order_cnt;
+}
+
 static void assemble_hw_pps(struct rkvdec_ctx *ctx,
 			    struct rkvdec_h264_run *run)
 {
@@ -177,7 +202,6 @@ static void assemble_hw_pps(struct rkvdec_ctx *ctx,
 	struct rkvdec_h264_priv_tbl *priv_tbl = h264_ctx->priv_tbl.cpu;
 	struct rkvdec_sps_pps *hw_ps;
 	u32 pic_width, pic_height;
-	u32 i;
 
 	/*
 	 * HW read the SPS/PPS information from PPS packet index by PPS id.
@@ -261,28 +285,8 @@ static void assemble_hw_pps(struct rkvdec_ctx *ctx,
 		!!(pps->flags & V4L2_H264_PPS_FLAG_SCALING_MATRIX_PRESENT);
 
 	set_field_order_cnt(&hw_ps->pps, dpb);
+	set_dec_params(&hw_ps->pps, dec_params);
 
-	for (i = 0; i < ARRAY_SIZE(dec_params->dpb); i++) {
-		if (dpb[i].flags & V4L2_H264_DPB_ENTRY_FLAG_LONG_TERM)
-			hw_ps->pps.is_longterm |= (1 << i);
-
-		hw_ps->pps.ref_field_flags |=
-			(!!(dpb[i].flags & V4L2_H264_DPB_ENTRY_FLAG_FIELD)) << i;
-		hw_ps->pps.ref_colmv_use_flag |=
-			(!!(dpb[i].flags & V4L2_H264_DPB_ENTRY_FLAG_ACTIVE)) << i;
-		hw_ps->pps.ref_topfield_used |=
-			(!!(dpb[i].fields & V4L2_H264_TOP_FIELD_REF)) << i;
-		hw_ps->pps.ref_botfield_used |=
-			(!!(dpb[i].fields & V4L2_H264_BOTTOM_FIELD_REF)) << i;
-	}
-
-	hw_ps->pps.pic_field_flag =
-		!!(dec_params->flags & V4L2_H264_DECODE_PARAM_FLAG_FIELD_PIC);
-	hw_ps->pps.pic_associated_flag =
-		!!(dec_params->flags & V4L2_H264_DECODE_PARAM_FLAG_BOTTOM_FIELD);
-
-	hw_ps->pps.cur_top_field = dec_params->top_field_order_cnt;
-	hw_ps->pps.cur_bot_field = dec_params->bottom_field_order_cnt;
 }
 
 static void rkvdec_write_regs(struct rkvdec_ctx *ctx)
-- 
2.39.5
From: Arnd Bergmann <arnd@arndb.de>

The deeply nested loop in rkvdec_init_v4l2_vp9_count_tbl() needs a lot of registers, so when the clang register allocator runs out, it ends up spilling countless temporaries to the stack:

drivers/media/platform/rockchip/rkvdec/rkvdec-vp9.c:966:12: error: stack frame size (1472) exceeds limit (1280) in 'rkvdec_vp9_start' [-Werror,-Wframe-larger-than]

Marking this function as noinline_for_stack keeps it out of rkvdec_vp9_start(), giving the compiler more room for optimization. The resulting code is good enough that both the total stack usage and the loop improve enough to stay under the warning limit, though it's still slow, and would need a larger rework if this function ends up being called in a fast path.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 drivers/media/platform/rockchip/rkvdec/rkvdec-vp9.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/media/platform/rockchip/rkvdec/rkvdec-vp9.c b/drivers/media/platform/rockchip/rkvdec/rkvdec-vp9.c
index ba51a7c2fe55..1c875d5a2bac 100644
--- a/drivers/media/platform/rockchip/rkvdec/rkvdec-vp9.c
+++ b/drivers/media/platform/rockchip/rkvdec/rkvdec-vp9.c
@@ -893,7 +893,8 @@ static void rkvdec_vp9_done(struct rkvdec_ctx *ctx,
 	update_ctx_last_info(vp9_ctx);
 }
 
-static void rkvdec_init_v4l2_vp9_count_tbl(struct rkvdec_ctx *ctx)
+static noinline_for_stack void
+rkvdec_init_v4l2_vp9_count_tbl(struct rkvdec_ctx *ctx)
 {
 	struct rkvdec_vp9_ctx *vp9_ctx = ctx->priv;
 	struct rkvdec_vp9_intra_frame_symbol_counts *intra_cnts = vp9_ctx->count_tbl.cpu;
-- 
2.39.5
{ "author": "Arnd Bergmann <arnd@kernel.org>", "date": "Mon, 2 Feb 2026 10:47:51 +0100", "thread_id": "ca81b8b03651cdb4f997c89fffd489407be59b8b.camel@collabora.com.mbox.gz" }
lkml
[PATCH 1/2] media: rkvdec: reduce excessive stack usage in assemble_hw_pps()
Hi Arnd,

On Monday, 02 February 2026 at 10:47 +0100, Arnd Bergmann wrote:

We had already addressed and validated that on clang-21, which indicates to me that we are likely missing an architecture (or a config) in our CI. Can you document which architecture, configuration and flags were affected so we can add it on our side?

Our media pipeline before sending to Linus and the clang build traces are at the following links, in case it matters.

https://gitlab.freedesktop.org/linux-media/media-committers/-/pipelines/1588731
https://gitlab.freedesktop.org/linux-media/media-committers/-/jobs/91604655

Another observation is that you had to enable ASAN to make it misbehave on for-loop unrolling (with complex bitfield writes). All I've obtained by visiting the Link: is that it's the armv7-a architecture.

We've tried really hard to avoid this noinline_for_stack just because compilers are buggy. I'll have a look again in case I find some ideas, but meanwhile, with the failing architecture in the commit message:

Reviewed-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
{ "author": "Nicolas Dufresne <nicolas.dufresne@collabora.com>", "date": "Mon, 02 Feb 2026 08:42:41 -0500", "thread_id": "ca81b8b03651cdb4f997c89fffd489407be59b8b.camel@collabora.com.mbox.gz" }
lkml
[PATCH 1/2] media: rkvdec: reduce excessive stack usage in assemble_hw_pps()
From: Arnd Bergmann <arnd@arndb.de>

The rkvdec_pps structure has a large set of bitfields, all of which are
misaligned. This causes clang-21 (and likely other versions) to produce
absolutely awful object code and a warning about very large stack usage
on targets without unaligned access:

drivers/media/platform/rockchip/rkvdec/rkvdec-vp9.c:966:12: error: stack frame size (1472) exceeds limit (1280) in 'rkvdec_vp9_start' [-Werror,-Wframe-larger-than]

Part of the problem here is how all the bitfield accesses are inlined
into a function that already has large structures on the stack. Mark
set_field_order_cnt() as noinline_for_stack, and split the following
accesses in assemble_hw_pps() out into another noinline function; both
functions now use around 800 bytes of stack in the same configuration.

There is clearly still something wrong with clang here, but splitting
the code into multiple functions reduces the risk of stack overflow.

Fixes: fde24907570d ("media: rkvdec: Add H264 support for the VDPU383 variant")
Link: https://godbolt.org/z/acP1eKeq9
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 .../rockchip/rkvdec/rkvdec-vdpu383-h264.c          | 50 ++++++++++---------
 1 file changed, 27 insertions(+), 23 deletions(-)

diff --git a/drivers/media/platform/rockchip/rkvdec/rkvdec-vdpu383-h264.c b/drivers/media/platform/rockchip/rkvdec/rkvdec-vdpu383-h264.c
index 6ab3167addc8..ef69f2a36478 100644
--- a/drivers/media/platform/rockchip/rkvdec/rkvdec-vdpu383-h264.c
+++ b/drivers/media/platform/rockchip/rkvdec/rkvdec-vdpu383-h264.c
@@ -130,7 +130,7 @@ struct rkvdec_h264_ctx {
 	struct vdpu383_regs_h26x regs;
 };
 
-static void set_field_order_cnt(struct rkvdec_pps *pps, const struct v4l2_h264_dpb_entry *dpb)
+static noinline_for_stack void set_field_order_cnt(struct rkvdec_pps *pps, const struct v4l2_h264_dpb_entry *dpb)
 {
 	pps->top_field_order_cnt0 = dpb[0].top_field_order_cnt;
 	pps->bot_field_order_cnt0 = dpb[0].bottom_field_order_cnt;
@@ -166,6 +166,31 @@ static void set_field_order_cnt(struct rkvdec_pps *pps, const struct v4l2_h264_d
 	pps->bot_field_order_cnt15 = dpb[15].bottom_field_order_cnt;
 }
 
+static noinline_for_stack void set_dec_params(struct rkvdec_pps *pps, const struct v4l2_ctrl_h264_decode_params *dec_params)
+{
+	const struct v4l2_h264_dpb_entry *dpb = dec_params->dpb;
+
+	for (int i = 0; i < ARRAY_SIZE(dec_params->dpb); i++) {
+		if (dpb[i].flags & V4L2_H264_DPB_ENTRY_FLAG_LONG_TERM)
+			pps->is_longterm |= (1 << i);
+		pps->ref_field_flags |=
+			(!!(dpb[i].flags & V4L2_H264_DPB_ENTRY_FLAG_FIELD)) << i;
+		pps->ref_colmv_use_flag |=
+			(!!(dpb[i].flags & V4L2_H264_DPB_ENTRY_FLAG_ACTIVE)) << i;
+		pps->ref_topfield_used |=
+			(!!(dpb[i].fields & V4L2_H264_TOP_FIELD_REF)) << i;
+		pps->ref_botfield_used |=
+			(!!(dpb[i].fields & V4L2_H264_BOTTOM_FIELD_REF)) << i;
+	}
+	pps->pic_field_flag =
+		!!(dec_params->flags & V4L2_H264_DECODE_PARAM_FLAG_FIELD_PIC);
+	pps->pic_associated_flag =
+		!!(dec_params->flags & V4L2_H264_DECODE_PARAM_FLAG_BOTTOM_FIELD);
+
+	pps->cur_top_field = dec_params->top_field_order_cnt;
+	pps->cur_bot_field = dec_params->bottom_field_order_cnt;
+}
+
 static void assemble_hw_pps(struct rkvdec_ctx *ctx,
 			    struct rkvdec_h264_run *run)
 {
@@ -177,7 +202,6 @@ static void assemble_hw_pps(struct rkvdec_ctx *ctx,
 	struct rkvdec_h264_priv_tbl *priv_tbl = h264_ctx->priv_tbl.cpu;
 	struct rkvdec_sps_pps *hw_ps;
 	u32 pic_width, pic_height;
-	u32 i;
 
 	/*
 	 * HW read the SPS/PPS information from PPS packet index by PPS id.
@@ -261,28 +285,8 @@ static void assemble_hw_pps(struct rkvdec_ctx *ctx,
 		!!(pps->flags & V4L2_H264_PPS_FLAG_SCALING_MATRIX_PRESENT);
 
 	set_field_order_cnt(&hw_ps->pps, dpb);
+	set_dec_params(&hw_ps->pps, dec_params);
-	for (i = 0; i < ARRAY_SIZE(dec_params->dpb); i++) {
-		if (dpb[i].flags & V4L2_H264_DPB_ENTRY_FLAG_LONG_TERM)
-			hw_ps->pps.is_longterm |= (1 << i);
-
-		hw_ps->pps.ref_field_flags |=
-			(!!(dpb[i].flags & V4L2_H264_DPB_ENTRY_FLAG_FIELD)) << i;
-		hw_ps->pps.ref_colmv_use_flag |=
-			(!!(dpb[i].flags & V4L2_H264_DPB_ENTRY_FLAG_ACTIVE)) << i;
-		hw_ps->pps.ref_topfield_used |=
-			(!!(dpb[i].fields & V4L2_H264_TOP_FIELD_REF)) << i;
-		hw_ps->pps.ref_botfield_used |=
-			(!!(dpb[i].fields & V4L2_H264_BOTTOM_FIELD_REF)) << i;
-	}
-
-	hw_ps->pps.pic_field_flag =
-		!!(dec_params->flags & V4L2_H264_DECODE_PARAM_FLAG_FIELD_PIC);
-	hw_ps->pps.pic_associated_flag =
-		!!(dec_params->flags & V4L2_H264_DECODE_PARAM_FLAG_BOTTOM_FIELD);
-
-	hw_ps->pps.cur_top_field = dec_params->top_field_order_cnt;
-	hw_ps->pps.cur_bot_field = dec_params->bottom_field_order_cnt;
 }
 
 static void rkvdec_write_regs(struct rkvdec_ctx *ctx)
-- 
2.39.5
On Mon, Feb 2, 2026, at 14:42, Nicolas Dufresne wrote:

The configuration that hit this for me was an ARMv7-M NOMMU build. I'm doing 'randconfig' builds here, so I inevitably hit some corner cases that all deterministic CI systems miss. I don't think that you should add ARMv7-M here, since that would take up useful build resources from something more important. There are no actual users of drivers/media/ on ARMv7-M, and next time it is going to be something else.

Right, this randconfig build likely got closer to the warning limit because of the inherent overhead in KASAN, but the problem with the unaligned bitfields was something that I could later reproduce without KASAN, on ARMv5 and MIPS32r2. This is something we should fix in clang.

Thanks!

     Arnd
{ "author": "\"Arnd Bergmann\" <arnd@arndb.de>", "date": "Mon, 02 Feb 2026 15:09:14 +0100", "thread_id": "ca81b8b03651cdb4f997c89fffd489407be59b8b.camel@collabora.com.mbox.gz" }
lkml
[PATCH 1/2] media: rkvdec: reduce excessive stack usage in assemble_hw_pps()
Hi Arnd,

On Monday, 2 February 2026 at 15:09 +0100, Arnd Bergmann wrote:

All fair comments. I plan to take this into fixes (no changes needed), hopefully for rc2.

Performance-wise, this code replaces read/mask/write cycles on hardware registers, which were significantly slower for this number of registers (~200 32-bit integers) and this type of IP (it's not SRAM). This runs once per frame. In practice, if we hand-code the read/mask/write, the performance should eventually converge with using bitfields and letting the compiler do the masking; I was being optimistic about how the compiler would behave. If the performance of that is truly a problem, we can always prepare the RAM register image ahead of the operation queue (instead of doing it in the executor).

One thing to keep in mind: you can't optimize the data structure layout, since it needs to match the register layout. But while fixing some of the earlier stack reports, I did end up moving a few things out of loops (which is not clearly feasible in this patch). I did not check all the code (only the failing parts). One of the bad patterns that cost stack (and probably overhead) was the use of a switch() statement to pick one of the unaligned register locations, with that switch being part of an unrolled loop. If you ever spot these, and have time, please just manually unroll the switch out of the loop (it's actually less code).

thanks to you,
Nicolas
{ "author": "Nicolas Dufresne <nicolas.dufresne@collabora.com>", "date": "Mon, 02 Feb 2026 10:12:53 -0500", "thread_id": "ca81b8b03651cdb4f997c89fffd489407be59b8b.camel@collabora.com.mbox.gz" }
lkml
[PATCH 1/2] media: rkvdec: reduce excessive stack usage in assemble_hw_pps()
On Mon, Feb 2, 2026, at 16:12, Nicolas Dufresne wrote:

I think there are multiple things going on here, some of which are more relevant than others:

- The problem I'm addressing with my patch is purely a clang issue for CPU architectures with high register pressure when assembling the structure in memory. As a first-order approximation, you can see the output being around 12,000 lines with clang, but only 600 with gcc, in the godbolt.org output. The gcc version isn't that great either, but it is orders of magnitude fewer instructions.

- MMIO reads are clearly a performance killer, so assembling the structure in memory and using memcpy_toio() to access the registers, as you appear to be doing, is the right idea.

- Using bitfields for hardware structures is non-portable. In particular, the order of the fields within a word depends on byte order (CONFIG_CPU_BIG_ENDIAN), and the alignment depends on the architecture; e.g. 'struct { u32 a:16; u32 b:32; u32 c:16; };' has the second member cross a u32 boundary, which leads to padding between a and b, as well as after c, on some architectures but not others. I would always recommend splitting up bitfields on word boundaries and adding explicit padding where necessary.

- Since most of the fields are exactly 6 bits offset from a word boundary, you can try assembling all the *_field_order_cnt* fields in an array first that has all the bits in the correct order, and then shift the entire array six bits.

      Arnd
{ "author": "\"Arnd Bergmann\" <arnd@arndb.de>", "date": "Mon, 02 Feb 2026 16:59:05 +0100", "thread_id": "ca81b8b03651cdb4f997c89fffd489407be59b8b.camel@collabora.com.mbox.gz" }
lkml
[PATCH 1/2] media: rkvdec: reduce excessive stack usage in assemble_hw_pps()
Hi,

On Monday, 2 February 2026 at 16:59 +0100, Arnd Bergmann wrote:

Ok, got it. Clearly the register bitfields (which are sets of 32-bit bitfields) are fine (apart from endianness, but that is deliberately ignored). These are the ones I had in mind, and they are optimized with memcpy_toio().

For the SPS/PPS bitstream, which is shared memory with the IP, I tend to agree this might not have been the ideal choice, though the author did verify everything with pahole for the relevant architectures (in practice only two ARM64 SoCs use this bitstream format). I'm happy to revisit this eventually. It would also not hurt to share a common bitstream writer that works with both endiannesses in V4L2 (or use one from the core if one already exists).

Nicolas
{ "author": "Nicolas Dufresne <nicolas.dufresne@collabora.com>", "date": "Mon, 02 Feb 2026 11:31:40 -0500", "thread_id": "ca81b8b03651cdb4f997c89fffd489407be59b8b.camel@collabora.com.mbox.gz" }
lkml
[PATCH 1/2] media: rkvdec: reduce excessive stack usage in assemble_hw_pps()
On Monday, 2 February 2026 at 10:47 +0100, Arnd Bergmann wrote:

Reviewed-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
{ "author": "Nicolas Dufresne <nicolas.dufresne@collabora.com>", "date": "Mon, 02 Feb 2026 11:32:16 -0500", "thread_id": "ca81b8b03651cdb4f997c89fffd489407be59b8b.camel@collabora.com.mbox.gz" }
lkml
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
Hi,

Here are patches related to enabling IBI while runtime suspended for Intel controllers.

Intel LPSS I3C controllers can wake from runtime suspend to receive in-band interrupts (IBIs). It is non-trivial to implement because the parent PCI device has 2 I3C bus instances (MIPI I3C HCI Multi-Bus Instance capability) represented by platform devices with a separate driver, but the IBI-wakeup is shared by both, which means runtime PM has to be managed by the parent PCI driver. To make that work, the PCI driver handles runtime PM, but leverages the mipi-i3c-hci platform driver's functionality for saving and restoring controller state.

Adrian Hunter (7):
  i3c: mipi-i3c-hci-pci: Set d3hot_delay to 0 for Intel controllers
  i3c: master: Allow controller drivers to select runtime PM device
  i3c: master: Mark last_busy on IBI when runtime PM is allowed
  i3c: mipi-i3c-hci: Add quirk to allow IBI while runtime suspended
  i3c: mipi-i3c-hci: Allow parent to manage runtime PM
  i3c: mipi-i3c-hci-pci: Add optional ability to manage child runtime PM
  i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers

 drivers/i3c/master.c                               |  14 +-
 drivers/i3c/master/mipi-i3c-hci/core.c             |  30 ++--
 drivers/i3c/master/mipi-i3c-hci/hci.h              |   7 +
 drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c | 158 ++++++++++++++++++++-
 include/linux/i3c/master.h                         |   2 +
 5 files changed, 194 insertions(+), 17 deletions(-)

Regards
Adrian
Set d3hot_delay to 0 for Intel controllers because a delay is not needed.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c b/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c
index 0f05a15c14c7..bc83caad4197 100644
--- a/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c
+++ b/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c
@@ -164,6 +164,7 @@ static int intel_i3c_init(struct mipi_i3c_hci_pci *hci)
 	dma_set_mask_and_coherent(&hci->pci->dev, DMA_BIT_MASK(64));
 
 	hci->pci->d3cold_delay = 0;
+	hci->pci->d3hot_delay = 0;
 
 	hci->private = host;
 	host->priv = priv;
-- 
2.51.0
{ "author": "Adrian Hunter <adrian.hunter@intel.com>", "date": "Thu, 29 Jan 2026 20:18:35 +0200", "thread_id": "aYDP847mgleQBF5Y@lizhi-Precision-Tower-5810.mbox.gz" }
lkml
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
Some I3C controller drivers need runtime PM to operate on a device other than the parent device. To support that, add an rpm_dev pointer to struct i3c_master_controller so drivers can specify which device should be used for runtime power management. If a driver does not set rpm_dev explicitly, default to using the parent device to maintain existing behaviour. Update the runtime PM helpers to use rpm_dev instead of dev.parent. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> --- drivers/i3c/master.c | 9 ++++++--- include/linux/i3c/master.h | 2 ++ 2 files changed, 8 insertions(+), 3 deletions(-) diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c index 49fb6e30a68e..bcc493dc9d04 100644 --- a/drivers/i3c/master.c +++ b/drivers/i3c/master.c @@ -108,10 +108,10 @@ static struct i3c_master_controller *dev_to_i3cmaster(struct device *dev) static int __must_check i3c_master_rpm_get(struct i3c_master_controller *master) { - int ret = master->rpm_allowed ? pm_runtime_resume_and_get(master->dev.parent) : 0; + int ret = master->rpm_allowed ? 
pm_runtime_resume_and_get(master->rpm_dev) : 0; if (ret < 0) { - dev_err(master->dev.parent, "runtime resume failed, error %d\n", ret); + dev_err(master->rpm_dev, "runtime resume failed, error %d\n", ret); return ret; } return 0; @@ -120,7 +120,7 @@ static int __must_check i3c_master_rpm_get(struct i3c_master_controller *master) static void i3c_master_rpm_put(struct i3c_master_controller *master) { if (master->rpm_allowed) - pm_runtime_put_autosuspend(master->dev.parent); + pm_runtime_put_autosuspend(master->rpm_dev); } int i3c_bus_rpm_get(struct i3c_bus *bus) @@ -2975,6 +2975,9 @@ int i3c_master_register(struct i3c_master_controller *master, INIT_LIST_HEAD(&master->boardinfo.i2c); INIT_LIST_HEAD(&master->boardinfo.i3c); + if (!master->rpm_dev) + master->rpm_dev = parent; + ret = i3c_master_rpm_get(master); if (ret) return ret; diff --git a/include/linux/i3c/master.h b/include/linux/i3c/master.h index af2bb48363ba..4be67a902dd8 100644 --- a/include/linux/i3c/master.h +++ b/include/linux/i3c/master.h @@ -501,6 +501,7 @@ struct i3c_master_controller_ops { * registered to the I2C subsystem to be as transparent as possible to * existing I2C drivers * @ops: master operations. See &struct i3c_master_controller_ops + * @rpm_dev: Runtime PM device * @secondary: true if the master is a secondary master * @init_done: true when the bus initialization is done * @hotjoin: true if the master support hotjoin @@ -526,6 +527,7 @@ struct i3c_master_controller { struct i3c_dev_desc *this; struct i2c_adapter i2c; const struct i3c_master_controller_ops *ops; + struct device *rpm_dev; unsigned int secondary : 1; unsigned int init_done : 1; unsigned int hotjoin: 1; -- 2.51.0
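The rpm_dev fallback in i3c_master_register() can be illustrated with a toy model (plain C, not kernel code; struct and function names here are hypothetical stand-ins for the real i3c_master_controller code):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the hunk added to i3c_master_register(): if a controller
 * driver has not set rpm_dev explicitly, registration defaults it to the
 * parent device, preserving the old dev.parent behaviour. */
struct toy_device {
	const char *name;
};

struct toy_master {
	struct toy_device *parent;
	struct toy_device *rpm_dev;	/* device used for runtime PM */
};

static void toy_master_register(struct toy_master *m, struct toy_device *parent)
{
	m->parent = parent;
	if (!m->rpm_dev)
		m->rpm_dev = parent;	/* maintain existing behaviour */
}
```

A driver that needs a different runtime PM device (for example the PCI parent in the multi-bus case later in this series) simply sets rpm_dev before registering; anything else is left untouched.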
{ "author": "Adrian Hunter <adrian.hunter@intel.com>", "date": "Thu, 29 Jan 2026 20:18:36 +0200", "thread_id": "aYDP847mgleQBF5Y@lizhi-Precision-Tower-5810.mbox.gz" }
lkml
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
Hi Here are patches related to enabling IBI while runtime suspended for Intel controllers. Intel LPSS I3C controllers can wake from runtime suspend to receive in-band interrupts (IBIs). It is non-trivial to implement because the parent PCI device has 2 I3C bus instances (MIPI I3C HCI Multi-Bus Instance capability) represented by platform devices with a separate driver, but the IBI-wakeup is shared by both, which means runtime PM has to be managed by the parent PCI driver. To make that work, the PCI driver handles runtime PM, but leverages the mipi-i3c-hci platform driver's functionality for saving and restoring controller state. Adrian Hunter (7): i3c: mipi-i3c-hci-pci: Set d3hot_delay to 0 for Intel controllers i3c: master: Allow controller drivers to select runtime PM device i3c: master: Mark last_busy on IBI when runtime PM is allowed i3c: mipi-i3c-hci: Add quirk to allow IBI while runtime suspended i3c: mipi-i3c-hci: Allow parent to manage runtime PM i3c: mipi-i3c-hci-pci: Add optional ability to manage child runtime PM i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers drivers/i3c/master.c | 14 +- drivers/i3c/master/mipi-i3c-hci/core.c | 30 ++-- drivers/i3c/master/mipi-i3c-hci/hci.h | 7 + drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c | 158 ++++++++++++++++++++- include/linux/i3c/master.h | 2 + 5 files changed, 194 insertions(+), 17 deletions(-) Regards Adrian
When an IBI can be received after the controller is pm_runtime_put_autosuspend()'ed, the interrupt may occur just before the device is auto‑suspended. In such cases, the runtime PM core may not see any recent activity and may suspend the device earlier than intended. Mark the controller as last busy whenever an IBI is queued (when rpm_ibi_allowed is set) so that the auto-suspend delay correctly reflects recent bus activity and avoids premature suspension. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> --- drivers/i3c/master.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c index bcc493dc9d04..dcc07ebc50a2 100644 --- a/drivers/i3c/master.c +++ b/drivers/i3c/master.c @@ -2721,9 +2721,14 @@ static void i3c_master_unregister_i3c_devs(struct i3c_master_controller *master) */ void i3c_master_queue_ibi(struct i3c_dev_desc *dev, struct i3c_ibi_slot *slot) { + struct i3c_master_controller *master = i3c_dev_get_master(dev); + if (!dev->ibi || !slot) return; + if (master->rpm_ibi_allowed) + pm_runtime_mark_last_busy(master->rpm_dev); + atomic_inc(&dev->ibi->pending_ibis); queue_work(dev->ibi->wq, &slot->work); } -- 2.51.0
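The race this patch addresses can be sketched with a toy model of the autosuspend bookkeeping (plain C, not kernel code; the timestamp arithmetic is a simplification of what the runtime PM core does with last_busy and the autosuspend delay):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model: a device autosuspends once no activity has been recorded
 * for the autosuspend delay. Recording the IBI as activity (the
 * pm_runtime_mark_last_busy() call added by this patch) pushes the
 * suspend deadline out, so a just-queued IBI is not raced by suspend. */
struct toy_rpm {
	long last_busy;		/* time of last recorded activity */
	long delay;		/* autosuspend delay */
};

static void toy_mark_last_busy(struct toy_rpm *r, long now)
{
	r->last_busy = now;
}

static bool toy_should_suspend(const struct toy_rpm *r, long now)
{
	return now - r->last_busy >= r->delay;
}
```

Without the mark, an IBI arriving just before the deadline leaves last_busy stale and the device suspends while the IBI work is still pending; with it, the deadline moves forward by the full delay.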
{ "author": "Adrian Hunter <adrian.hunter@intel.com>", "date": "Thu, 29 Jan 2026 20:18:37 +0200", "thread_id": "aYDP847mgleQBF5Y@lizhi-Precision-Tower-5810.mbox.gz" }
Some I3C controllers can be automatically runtime-resumed in order to handle in-band interrupts (IBIs), meaning that runtime suspend does not need to be blocked when IBIs are enabled. For example, a PCI-attached controller in a low-power state may generate a Power Management Event (PME) when the SDA line is pulled low to signal the START condition of an IBI. The PCI subsystem will then runtime-resume the device, allowing the IBI to be received without requiring the controller to remain active. Introduce a new quirk, HCI_QUIRK_RPM_IBI_ALLOWED, so that drivers can opt in to this capability via driver data. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> --- drivers/i3c/master/mipi-i3c-hci/core.c | 3 +++ drivers/i3c/master/mipi-i3c-hci/hci.h | 1 + 2 files changed, 4 insertions(+) diff --git a/drivers/i3c/master/mipi-i3c-hci/core.c b/drivers/i3c/master/mipi-i3c-hci/core.c index e925584113d1..ec4dbe64c35e 100644 --- a/drivers/i3c/master/mipi-i3c-hci/core.c +++ b/drivers/i3c/master/mipi-i3c-hci/core.c @@ -959,6 +959,9 @@ static int i3c_hci_probe(struct platform_device *pdev) if (hci->quirks & HCI_QUIRK_RPM_ALLOWED) i3c_hci_rpm_enable(&pdev->dev); + if (hci->quirks & HCI_QUIRK_RPM_IBI_ALLOWED) + hci->master.rpm_ibi_allowed = true; + return i3c_master_register(&hci->master, &pdev->dev, &i3c_hci_ops, false); } diff --git a/drivers/i3c/master/mipi-i3c-hci/hci.h b/drivers/i3c/master/mipi-i3c-hci/hci.h index 6035f74212db..819328a85b84 100644 --- a/drivers/i3c/master/mipi-i3c-hci/hci.h +++ b/drivers/i3c/master/mipi-i3c-hci/hci.h @@ -146,6 +146,7 @@ struct i3c_hci_dev_data { #define HCI_QUIRK_OD_PP_TIMING BIT(3) /* Set OD and PP timings for AMD platforms */ #define HCI_QUIRK_RESP_BUF_THLD BIT(4) /* Set resp buf thld to 0 for AMD platforms */ #define HCI_QUIRK_RPM_ALLOWED BIT(5) /* Runtime PM allowed */ +#define HCI_QUIRK_RPM_IBI_ALLOWED BIT(6) /* IBI and Hot-Join allowed while runtime suspended */ /* global functions */ void mipi_i3c_hci_resume(struct i3c_hci *hci); -- 2.51.0
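The quirk-based opt-in is a simple bitmask check at probe time; a toy model (plain C, not kernel code; struct and function names are hypothetical, bit positions mirror the patch):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the probe-time quirk check added by this patch: only a
 * driver-data entry carrying HCI_QUIRK_RPM_IBI_ALLOWED enables the
 * IBI-while-suspended capability on the master. */
#define HCI_QUIRK_RPM_ALLOWED		(1u << 5)
#define HCI_QUIRK_RPM_IBI_ALLOWED	(1u << 6)

struct toy_hci {
	unsigned int quirks;
	bool rpm_ibi_allowed;
};

static void toy_probe(struct toy_hci *hci)
{
	if (hci->quirks & HCI_QUIRK_RPM_IBI_ALLOWED)
		hci->rpm_ibi_allowed = true;
}
```

Platforms that do not set the bit keep the existing behaviour unchanged.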
{ "author": "Adrian Hunter <adrian.hunter@intel.com>", "date": "Thu, 29 Jan 2026 20:18:38 +0200", "thread_id": "aYDP847mgleQBF5Y@lizhi-Precision-Tower-5810.mbox.gz" }
Some platforms implement the MIPI I3C HCI Multi-Bus Instance capability, where a single parent device hosts multiple I3C controller instances. In such designs, the parent - not the individual child instances - may need to coordinate runtime PM so that all controllers enter low-power states together, and all runtime suspend callbacks are invoked in a controlled and synchronized manner. For example, if the parent enables IBI-wakeup when transitioning into a low-power state, every bus instance must remain able to receive IBIs up until that point. This requires deferring the individual controllers’ runtime suspend callbacks (which disable bus activity) until the parent decides it is safe for all instances to suspend together. To support this usage model: * Export the controller's runtime PM suspend/resume callbacks so that the parent can invoke them directly. * Add a new quirk, HCI_QUIRK_RPM_PARENT_MANAGED, which designates the parent device as the controller’s runtime PM device (rpm_dev). When used without HCI_QUIRK_RPM_ALLOWED, this also prevents the child instance’s system-suspend callbacks from using pm_runtime_force_suspend()/pm_runtime_force_resume(), since runtime PM is managed entirely by the parent. * Move DEFAULT_AUTOSUSPEND_DELAY_MS into the header so it can be shared by parent-managed PM implementations. The new quirk allows platforms with multi-bus parent-managed PM infrastructure to correctly coordinate runtime PM across all I3C HCI instances. 
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> --- drivers/i3c/master/mipi-i3c-hci/core.c | 25 ++++++++++++++++--------- drivers/i3c/master/mipi-i3c-hci/hci.h | 6 ++++++ 2 files changed, 22 insertions(+), 9 deletions(-) diff --git a/drivers/i3c/master/mipi-i3c-hci/core.c b/drivers/i3c/master/mipi-i3c-hci/core.c index ec4dbe64c35e..cb974b0f9e17 100644 --- a/drivers/i3c/master/mipi-i3c-hci/core.c +++ b/drivers/i3c/master/mipi-i3c-hci/core.c @@ -733,7 +733,7 @@ static int i3c_hci_reset_and_init(struct i3c_hci *hci) return 0; } -static int i3c_hci_runtime_suspend(struct device *dev) +int i3c_hci_runtime_suspend(struct device *dev) { struct i3c_hci *hci = dev_get_drvdata(dev); int ret; @@ -746,8 +746,9 @@ static int i3c_hci_runtime_suspend(struct device *dev) return 0; } +EXPORT_SYMBOL_GPL(i3c_hci_runtime_suspend); -static int i3c_hci_runtime_resume(struct device *dev) +int i3c_hci_runtime_resume(struct device *dev) { struct i3c_hci *hci = dev_get_drvdata(dev); int ret; @@ -768,6 +769,7 @@ static int i3c_hci_runtime_resume(struct device *dev) return 0; } +EXPORT_SYMBOL_GPL(i3c_hci_runtime_resume); static int i3c_hci_suspend(struct device *dev) { @@ -784,12 +786,14 @@ static int i3c_hci_resume_common(struct device *dev, bool rstdaa) struct i3c_hci *hci = dev_get_drvdata(dev); int ret; - if (!(hci->quirks & HCI_QUIRK_RPM_ALLOWED)) - return 0; + if (!(hci->quirks & HCI_QUIRK_RPM_PARENT_MANAGED)) { + if (!(hci->quirks & HCI_QUIRK_RPM_ALLOWED)) + return 0; - ret = pm_runtime_force_resume(dev); - if (ret) - return ret; + ret = pm_runtime_force_resume(dev); + if (ret) + return ret; + } ret = i3c_master_do_daa_ext(&hci->master, rstdaa); if (ret) @@ -812,8 +816,6 @@ static int i3c_hci_restore(struct device *dev) return i3c_hci_resume_common(dev, true); } -#define DEFAULT_AUTOSUSPEND_DELAY_MS 1000 - static void i3c_hci_rpm_enable(struct device *dev) { struct i3c_hci *hci = dev_get_drvdata(dev); @@ -962,6 +964,11 @@ static int i3c_hci_probe(struct platform_device *pdev) if 
(hci->quirks & HCI_QUIRK_RPM_IBI_ALLOWED) hci->master.rpm_ibi_allowed = true; + if (hci->quirks & HCI_QUIRK_RPM_PARENT_MANAGED) { + hci->master.rpm_dev = pdev->dev.parent; + hci->master.rpm_allowed = true; + } + return i3c_master_register(&hci->master, &pdev->dev, &i3c_hci_ops, false); } diff --git a/drivers/i3c/master/mipi-i3c-hci/hci.h b/drivers/i3c/master/mipi-i3c-hci/hci.h index 819328a85b84..d0e7ad58ac15 100644 --- a/drivers/i3c/master/mipi-i3c-hci/hci.h +++ b/drivers/i3c/master/mipi-i3c-hci/hci.h @@ -147,6 +147,7 @@ struct i3c_hci_dev_data { #define HCI_QUIRK_RESP_BUF_THLD BIT(4) /* Set resp buf thld to 0 for AMD platforms */ #define HCI_QUIRK_RPM_ALLOWED BIT(5) /* Runtime PM allowed */ #define HCI_QUIRK_RPM_IBI_ALLOWED BIT(6) /* IBI and Hot-Join allowed while runtime suspended */ +#define HCI_QUIRK_RPM_PARENT_MANAGED BIT(7) /* Runtime PM managed by parent device */ /* global functions */ void mipi_i3c_hci_resume(struct i3c_hci *hci); @@ -156,4 +157,9 @@ void amd_set_od_pp_timing(struct i3c_hci *hci); void amd_set_resp_buf_thld(struct i3c_hci *hci); void i3c_hci_sync_irq_inactive(struct i3c_hci *hci); +#define DEFAULT_AUTOSUSPEND_DELAY_MS 1000 + +int i3c_hci_runtime_suspend(struct device *dev); +int i3c_hci_runtime_resume(struct device *dev); + #endif -- 2.51.0
{ "author": "Adrian Hunter <adrian.hunter@intel.com>", "date": "Thu, 29 Jan 2026 20:18:39 +0200", "thread_id": "aYDP847mgleQBF5Y@lizhi-Precision-Tower-5810.mbox.gz" }
Intel LPSS I3C controllers can wake from runtime suspend to receive in-band interrupts (IBIs), and they also implement the MIPI I3C HCI Multi-Bus Instance capability. When multiple I3C bus instances share the same PCI wakeup, the PCI parent must coordinate runtime PM so that all instances suspend together and their mipi-i3c-hci runtime suspend callbacks are invoked in a consistent manner. Enable IBI-based wakeup by setting HCI_QUIRK_RPM_IBI_ALLOWED for the intel-lpss-i3c platform device. Replace HCI_QUIRK_RPM_ALLOWED with HCI_QUIRK_RPM_PARENT_MANAGED so that the mipi-i3c-hci core driver expects runtime PM to be controlled by the PCI parent rather than by individual instances. For all Intel HCI PCI configurations, enable the corresponding control_instance_pm flag in the PCI driver. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> --- drivers/i3c/master/mipi-i3c-hci/core.c | 2 +- drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c | 3 +++ 2 files changed, 4 insertions(+), 1 deletion(-) diff --git a/drivers/i3c/master/mipi-i3c-hci/core.c b/drivers/i3c/master/mipi-i3c-hci/core.c index cb974b0f9e17..67ae7441ce97 100644 --- a/drivers/i3c/master/mipi-i3c-hci/core.c +++ b/drivers/i3c/master/mipi-i3c-hci/core.c @@ -992,7 +992,7 @@ static const struct acpi_device_id i3c_hci_acpi_match[] = { MODULE_DEVICE_TABLE(acpi, i3c_hci_acpi_match); static const struct platform_device_id i3c_hci_driver_ids[] = { - { .name = "intel-lpss-i3c", HCI_QUIRK_RPM_ALLOWED }, + { .name = "intel-lpss-i3c", HCI_QUIRK_RPM_IBI_ALLOWED | HCI_QUIRK_RPM_PARENT_MANAGED }, { /* sentinel */ } }; MODULE_DEVICE_TABLE(platform, i3c_hci_driver_ids); diff --git a/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c b/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c index f7f776300a0f..2f72cf48e36c 100644 --- a/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c +++ b/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c @@ -200,6 +200,7 @@ static const struct mipi_i3c_hci_pci_info intel_mi_1_info = { .id = {0, 
1}, .instance_offset = {0, 0x400}, .instance_count = 2, + .control_instance_pm = true, }; static const struct mipi_i3c_hci_pci_info intel_mi_2_info = { @@ -209,6 +210,7 @@ static const struct mipi_i3c_hci_pci_info intel_mi_2_info = { .id = {2, 3}, .instance_offset = {0, 0x400}, .instance_count = 2, + .control_instance_pm = true, }; static const struct mipi_i3c_hci_pci_info intel_si_2_info = { @@ -218,6 +220,7 @@ static const struct mipi_i3c_hci_pci_info intel_si_2_info = { .id = {2}, .instance_offset = {0}, .instance_count = 1, + .control_instance_pm = true, }; static int mipi_i3c_hci_pci_find_instance(struct mipi_i3c_hci_pci *hci, struct device *dev) -- 2.51.0
{ "author": "Adrian Hunter <adrian.hunter@intel.com>", "date": "Thu, 29 Jan 2026 20:18:41 +0200", "thread_id": "aYDP847mgleQBF5Y@lizhi-Precision-Tower-5810.mbox.gz" }
Some platforms implement the MIPI I3C HCI Multi-Bus Instance capability, where a single parent device hosts multiple I3C controller instances. In such designs, the parent - not the individual child instances - may need to coordinate runtime PM so that all controllers enter low-power states together, and all runtime suspend callbacks are invoked in a controlled and synchronized manner. For example, if the parent enables IBI-wakeup when transitioning into a low-power state, every bus instance must remain able to receive IBIs up until that point. This requires deferring the individual controllers’ runtime suspend callbacks (which disable bus activity) until the parent decides it is safe for all instances to suspend together. To support this usage model: * Add runtime PM and system PM callbacks in the PCI driver to invoke the mipi-i3c-hci driver’s runtime PM callbacks for each instance. * Introduce a driver-data flag, control_instance_pm, which opts into the new parent-managed PM behaviour. * Ensure the callbacks are only used when the corresponding instance is operational at suspend time. This is reliable because the operational state cannot change while the parent device is undergoing a PM transition, and PCI always performs a runtime resume before system suspend on current configurations, so that suspend and resume alternate irrespective of whether it is runtime or system PM. By that means, parent-managed runtime PM coordination for multi-instance MIPI I3C HCI PCI devices is provided without altering existing behaviour on platforms that do not require it. 
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> --- .../master/mipi-i3c-hci/mipi-i3c-hci-pci.c | 154 +++++++++++++++++- 1 file changed, 150 insertions(+), 4 deletions(-) diff --git a/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c b/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c index bc83caad4197..f7f776300a0f 100644 --- a/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c +++ b/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c @@ -9,6 +9,7 @@ #include <linux/acpi.h> #include <linux/bitfield.h> #include <linux/debugfs.h> +#include <linux/i3c/master.h> #include <linux/idr.h> #include <linux/iopoll.h> #include <linux/kernel.h> @@ -20,16 +21,24 @@ #include <linux/pm_qos.h> #include <linux/pm_runtime.h> +#include "hci.h" + /* * There can up to 15 instances, but implementations have at most 2 at this * time. */ #define INST_MAX 2 +struct mipi_i3c_hci_pci_instance { + struct device *dev; + bool operational; +}; + struct mipi_i3c_hci_pci { struct pci_dev *pci; void __iomem *base; const struct mipi_i3c_hci_pci_info *info; + struct mipi_i3c_hci_pci_instance instance[INST_MAX]; void *private; }; @@ -40,6 +49,7 @@ struct mipi_i3c_hci_pci_info { int id[INST_MAX]; u32 instance_offset[INST_MAX]; int instance_count; + bool control_instance_pm; }; #define INTEL_PRIV_OFFSET 0x2b0 @@ -210,14 +220,148 @@ static const struct mipi_i3c_hci_pci_info intel_si_2_info = { .instance_count = 1, }; -static void mipi_i3c_hci_pci_rpm_allow(struct device *dev) +static int mipi_i3c_hci_pci_find_instance(struct mipi_i3c_hci_pci *hci, struct device *dev) +{ + for (int i = 0; i < INST_MAX; i++) { + if (!hci->instance[i].dev) + hci->instance[i].dev = dev; + if (hci->instance[i].dev == dev) + return i; + } + + return -1; +} + +#define HC_CONTROL 0x04 +#define HC_CONTROL_BUS_ENABLE BIT(31) + +static bool __mipi_i3c_hci_pci_is_operational(struct device *dev) +{ + const struct mipi_i3c_hci_platform_data *pdata = dev->platform_data; + u32 hc_control = readl(pdata->base_regs + HC_CONTROL); + 
+ return hc_control & HC_CONTROL_BUS_ENABLE; +} + +static bool mipi_i3c_hci_pci_is_operational(struct device *dev, bool update) +{ + struct mipi_i3c_hci_pci *hci = dev_get_drvdata(dev->parent); + int pos = mipi_i3c_hci_pci_find_instance(hci, dev); + + if (pos < 0) { + dev_err(dev, "%s: I3C instance not found\n", __func__); + return false; + } + + if (update) + hci->instance[pos].operational = __mipi_i3c_hci_pci_is_operational(dev); + + return hci->instance[pos].operational; +} + +struct mipi_i3c_hci_pci_pm_data { + struct device *dev[INST_MAX]; + int dev_cnt; +}; + +static bool mipi_i3c_hci_pci_is_mfd(struct device *dev) +{ + return dev_is_platform(dev) && mfd_get_cell(to_platform_device(dev)); +} + +static int mipi_i3c_hci_pci_suspend_instance(struct device *dev, void *data) +{ + struct mipi_i3c_hci_pci_pm_data *pm_data = data; + int ret; + + if (!mipi_i3c_hci_pci_is_mfd(dev) || + !mipi_i3c_hci_pci_is_operational(dev, true)) + return 0; + + ret = i3c_hci_runtime_suspend(dev); + if (ret) + return ret; + + pm_data->dev[pm_data->dev_cnt++] = dev; + + return 0; +} + +static int mipi_i3c_hci_pci_resume_instance(struct device *dev, void *data) { + struct mipi_i3c_hci_pci_pm_data *pm_data = data; + int ret; + + if (!mipi_i3c_hci_pci_is_mfd(dev) || + !mipi_i3c_hci_pci_is_operational(dev, false)) + return 0; + + ret = i3c_hci_runtime_resume(dev); + if (ret) + return ret; + + pm_data->dev[pm_data->dev_cnt++] = dev; + + return 0; +} + +static int mipi_i3c_hci_pci_suspend(struct device *dev) +{ + struct mipi_i3c_hci_pci *hci = dev_get_drvdata(dev); + struct mipi_i3c_hci_pci_pm_data pm_data = {}; + int ret; + + if (!hci->info->control_instance_pm) + return 0; + + ret = device_for_each_child_reverse(dev, &pm_data, mipi_i3c_hci_pci_suspend_instance); + if (ret) { + if (ret == -EAGAIN || ret == -EBUSY) + pm_runtime_mark_last_busy(&hci->pci->dev); + for (int i = 0; i < pm_data.dev_cnt; i++) + i3c_hci_runtime_resume(pm_data.dev[i]); + } + + return ret; +} + +static int 
mipi_i3c_hci_pci_resume(struct device *dev) +{ + struct mipi_i3c_hci_pci *hci = dev_get_drvdata(dev); + struct mipi_i3c_hci_pci_pm_data pm_data = {}; + int ret; + + if (!hci->info->control_instance_pm) + return 0; + + ret = device_for_each_child(dev, &pm_data, mipi_i3c_hci_pci_resume_instance); + if (ret) + for (int i = 0; i < pm_data.dev_cnt; i++) + i3c_hci_runtime_suspend(pm_data.dev[i]); + + return ret; +} + +static void mipi_i3c_hci_pci_rpm_allow(struct mipi_i3c_hci_pci *hci) +{ + struct device *dev = &hci->pci->dev; + + if (hci->info->control_instance_pm) { + pm_runtime_set_autosuspend_delay(dev, DEFAULT_AUTOSUSPEND_DELAY_MS); + pm_runtime_use_autosuspend(dev); + } + pm_runtime_put(dev); pm_runtime_allow(dev); } -static void mipi_i3c_hci_pci_rpm_forbid(struct device *dev) +static void mipi_i3c_hci_pci_rpm_forbid(struct mipi_i3c_hci_pci *hci) { + struct device *dev = &hci->pci->dev; + + if (hci->info->control_instance_pm) + pm_runtime_dont_use_autosuspend(dev); + pm_runtime_forbid(dev); pm_runtime_get_sync(dev); } @@ -299,7 +443,7 @@ static int mipi_i3c_hci_pci_probe(struct pci_dev *pci, pci_set_drvdata(pci, hci); - mipi_i3c_hci_pci_rpm_allow(&pci->dev); + mipi_i3c_hci_pci_rpm_allow(hci); return 0; @@ -316,13 +460,15 @@ static void mipi_i3c_hci_pci_remove(struct pci_dev *pci) if (hci->info->exit) hci->info->exit(hci); - mipi_i3c_hci_pci_rpm_forbid(&pci->dev); + mipi_i3c_hci_pci_rpm_forbid(hci); mfd_remove_devices(&pci->dev); } /* PM ops must exist for PCI to put a device to a low power state */ static const struct dev_pm_ops mipi_i3c_hci_pci_pm_ops = { + RUNTIME_PM_OPS(mipi_i3c_hci_pci_suspend, mipi_i3c_hci_pci_resume, NULL) + SYSTEM_SLEEP_PM_OPS(mipi_i3c_hci_pci_suspend, mipi_i3c_hci_pci_resume) }; static const struct pci_device_id mipi_i3c_hci_pci_devices[] = { -- 2.51.0
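The suspend-with-rollback pattern used by mipi_i3c_hci_pci_suspend() above can be sketched as a toy model (plain C, not kernel code; the instance count and the failing index are hypothetical):

```c
#include <assert.h>

/* Toy model: suspend child instances in order; if one refuses, resume
 * the ones already suspended so the parent never ends up half-down.
 * This mirrors the device_for_each_child_reverse() loop plus the
 * rollback loop in the PCI driver's suspend path. */
#define N 3

static int suspended[N];

static int toy_suspend_instance(int i, int fail_at)
{
	if (i == fail_at)
		return -1;	/* simulate a child refusing to suspend */
	suspended[i] = 1;
	return 0;
}

static void toy_resume_instance(int i)
{
	suspended[i] = 0;
}

/* Returns 0 on success; on error, rolls back already-suspended children. */
static int toy_suspend_all(int fail_at)
{
	int done = 0, ret = 0;

	for (int i = 0; i < N; i++) {
		ret = toy_suspend_instance(i, fail_at);
		if (ret)
			break;
		done++;
	}
	if (ret) {
		for (int i = 0; i < done; i++)
			toy_resume_instance(i);
	}
	return ret;
}
```

The same shape, inverted, handles resume: resume each instance that was operational, and on failure suspend the ones already brought back up.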
{ "author": "Adrian Hunter <adrian.hunter@intel.com>", "date": "Thu, 29 Jan 2026 20:18:40 +0200", "thread_id": "aYDP847mgleQBF5Y@lizhi-Precision-Tower-5810.mbox.gz" }
On Thu, Jan 29, 2026 at 08:18:35PM +0200, Adrian Hunter wrote: Reviewed-by: Frank Li <Frank.Li@nxp.com>
{ "author": "Frank Li <Frank.li@nxp.com>", "date": "Thu, 29 Jan 2026 14:43:45 -0500", "thread_id": "aYDP847mgleQBF5Y@lizhi-Precision-Tower-5810.mbox.gz" }
On Thu, Jan 29, 2026 at 08:18:37PM +0200, Adrian Hunter wrote: It looks like this can't resolve the problem. pm_runtime_mark_last_busy() just changes dev->power.last_busy. If the suspend happens before it, nothing happens. IRQs use a threaded handler; the IRQ thread can call pm_runtime_resume() if needed. And since this function is called from the IRQ handler, which just queues the IBI to a workqueue, what is the impact of doing nothing here? Frank
{ "author": "Frank Li <Frank.li@nxp.com>", "date": "Thu, 29 Jan 2026 14:56:01 -0500", "thread_id": "aYDP847mgleQBF5Y@lizhi-Precision-Tower-5810.mbox.gz" }
lkml
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
On Thu, Jan 29, 2026 at 08:18:39PM +0200, Adrian Hunter wrote: Does your hardware support receiving IBIs while runtime suspended? Frank
{ "author": "Frank Li <Frank.li@nxp.com>", "date": "Thu, 29 Jan 2026 15:00:14 -0500", "thread_id": "aYDP847mgleQBF5Y@lizhi-Precision-Tower-5810.mbox.gz" }
lkml
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
On 29/01/2026 22:00, Frank Li wrote: When runtime suspended (in D3), the hardware first triggers a Power Management Event (PME) when the SDA line is pulled low to signal the START condition of an IBI. The PCI subsystem will then runtime-resume the device. When the bus is enabled, the clock is started and the IBI is received.
{ "author": "Adrian Hunter <adrian.hunter@intel.com>", "date": "Thu, 29 Jan 2026 22:28:14 +0200", "thread_id": "aYDP847mgleQBF5Y@lizhi-Precision-Tower-5810.mbox.gz" }
lkml
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
On 29/01/2026 21:56, Frank Li wrote: It should be effective. rpm_suspend() recalculates the autosuspend expiry time based on last_busy (see pm_runtime_autosuspend_expiration()) and restarts the timer if it is in the future. Without it, there would just be premature runtime suspension, inconsistent with autosuspend_delay.
{ "author": "Adrian Hunter <adrian.hunter@intel.com>", "date": "Thu, 29 Jan 2026 22:42:32 +0200", "thread_id": "aYDP847mgleQBF5Y@lizhi-Precision-Tower-5810.mbox.gz" }
lkml
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
On Thu, Jan 29, 2026 at 10:42:32PM +0200, Adrian Hunter wrote: CPU 0 CPU 1 1. rpm_suspend() 2. pm_runtime_mark_last_busy(master->rpm_dev) If 2 happens before 1, it can extend the suspend. If 2 happens after 1, it should do nothing. Frank
{ "author": "Frank Li <Frank.li@nxp.com>", "date": "Thu, 29 Jan 2026 15:55:40 -0500", "thread_id": "aYDP847mgleQBF5Y@lizhi-Precision-Tower-5810.mbox.gz" }
lkml
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
On Thu, Jan 29, 2026 at 10:28:14PM +0200, Adrian Hunter wrote: That aligns with my assumption, so why is a complex solution needed? SDA->PME->IRQ should be handled by hardware, so the IRQ handler queues the IBI to a workqueue. The IBI work will try to do the transfer, which will call runtime resume and then transfer the data. What's the issue? Frank
{ "author": "Frank Li <Frank.li@nxp.com>", "date": "Thu, 29 Jan 2026 16:00:20 -0500", "thread_id": "aYDP847mgleQBF5Y@lizhi-Precision-Tower-5810.mbox.gz" }
lkml
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
On 29/01/2026 23:00, Frank Li wrote: The PME indicates I3C START (SDA line pulled low). The controller is in a low power state, unable to operate the bus. At this point it is not known which I3C device has pulled down the SDA line, or even whether it is an IBI, since it is indistinguishable from a hot-join at this point. The PCI PME IRQ is not the device's IRQ. It is handled by acpi_irq(), which ultimately informs the PCI subsystem to wake the PCI device. The PCI subsystem performs pm_request_resume(); refer to pci_acpi_wake_dev(). When the controller is resumed, it enables the I3C bus and the IBI is finally delivered normally. However, none of that is related to this patch. This patch is because the PCI device has 2 I3C bus instances and only 1 PME wakeup. The PME becomes active when the PCI device is put to a low power state. Both I3C bus instances must be runtime suspended then. Similarly, upon resume the PME is no longer active, so both I3C bus instances must make their buses operational - we don't know which may have received an IBI. And there may be further IBIs which can't be received unless the associated bus is operational. The PCI device is no longer in a low power state, so there will be no PME in that case.
{ "author": "Adrian Hunter <adrian.hunter@intel.com>", "date": "Fri, 30 Jan 2026 09:00:33 +0200", "thread_id": "aYDP847mgleQBF5Y@lizhi-Precision-Tower-5810.mbox.gz" }
lkml
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
On 29/01/2026 22:55, Frank Li wrote: 2 happening after 1 is a separate issue. It will never happen in the wakeup case because the wakeup does a runtime resume: pm_runtime_put_autosuspend() IBI -> pm_runtime_mark_last_busy() another IBI -> pm_runtime_mark_last_busy() and so on <autosuspend_delay finally elapses> rpm_suspend() -> device suspended, PME activated IBI START -> PME -> pm_request_resume() IBI is delivered after controller runtime resumes
{ "author": "Adrian Hunter <adrian.hunter@intel.com>", "date": "Fri, 30 Jan 2026 09:48:07 +0200", "thread_id": "aYDP847mgleQBF5Y@lizhi-Precision-Tower-5810.mbox.gz" }
lkml
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
On Fri, Jan 30, 2026 at 09:00:33AM +0200, Adrian Hunter wrote: If instance 1 is suspended and instance 2 is running, PME is inactive. What happens if a device on instance 1 requests an IBI? Will the IBI be missed? Is PME activated automatically by hardware, or does it need software configuration? Frank
{ "author": "Frank Li <Frank.li@nxp.com>", "date": "Fri, 30 Jan 2026 10:04:24 -0500", "thread_id": "aYDP847mgleQBF5Y@lizhi-Precision-Tower-5810.mbox.gz" }
lkml
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
On 30/01/2026 17:04, Frank Li wrote: Nothing will happen. Instance 1's I3C bus is not operational, and there can be no PME when the PCI device is not in a low power state (D3hot). Possibly not, if instance 1 is eventually resumed and the I3C device requesting the IBI has not yet given up. PCI devices (hardware) advertise their PME capability in terms of which states are capable of PMEs. Currently the Intel LPSS I3C device lists only D3hot. The PCI subsystem (software) automatically enables the PME before runtime suspend if the target power state allows it.
{ "author": "Adrian Hunter <adrian.hunter@intel.com>", "date": "Fri, 30 Jan 2026 18:34:37 +0200", "thread_id": "aYDP847mgleQBF5Y@lizhi-Precision-Tower-5810.mbox.gz" }
lkml
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
On Fri, Jan 30, 2026 at 06:34:37PM +0200, Adrian Hunter wrote: Okay, I think I understand your situation. Let me check the patches again. Frank
{ "author": "Frank Li <Frank.li@nxp.com>", "date": "Fri, 30 Jan 2026 12:11:19 -0500", "thread_id": "aYDP847mgleQBF5Y@lizhi-Precision-Tower-5810.mbox.gz" }
lkml
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
On Fri, Jan 30, 2026 at 06:34:37PM +0200, Adrian Hunter wrote: Does your device hierarchy look like this?

     PCI device
         |
   ---------------
   |             |
  HCI1          HCI2
   |             |
 I3C M1        I3C M2

You want HCI1 and HCI2 suspended only when both of them can enter the runtime-suspended state? A device link can tie two devices together, but I am not sure it can handle the cyclic case where HCI1 and HCI2 depend on each other. Alternatively, you could create a common power domain for HCI1 and HCI2 and handle the suspend in the power domain. It would be better to ask the runtime PM maintainers for a suggestion. Frank
{ "author": "Frank Li <Frank.li@nxp.com>", "date": "Mon, 2 Feb 2026 11:25:23 -0500", "thread_id": "aYDP847mgleQBF5Y@lizhi-Precision-Tower-5810.mbox.gz" }
lkml
[PATCH 0/2] KVM: SEV: Add support for IBPB-on-Entry
AMD EPYC 5th generation and above processors support IBPB-on-Entry for SNP guests. By invoking an Indirect Branch Prediction Barrier (IBPB) on VMRUN, old indirect branch predictions are prevented from influencing indirect branches within the guest. The first patch is guest-side support which unmasks the Zen5+ feature bit to allow kernel guests to set the feature. The second patch is host-side support that checks the CPUID and then sets the feature bit in the VMSA supported features mask. Based on https://github.com/kvm-x86/linux kvm-x86/next (kvm-x86-next-2026.01.23, e81f7c908e16). This series also available here: https://github.com/AMDESE/linux/tree/ibpb-on-entry-latest Advance qemu bits (to add ibpb-on-entry=on/off switch) available here: https://github.com/AMDESE/qemu/tree/ibpb-on-entry-latest Qemu bits will be posted upstream once kernel bits are merged. They depend on Naveen Rao's "target/i386: SEV: Add support for enabling VMSA SEV features": https://lore.kernel.org/qemu-devel/cover.1761648149.git.naveen@kernel.org/ Kim Phillips (2): KVM: SEV: IBPB-on-Entry guest support KVM: SEV: Add support for IBPB-on-Entry arch/x86/boot/compressed/sev.c | 1 + arch/x86/coco/sev/core.c | 1 + arch/x86/include/asm/cpufeatures.h | 1 + arch/x86/include/asm/msr-index.h | 5 ++++- arch/x86/include/asm/svm.h | 1 + arch/x86/kvm/svm/sev.c | 9 ++++++++- 6 files changed, 16 insertions(+), 2 deletions(-) base-commit: e81f7c908e1664233974b9f20beead78cde6343a -- 2.43.0
The SEV-SNP IBPB-on-Entry feature does not require a guest-side implementation. The feature was added in Zen5 h/w, after the first SNP Zen implementation, and thus was not accounted for when the initial set of SNP features were added to the kernel. In its abundant precaution, commit 8c29f0165405 ("x86/sev: Add SEV-SNP guest feature negotiation support") included SEV_STATUS' IBPB-on-Entry bit as a reserved bit, thereby masking guests from using the feature. Unmask the bit, to allow guests to take advantage of the feature on hypervisor kernel versions that support it: Amend the SEV_STATUS MSR SNP_RESERVED_MASK to exclude bit 23 (IbpbOnEntry). Fixes: 8c29f0165405 ("x86/sev: Add SEV-SNP guest feature negotiation support") Cc: Nikunj A Dadhania <nikunj@amd.com> Cc: Tom Lendacky <thomas.lendacky@amd.com> CC: Borislav Petkov (AMD) <bp@alien8.de> CC: Michael Roth <michael.roth@amd.com> Cc: stable@kernel.org Signed-off-by: Kim Phillips <kim.phillips@amd.com> --- arch/x86/boot/compressed/sev.c | 1 + arch/x86/coco/sev/core.c | 1 + arch/x86/include/asm/msr-index.h | 5 ++++- 3 files changed, 6 insertions(+), 1 deletion(-) diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c index c8c1464b3a56..2b639703b8dd 100644 --- a/arch/x86/boot/compressed/sev.c +++ b/arch/x86/boot/compressed/sev.c @@ -188,6 +188,7 @@ bool sev_es_check_ghcb_fault(unsigned long address) MSR_AMD64_SNP_RESERVED_BIT13 | \ MSR_AMD64_SNP_RESERVED_BIT15 | \ MSR_AMD64_SNP_SECURE_AVIC | \ + MSR_AMD64_SNP_RESERVED_BITS19_22 | \ MSR_AMD64_SNP_RESERVED_MASK) #ifdef CONFIG_AMD_SECURE_AVIC diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c index 9ae3b11754e6..13f608117411 100644 --- a/arch/x86/coco/sev/core.c +++ b/arch/x86/coco/sev/core.c @@ -122,6 +122,7 @@ static const char * const sev_status_feat_names[] = { [MSR_AMD64_SNP_VMSA_REG_PROT_BIT] = "VMSARegProt", [MSR_AMD64_SNP_SMT_PROT_BIT] = "SMTProt", [MSR_AMD64_SNP_SECURE_AVIC_BIT] = "SecureAVIC", + 
[MSR_AMD64_SNP_IBPB_ON_ENTRY_BIT] = "IBPBOnEntry", }; /* diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h index 4d3566bb1a93..9016a6b00bc7 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -735,7 +735,10 @@ #define MSR_AMD64_SNP_SMT_PROT BIT_ULL(MSR_AMD64_SNP_SMT_PROT_BIT) #define MSR_AMD64_SNP_SECURE_AVIC_BIT 18 #define MSR_AMD64_SNP_SECURE_AVIC BIT_ULL(MSR_AMD64_SNP_SECURE_AVIC_BIT) -#define MSR_AMD64_SNP_RESV_BIT 19 +#define MSR_AMD64_SNP_RESERVED_BITS19_22 GENMASK_ULL(22, 19) +#define MSR_AMD64_SNP_IBPB_ON_ENTRY_BIT 23 +#define MSR_AMD64_SNP_IBPB_ON_ENTRY BIT_ULL(MSR_AMD64_SNP_IBPB_ON_ENTRY_BIT) +#define MSR_AMD64_SNP_RESV_BIT 24 #define MSR_AMD64_SNP_RESERVED_MASK GENMASK_ULL(63, MSR_AMD64_SNP_RESV_BIT) #define MSR_AMD64_SAVIC_CONTROL 0xc0010138 #define MSR_AMD64_SAVIC_EN_BIT 0 -- 2.43.0
{ "author": "Kim Phillips <kim.phillips@amd.com>", "date": "Mon, 26 Jan 2026 16:42:04 -0600", "thread_id": "20260202171223.GBaYDa9z7sKO9q3Q9a@fat_crate.local.mbox.gz" }
lkml
[PATCH 0/2] KVM: SEV: Add support for IBPB-on-Entry
AMD EPYC 5th generation and above processors support IBPB-on-Entry for SNP guests. By invoking an Indirect Branch Prediction Barrier (IBPB) on VMRUN, old indirect branch predictions are prevented from influencing indirect branches within the guest. SNP guests may choose to enable IBPB-on-Entry by setting SEV_FEATURES bit 21 (IbpbOnEntry). Host support for IBPB on Entry is indicated by CPUID Fn8000_001F[IbpbOnEntry], bit 31. If supported, indicate support for IBPB on Entry in sev_supported_vmsa_features bit 23 (IbpbOnEntry). For more info, refer to page 615, Section 15.36.17 "Side-Channel Protection", AMD64 Architecture Programmer's Manual Volume 2: System Programming Part 2, Pub. 24593 Rev. 3.42 - March 2024 (see Link). Link: https://bugzilla.kernel.org/attachment.cgi?id=306250 Signed-off-by: Kim Phillips <kim.phillips@amd.com> --- arch/x86/include/asm/cpufeatures.h | 1 + arch/x86/include/asm/svm.h | 1 + arch/x86/kvm/svm/sev.c | 9 ++++++++- 3 files changed, 10 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index c01fdde465de..3ce5dff36f78 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -459,6 +459,7 @@ #define X86_FEATURE_ALLOWED_SEV_FEATURES (19*32+27) /* Allowed SEV Features */ #define X86_FEATURE_SVSM (19*32+28) /* "svsm" SVSM present */ #define X86_FEATURE_HV_INUSE_WR_ALLOWED (19*32+30) /* Allow Write to in-use hypervisor-owned pages */ +#define X86_FEATURE_IBPB_ON_ENTRY (19*32+31) /* SEV-SNP IBPB on VM Entry */ /* AMD-defined Extended Feature 2 EAX, CPUID level 0x80000021 (EAX), word 20 */ #define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* No Nested Data Breakpoints */ diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h index edde36097ddc..eebc65ec948f 100644 --- a/arch/x86/include/asm/svm.h +++ b/arch/x86/include/asm/svm.h @@ -306,6 +306,7 @@ static_assert((X2AVIC_4K_MAX_PHYSICAL_ID & AVIC_PHYSICAL_MAX_INDEX_MASK) == X2AV #define 
SVM_SEV_FEAT_ALTERNATE_INJECTION BIT(4) #define SVM_SEV_FEAT_DEBUG_SWAP BIT(5) #define SVM_SEV_FEAT_SECURE_TSC BIT(9) +#define SVM_SEV_FEAT_IBPB_ON_ENTRY BIT(21) #define VMCB_ALLOWED_SEV_FEATURES_VALID BIT_ULL(63) diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index ea515cf41168..8a6d25db0c00 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -3165,8 +3165,15 @@ void __init sev_hardware_setup(void) cpu_feature_enabled(X86_FEATURE_NO_NESTED_DATA_BP)) sev_supported_vmsa_features |= SVM_SEV_FEAT_DEBUG_SWAP; - if (sev_snp_enabled && tsc_khz && cpu_feature_enabled(X86_FEATURE_SNP_SECURE_TSC)) + if (!sev_snp_enabled) + return; + /* the following feature bit checks are SNP specific */ + + if (tsc_khz && cpu_feature_enabled(X86_FEATURE_SNP_SECURE_TSC)) sev_supported_vmsa_features |= SVM_SEV_FEAT_SECURE_TSC; + + if (cpu_feature_enabled(X86_FEATURE_IBPB_ON_ENTRY)) + sev_supported_vmsa_features |= SVM_SEV_FEAT_IBPB_ON_ENTRY; } void sev_hardware_unsetup(void) -- 2.43.0
{ "author": "Kim Phillips <kim.phillips@amd.com>", "date": "Mon, 26 Jan 2026 16:42:05 -0600", "thread_id": "20260202171223.GBaYDa9z7sKO9q3Q9a@fat_crate.local.mbox.gz" }
lkml
[PATCH 0/2] KVM: SEV: Add support for IBPB-on-Entry
AMD EPYC 5th generation and above processors support IBPB-on-Entry for SNP guests. By invoking an Indirect Branch Prediction Barrier (IBPB) on VMRUN, old indirect branch predictions are prevented from influencing indirect branches within the guest.

The first patch is guest-side support which unmasks the Zen5+ feature bit to allow kernel guests to set the feature. The second patch is host-side support that checks the CPUID and then sets the feature bit in the VMSA supported features mask.

Based on https://github.com/kvm-x86/linux kvm-x86/next (kvm-x86-next-2026.01.23, e81f7c908e16).

This series is also available here:

https://github.com/AMDESE/linux/tree/ibpb-on-entry-latest

Advance qemu bits (to add ibpb-on-entry=on/off switch) available here:

https://github.com/AMDESE/qemu/tree/ibpb-on-entry-latest

Qemu bits will be posted upstream once kernel bits are merged. They depend on Naveen Rao's "target/i386: SEV: Add support for enabling VMSA SEV features":

https://lore.kernel.org/qemu-devel/cover.1761648149.git.naveen@kernel.org/

Kim Phillips (2):
  KVM: SEV: IBPB-on-Entry guest support
  KVM: SEV: Add support for IBPB-on-Entry

 arch/x86/boot/compressed/sev.c     | 1 +
 arch/x86/coco/sev/core.c           | 1 +
 arch/x86/include/asm/cpufeatures.h | 1 +
 arch/x86/include/asm/msr-index.h   | 5 ++++-
 arch/x86/include/asm/svm.h         | 1 +
 arch/x86/kvm/svm/sev.c             | 9 ++++++++-
 6 files changed, 16 insertions(+), 2 deletions(-)

base-commit: e81f7c908e1664233974b9f20beead78cde6343a
-- 
2.43.0
On 1/27/2026 4:12 AM, Kim Phillips wrote: The subject line should have the prefix "x86/sev" instead of "KVM: SEV". The below subject line would be more appropriate: x86/sev: Allow IBPB-on-Entry feature for SNP guests Apart from the above comments: Reviewed-by: Nikunj A Dadhania <nikunj@amd.com>
{ "author": "\"Nikunj A. Dadhania\" <nikunj@amd.com>", "date": "Tue, 27 Jan 2026 11:49:07 +0530", "thread_id": "20260202171223.GBaYDa9z7sKO9q3Q9a@fat_crate.local.mbox.gz" }
lkml
[PATCH 0/2] KVM: SEV: Add support for IBPB-on-Entry
On 1/27/2026 4:12 AM, Kim Phillips wrote: The early return seems to split up the SNP features unnecessarily. Keeping everything under `if (sev_snp_enabled)` is cleaner IMO - it's clear that these features belong together. Plus, when someone adds the next SNP feature, they won't have to think about whether it goes before or after the return. The comment about "SNP specific" features becomes redundant as well. Regards, Nikunj
{ "author": "\"Nikunj A. Dadhania\" <nikunj@amd.com>", "date": "Tue, 27 Jan 2026 12:08:27 +0530", "thread_id": "20260202171223.GBaYDa9z7sKO9q3Q9a@fat_crate.local.mbox.gz" }
lkml
[PATCH 0/2] KVM: SEV: Add support for IBPB-on-Entry
On 1/27/26 12:38 AM, Nikunj A. Dadhania wrote: Hi Nikunj, The SNP 'togetherness' semantics are maintained whether under an 'if (sev_snp_enabled)' body, or after an 'if (!sev_snp_enabled) return;'. Only SNP-specific things are being done in the trailing part of the function, so it naturally lends itself to doing the early return. It makes it more readable by eliminating the unnecessary indentation created by an 'if (sev_snp_enabled)' body. Meanwhile, I agree with your comments on the first patch in the series. Thanks for your review, Kim
{ "author": "Kim Phillips <kim.phillips@amd.com>", "date": "Tue, 27 Jan 2026 14:56:02 -0600", "thread_id": "20260202171223.GBaYDa9z7sKO9q3Q9a@fat_crate.local.mbox.gz" }
lkml
[PATCH 0/2] KVM: SEV: Add support for IBPB-on-Entry
On 1/26/26 16:42, Kim Phillips wrote: With the change to the subject line... Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
{ "author": "Tom Lendacky <thomas.lendacky@amd.com>", "date": "Wed, 28 Jan 2026 13:02:37 -0600", "thread_id": "20260202171223.GBaYDa9z7sKO9q3Q9a@fat_crate.local.mbox.gz" }
lkml
[PATCH 0/2] KVM: SEV: Add support for IBPB-on-Entry
On 1/26/26 16:42, Kim Phillips wrote: Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
{ "author": "Tom Lendacky <thomas.lendacky@amd.com>", "date": "Wed, 28 Jan 2026 13:08:49 -0600", "thread_id": "20260202171223.GBaYDa9z7sKO9q3Q9a@fat_crate.local.mbox.gz" }
lkml
[PATCH 0/2] KVM: SEV: Add support for IBPB-on-Entry
On Mon, Jan 26, 2026 at 04:42:04PM -0600, Kim Phillips wrote: Do not explain what the patch does. I guess... Why isn't this part of SNP_FEATURES_PRESENT? If this feature doesn't require guest-side support, then it is trivially present, no? I guess this is a fix of sorts and I could take it in now once all review comments have been addressed... -- Regards/Gruss, Boris. https://people.kernel.org/tglx/notes-about-netiquette
{ "author": "Borislav Petkov <bp@alien8.de>", "date": "Wed, 28 Jan 2026 20:23:12 +0100", "thread_id": "20260202171223.GBaYDa9z7sKO9q3Q9a@fat_crate.local.mbox.gz" }
lkml
[PATCH 0/2] KVM: SEV: Add support for IBPB-on-Entry
Hi Boris, On 1/28/26 1:23 PM, Borislav Petkov wrote: For that last paragraph, how about: "Allow guests to make use of IBPB-on-Entry when supported by the hypervisor, as the bit is now architecturally defined and safe to expose." ? Hopefully a bitfield will be carved out for these no-explicit-guest-implementation-required bits by hardware such that we won't need to do this again. SNP_FEATURES_PRESENT is for the non-trivial variety: Its bits get set as part of the patch series that add the explicit guest support *code*. I believe 'features' like PREVENT_HOST_IBS are similar in this regard. Cool, thanks. Kim
{ "author": "Kim Phillips <kim.phillips@amd.com>", "date": "Wed, 28 Jan 2026 18:38:29 -0600", "thread_id": "20260202171223.GBaYDa9z7sKO9q3Q9a@fat_crate.local.mbox.gz" }
lkml
[PATCH 0/2] KVM: SEV: Add support for IBPB-on-Entry
On Wed, Jan 28, 2026 at 06:38:29PM -0600, Kim Phillips wrote: Better. Yes, and I'm asking why can't SNP_FEATURES_PRESENT contain *all* SNP features? -- Regards/Gruss, Boris. https://people.kernel.org/tglx/notes-about-netiquette
{ "author": "Borislav Petkov <bp@alien8.de>", "date": "Thu, 29 Jan 2026 11:51:16 +0100", "thread_id": "20260202171223.GBaYDa9z7sKO9q3Q9a@fat_crate.local.mbox.gz" }
lkml
[PATCH 0/2] KVM: SEV: Add support for IBPB-on-Entry
On 1/29/26 4:51 AM, Borislav Petkov wrote: Not *all* SNP features are implemented in all guest kernel versions, and, well, for those that don't require explicit guest code support, perhaps it's because they aren't necessarily well defined and validated in all hardware versions... Kim
{ "author": "Kim Phillips <kim.phillips@amd.com>", "date": "Thu, 29 Jan 2026 16:32:49 -0600", "thread_id": "20260202171223.GBaYDa9z7sKO9q3Q9a@fat_crate.local.mbox.gz" }
lkml
[PATCH 0/2] KVM: SEV: Add support for IBPB-on-Entry
On Thu, Jan 29, 2026 at 04:32:49PM -0600, Kim Phillips wrote: Ok, can you add *this* feature to SNP_FEATURES_PRESENT? If not, why not? -- Regards/Gruss, Boris. https://people.kernel.org/tglx/notes-about-netiquette
{ "author": "Borislav Petkov <bp@alien8.de>", "date": "Fri, 30 Jan 2026 13:32:52 +0100", "thread_id": "20260202171223.GBaYDa9z7sKO9q3Q9a@fat_crate.local.mbox.gz" }
lkml
[PATCH 0/2] KVM: SEV: Add support for IBPB-on-Entry
On 1/30/26 06:32, Borislav Petkov wrote: It can be added. Any of the features added to SNP_FEATURES_PRESENT that aren't set in the SNP_FEATURES_IMPL_REQ bitmap are really a no-op. The SNP_FEATURES_PRESENT bitmap is meant to contain whatever bits are set in SNP_FEATURES_IMPL_REQ once support has been implemented for the guest. But, yeah, we could add all the bits that aren't set in SNP_FEATURES_IMPL_REQ to SNP_FEATURES_PRESENT if it makes it clearer. If we do that, it should probably be a separate patch (?) that also rewords the comment above SNP_FEATURES_PRESENT. Thanks, Tom
{ "author": "Tom Lendacky <thomas.lendacky@amd.com>", "date": "Fri, 30 Jan 2026 08:56:07 -0600", "thread_id": "20260202171223.GBaYDa9z7sKO9q3Q9a@fat_crate.local.mbox.gz" }
lkml
[PATCH 0/2] KVM: SEV: Add support for IBPB-on-Entry
On Fri, Jan 30, 2026 at 08:56:07AM -0600, Tom Lendacky wrote: Right, that's the question. SNP_FEATURES_PRESENT is used in the masking operation to get the unsupported features. But when we say a SNP feature is present, then, even if it doesn't need guest implementation, that feature is still present nonetheless. So our nomenclature is kinda imprecise here. I'd say, we can always rename SNP_FEATURES_PRESENT to denote what it is there for, i.e., the narrower functionality of the masking. Or, if we want to gather there *all* features that are present, then we can start adding them... ... yes, as a separate patch. Question is, what do we really wanna do here? Does it make sense and is it useful to have SNP_FEATURES_PRESENT contain *all* guest SNP features... Thx. -- Regards/Gruss, Boris. https://people.kernel.org/tglx/notes-about-netiquette
{ "author": "Borislav Petkov <bp@alien8.de>", "date": "Fri, 30 Jan 2026 16:45:34 +0100", "thread_id": "20260202171223.GBaYDa9z7sKO9q3Q9a@fat_crate.local.mbox.gz" }
lkml
[PATCH 0/2] KVM: SEV: Add support for IBPB-on-Entry
On 1/30/26 09:45, Borislav Petkov wrote: I guess it really depends on the person's point of view. I agree that renaming the SNP_FEATURES_PRESENT to SNP_FEATURES_IMPL(EMENTED) would match up nicely with SNP_FEATURES_IMPL_REQ. Maybe that's all that is needed... Thanks, Tom
{ "author": "Tom Lendacky <thomas.lendacky@amd.com>", "date": "Mon, 2 Feb 2026 09:38:50 -0600", "thread_id": "20260202171223.GBaYDa9z7sKO9q3Q9a@fat_crate.local.mbox.gz" }
lkml
[PATCH 0/2] KVM: SEV: Add support for IBPB-on-Entry
On Mon, Feb 02, 2026 at 09:38:50AM -0600, Tom Lendacky wrote: I guess... I still think it would be useful to have a common place that says which things in SEV_STATUS are supported and present in a guest, no? Or are we going to dump that MSR like Joerg's patch from a while ago and that'll tell us what the guest supports? Hmm. -- Regards/Gruss, Boris. https://people.kernel.org/tglx/notes-about-netiquette
{ "author": "Borislav Petkov <bp@alien8.de>", "date": "Mon, 2 Feb 2026 16:49:36 +0100", "thread_id": "20260202171223.GBaYDa9z7sKO9q3Q9a@fat_crate.local.mbox.gz" }
lkml
[PATCH 0/2] KVM: SEV: Add support for IBPB-on-Entry
On 2/2/26 09:49, Borislav Petkov wrote: But I can see that getting stale because it isn't required to be updated for features that don't require an implementation in order for the guest to boot successfully. Whereas SNP_FEATURES_IMPL_REQ is set with known values that require an implementation, plus all the reserved bits. So it takes actual updating to get one of the features represented in that bitmap to work. That will tell us what the guest is running with, not what it can run with. Thanks, Tom
{ "author": "Tom Lendacky <thomas.lendacky@amd.com>", "date": "Mon, 2 Feb 2026 10:09:19 -0600", "thread_id": "20260202171223.GBaYDa9z7sKO9q3Q9a@fat_crate.local.mbox.gz" }
lkml
[PATCH 0/2] KVM: SEV: Add support for IBPB-on-Entry
On Mon, Feb 02, 2026 at 10:09:19AM -0600, Tom Lendacky wrote: Ok, I guess we can rename that define to SNP_FEATURES_IMPL to denote that it is the counterpart of SNP_FEATURES_IMPL_REQ, so to speak. @Kim, you can send a new version with the define renamed. Since it's too close to the merge window, it'll wait until after, and then it can go to stable later, but I don't think that's a problem. hm, ok, let's think about this more then. I don't have a clear use case for a this-is-what-a-SNP-guest-can-run-with bitmap, so let's deal with that later... Thx. -- Regards/Gruss, Boris. https://people.kernel.org/tglx/notes-about-netiquette
{ "author": "Borislav Petkov <bp@alien8.de>", "date": "Mon, 2 Feb 2026 18:12:23 +0100", "thread_id": "20260202171223.GBaYDa9z7sKO9q3Q9a@fat_crate.local.mbox.gz" }
lkml
[PATCH] Cleanup ipu3 driver
Clean up warnings generated by ./scripts/checkpatch.pl regarding the ipu3 driver at /drivers/staging/media/ipu3 More specifically, the following files have been affected: ipu3-css.c, ipu3-mmu.c, ipu3-mmu.h, ipu3-v4l2.c, ipu3.c, ipu3.h Signed-off-by: Bogdan Sandu <bogdanelsandu2011@gmail.com> --- drivers/staging/media/ipu3/ipu3-css.c | 39 ++++++++++++-------------- drivers/staging/media/ipu3/ipu3-mmu.c | 2 +- drivers/staging/media/ipu3/ipu3-mmu.h | 4 ++- drivers/staging/media/ipu3/ipu3-v4l2.c | 11 ++++---- drivers/staging/media/ipu3/ipu3.c | 7 ++--- 5 files changed, 30 insertions(+), 33 deletions(-) diff --git a/drivers/staging/media/ipu3/ipu3-css.c b/drivers/staging/media/ipu3/ipu3-css.c index 777cac1c2..832581547 100644 --- a/drivers/staging/media/ipu3/ipu3-css.c +++ b/drivers/staging/media/ipu3/ipu3-css.c @@ -118,7 +118,8 @@ static const struct { /* Initialize queue based on given format, adjust format as needed */ static int imgu_css_queue_init(struct imgu_css_queue *queue, - struct v4l2_pix_format_mplane *fmt, u32 flags) + struct v4l2_pix_format_mplane *fmt, + u32 flags) { struct v4l2_pix_format_mplane *const f = &queue->fmt.mpix; unsigned int i; @@ -1033,8 +1034,8 @@ static int imgu_css_pipeline_init(struct imgu_css *css, unsigned int pipe) 3 * cfg_dvs->num_horizontal_blocks / 2 * cfg_dvs->num_vertical_blocks) || imgu_css_pool_init(imgu, &css_pipe->pool.obgrid, - imgu_css_fw_obgrid_size( - &css->fwp->binary_header[css_pipe->bindex]))) + imgu_css_fw_obgrid_size + (&css->fwp->binary_header[css_pipe->bindex]))) goto out_of_memory; for (i = 0; i < IMGU_ABI_NUM_MEMORIES; i++) @@ -1225,8 +1226,7 @@ static int imgu_css_binary_setup(struct imgu_css *css, unsigned int pipe) for (j = IMGU_ABI_PARAM_CLASS_CONFIG; j < IMGU_ABI_PARAM_CLASS_NUM; j++) for (i = 0; i < IMGU_ABI_NUM_MEMORIES; i++) { - if (imgu_css_dma_buffer_resize( - imgu, + if (imgu_css_dma_buffer_resize(imgu, &css_pipe->binary_params_cs[j - 1][i], bi->info.isp.sp.mem_initializers.params[j][i].size)) goto 
out_of_memory; @@ -1241,6 +1241,7 @@ static int imgu_css_binary_setup(struct imgu_css *css, unsigned int pipe) css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].height = ALIGN(css_pipe->rect[IPU3_CSS_RECT_BDS].height, IMGU_DVS_BLOCK_H) + 2 * IMGU_GDC_BUF_Y; + h = css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].height; w = ALIGN(css_pipe->rect[IPU3_CSS_RECT_BDS].width, 2 * IPU3_UAPI_ISP_VEC_ELEMS) + 2 * IMGU_GDC_BUF_X; @@ -1248,10 +1249,9 @@ static int imgu_css_binary_setup(struct imgu_css *css, unsigned int pipe) css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].bytesperpixel * w; size = w * h * BYPC + (w / 2) * (h / 2) * BYPC * 2; for (i = 0; i < IPU3_CSS_AUX_FRAMES; i++) - if (imgu_css_dma_buffer_resize( - imgu, - &css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].mem[i], - size)) + if (imgu_css_dma_buffer_resize(imgu, + &css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_REF].mem[i], + size)) goto out_of_memory; /* TNR frames for temporal noise reduction, FRAME_FORMAT_YUV_LINE */ @@ -1269,10 +1269,9 @@ static int imgu_css_binary_setup(struct imgu_css *css, unsigned int pipe) h = css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_TNR].height; size = w * ALIGN(h * 3 / 2 + 3, 2); /* +3 for vf_pp prefetch */ for (i = 0; i < IPU3_CSS_AUX_FRAMES; i++) - if (imgu_css_dma_buffer_resize( - imgu, - &css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_TNR].mem[i], - size)) + if (imgu_css_dma_buffer_resize(imgu, + &css_pipe->aux_frames[IPU3_CSS_AUX_FRAME_TNR].mem[i], + size)) goto out_of_memory; return 0; @@ -2036,7 +2035,7 @@ struct imgu_css_buffer *imgu_css_buf_dequeue(struct imgu_css *css) struct imgu_css_buffer, list); if (queue != b->queue || daddr != css_pipe->abi_buffers - [b->queue][b->queue_pos].daddr) { + [b->queue][b->queue_pos].daddr) { spin_unlock(&css_pipe->qlock); dev_err(css->dev, "dequeued bad buffer 0x%x\n", daddr); return ERR_PTR(-EIO); @@ -2169,7 +2168,7 @@ int imgu_css_set_parameters(struct imgu_css *css, unsigned int pipe, map = imgu_css_pool_last(&css_pipe->pool.acc, 1); /* user acc */ r = 
imgu_css_cfg_acc(css, pipe, use, acc, map->vaddr, - set_params ? &set_params->acc_param : NULL); + set_params ? &set_params->acc_param : NULL); if (r < 0) goto fail; } @@ -2298,13 +2297,11 @@ int imgu_css_set_parameters(struct imgu_css *css, unsigned int pipe, if (obgrid) imgu_css_pool_put(&css_pipe->pool.obgrid); if (vmem0) - imgu_css_pool_put( - &css_pipe->pool.binary_params_p - [IMGU_ABI_MEM_ISP_VMEM0]); + imgu_css_pool_put(&css_pipe->pool.binary_params_p + [IMGU_ABI_MEM_ISP_VMEM0]); if (dmem0) - imgu_css_pool_put( - &css_pipe->pool.binary_params_p - [IMGU_ABI_MEM_ISP_DMEM0]); + imgu_css_pool_put(&css_pipe->pool.binary_params_p + [IMGU_ABI_MEM_ISP_DMEM0]); fail_no_put: return r; diff --git a/drivers/staging/media/ipu3/ipu3-mmu.c b/drivers/staging/media/ipu3/ipu3-mmu.c index cb9bf5fb2..95ce34ad8 100644 --- a/drivers/staging/media/ipu3/ipu3-mmu.c +++ b/drivers/staging/media/ipu3/ipu3-mmu.c @@ -21,7 +21,7 @@ #include "ipu3-mmu.h" #define IPU3_PT_BITS 10 -#define IPU3_PT_PTES (1UL << IPU3_PT_BITS) +#define IPU3_PT_PTES (BIT(IPU3_PT_BITS)) #define IPU3_PT_SIZE (IPU3_PT_PTES << 2) #define IPU3_PT_ORDER (IPU3_PT_SIZE >> PAGE_SHIFT) diff --git a/drivers/staging/media/ipu3/ipu3-mmu.h b/drivers/staging/media/ipu3/ipu3-mmu.h index a5f0bca7e..990482f10 100644 --- a/drivers/staging/media/ipu3/ipu3-mmu.h +++ b/drivers/staging/media/ipu3/ipu3-mmu.h @@ -5,8 +5,10 @@ #ifndef __IPU3_MMU_H #define __IPU3_MMU_H +#include <linux/bitops.h> + #define IPU3_PAGE_SHIFT 12 -#define IPU3_PAGE_SIZE (1UL << IPU3_PAGE_SHIFT) +#define IPU3_PAGE_SIZE (BIT(IPU3_PAGE_SHIFT)) /** * struct imgu_mmu_info - Describes mmu geometry diff --git a/drivers/staging/media/ipu3/ipu3-v4l2.c b/drivers/staging/media/ipu3/ipu3-v4l2.c index 2f6041d34..8ebfcddab 100644 --- a/drivers/staging/media/ipu3/ipu3-v4l2.c +++ b/drivers/staging/media/ipu3/ipu3-v4l2.c @@ -245,9 +245,9 @@ static int imgu_subdev_set_selection(struct v4l2_subdev *sd, struct v4l2_rect *rect; dev_dbg(&imgu->pci_dev->dev, - "set subdev %u sel which 
%u target 0x%4x rect [%ux%u]", - imgu_sd->pipe, sel->which, sel->target, - sel->r.width, sel->r.height); + "set subdev %u sel which %u target 0x%4x rect [%ux%u]", + imgu_sd->pipe, sel->which, sel->target, + sel->r.width, sel->r.height); if (sel->pad != IMGU_NODE_IN) return -EINVAL; @@ -288,7 +288,7 @@ static int imgu_link_setup(struct media_entity *entity, WARN_ON(pad >= IMGU_NODE_NUM); dev_dbg(&imgu->pci_dev->dev, "pipe %u pad %u is %s", pipe, pad, - str_enabled_disabled(flags & MEDIA_LNK_FL_ENABLED)); + str_enabled_disabled(flags & MEDIA_LNK_FL_ENABLED)); imgu_pipe = &imgu->imgu_pipe[pipe]; imgu_pipe->nodes[pad].enabled = flags & MEDIA_LNK_FL_ENABLED; @@ -303,7 +303,7 @@ static int imgu_link_setup(struct media_entity *entity, __clear_bit(pipe, imgu->css.enabled_pipes); dev_dbg(&imgu->pci_dev->dev, "pipe %u is %s", pipe, - str_enabled_disabled(flags & MEDIA_LNK_FL_ENABLED)); + str_enabled_disabled(flags & MEDIA_LNK_FL_ENABLED)); return 0; } @@ -750,7 +750,6 @@ static int imgu_fmt(struct imgu_device *imgu, unsigned int pipe, int node, } else { fmts[i] = &imgu_pipe->nodes[inode].vdev_fmt.fmt.pix_mp; } - } if (!try) { diff --git a/drivers/staging/media/ipu3/ipu3.c b/drivers/staging/media/ipu3/ipu3.c index bdf5a4577..fe343d368 100644 --- a/drivers/staging/media/ipu3/ipu3.c +++ b/drivers/staging/media/ipu3/ipu3.c @@ -151,7 +151,7 @@ static int imgu_dummybufs_init(struct imgu_device *imgu, unsigned int pipe) /* May be called from atomic context */ static struct imgu_css_buffer *imgu_dummybufs_get(struct imgu_device *imgu, - int queue, unsigned int pipe) + int queue, unsigned int pipe) { unsigned int i; struct imgu_media_pipe *imgu_pipe = &imgu->imgu_pipe[pipe]; @@ -556,8 +556,7 @@ static irqreturn_t imgu_isr_threaded(int irq, void *imgu_ptr) buf->vid_buf.vbb.vb2_buf.timestamp = ns; buf->vid_buf.vbb.field = V4L2_FIELD_NONE; buf->vid_buf.vbb.sequence = - atomic_inc_return( - &imgu_pipe->nodes[node].sequence); + atomic_inc_return(&imgu_pipe->nodes[node].sequence); 
dev_dbg(&imgu->pci_dev->dev, "vb2 buffer sequence %d", buf->vid_buf.vbb.sequence); } @@ -774,7 +773,7 @@ static int __maybe_unused imgu_suspend(struct device *dev) synchronize_irq(pci_dev->irq); /* Wait until all buffers in CSS are done. */ if (!wait_event_timeout(imgu->buf_drain_wq, - imgu_css_queue_empty(&imgu->css), msecs_to_jiffies(1000))) + imgu_css_queue_empty(&imgu->css), msecs_to_jiffies(1000))) dev_err(dev, "wait buffer drain timeout.\n"); imgu_css_stop_streaming(&imgu->css); -- 2.51.0
On Mon, Feb 02, 2026 at 12:03:11PM +0200, Bogdan Sandu wrote: Was this an AI generated patch? Either way, it needs to be properly broken up into "one logical change per patch" like all others. thanks, greg k-h
{ "author": "Greg KH <gregkh@linuxfoundation.org>", "date": "Mon, 2 Feb 2026 11:14:26 +0100", "thread_id": "2026020258-very-numbly-b36b@gregkh.mbox.gz" }
lkml
[PATCH] Cleanup ipu3 driver
I can assure you, it is not AI-generated. Regarding breaking it up into "one logical change per patch" like all others: understood. I'll resend it afterwards. Thank you for your patience.
{ "author": "Bogdan Sandu <bogdanelsandu2011@gmail.com>", "date": "Mon, 2 Feb 2026 12:18:43 +0200", "thread_id": "2026020258-very-numbly-b36b@gregkh.mbox.gz" }