id stringlengths 10 10 | number int64 1 25.6k | forum stringlengths 10 10 | title stringlengths 5 214 | abstract stringlengths 26 4.31k | content_TLDR stringlengths 1 250 ⌀ | content_keywords stringlengths 6 1.02k | content_pdf stringlengths 49 49 | content_primary_area stringclasses 21 values | content_supplementary_material stringlengths 56 56 ⌀ | signatures stringlengths 47 51 |
|---|---|---|---|---|---|---|---|---|---|---|
7QaXJE5nfU | 25,054 | 7QaXJE5nfU | SupCL-GSS: Supervised Contrastive Learning with Guided Sample Selection | We present Supervised Contrastive Learning with Guided Sample Selection (SupCL-GSS), that leverages data maps to construct "hard" positives and "hard" negatives for text classification on pre-trained language models. In our method, we first measure training dynamics to identify the learning difficulty of each training ... | SupCL-GSS guides supervised contrastive learning with data-map–based difficulty to form hard positives/negatives by label and difficulty, improving accuracy and calibration (lower ECE) across diverse in-/out-of-domain NLP tasks. | ['Supervised Contrastive Learning', 'Hard Negatives', 'Model Calibration'] | /pdf/9a86a465d5d34721067f18a5e7fc43126aa691bf.pdf | unsupervised, self-supervised, semi-supervised, and supervised representation learning | null | ['ICLR.cc/2026/Conference/Submission25054/Authors'] |
3yKCsXUso2 | 25,053 | 3yKCsXUso2 | StoRM: Stochastic Region Mixup | A number of data-augmentation strategies have been proposed to alleviate problems such as over-fitting, distribution shifts, and adversarial attacks in deep neural networks. A growing body of literature has investigated computationally expensive techniques like inclusion of saliency cues, diffusion processes or even fr... | null | ['Mixup', 'Data augmentation', 'Vicinal Risk Minimization'] | /pdf/4913903037308bd4cdf1decc613094744db5f872.pdf | other topics in machine learning (i.e., none of the above) | /attachment/a551e75e2e38ddf57c75ec040725095715ba9f1a.pdf | ['ICLR.cc/2026/Conference/Submission25053/Authors'] |
bEejbORUI5 | 25,052 | bEejbORUI5 | ExPO-HM: Learning to Explain-then-Detect for Hateful Meme Detection | Hateful memes have emerged as a particularly challenging form of online abuse, motivating the development of automated detection systems. Most prior approaches rely on direct detection, producing only binary predictions. Such models fail to provide the context and explanations that real-world moderation requires. Recen... | null | ['Hateful Meme Detection'] | /pdf/90e3a793aea9fd2b6a906314e91e8de346b5806e.pdf | applications to computer vision, audio, language, and other modalities | null | ['ICLR.cc/2026/Conference/Submission25052/Authors'] |
mbu8EEnp3a | 25,050 | mbu8EEnp3a | Do LLMs Signal When They’re Right? Evidence from Neuron Agreement | Large language models (LLMs) commonly boost reasoning via sample-evaluate-ensemble decoders (e.g., majority voting), achieving label free gains without ground truth. However, prevailing strategies score candidates using only external outputs such as token probabilities, entropies, or self evaluations, and these signals... | null | ['Neuron-Agreement Decoding (NAD); Neuron activation patterns; Unsupervised answer selection; Chain-of-thought ensembling; Token efficiency'] | /pdf/b4188ed87d8238e926d1ecc665363ba8ac4330b3.pdf | interpretability and explainable AI | null | ['ICLR.cc/2026/Conference/Submission25050/Authors'] |
kByN4v0M3e | 25,049 | kByN4v0M3e | Recurrent Action Transformer with Memory | Transformers have become increasingly popular in offline reinforcement learning (RL) due to their ability to treat agent trajectories as sequences, reframing policy learning as a sequence modeling task. However, in partially observable environments (POMDPs), effective decision-making depends on retaining information ab... | The paper proposes Recurrent Action Transformer with Memory - a transformer model with recurrent memory and a procedure for training it for memory-intensive environments in an Offline RL setting. | ['RL', 'Offline RL', 'Memory', 'Transformers', 'POMDP'] | /pdf/f12dedfe8e5f78bdbcafe02b45269bd395a16e7d.pdf | reinforcement learning | /attachment/73f84c75ae7e38b617ea2aa219acb4de58b82d54.zip | ['ICLR.cc/2026/Conference/Submission25049/Authors'] |
M9DSMVEqrq | 25,048 | M9DSMVEqrq | Chemical Priors at Scale: Efficient Foundation Models without Big Corpora | We achieve competitive molecular property prediction using up to two orders of magnitude fewer pretraining molecules by replacing generic masked language modeling with chemically-informed, task-conditioned self-supervision. Our **C**hemically **I**nformed **L**anguage **T**ransformer (**CILT**) learns from 300+ programm... | null | ['Molecular language modeling', 'chemically-informed self-supervision', 'scientific foundation models'] | /pdf/329c0833732c091fb8b1376c3a2ed726dcafec3f.pdf | applications to physical sciences (physics, chemistry, biology, etc.) | null | ['ICLR.cc/2026/Conference/Submission25048/Authors'] |
v0QOVSVPtq | 25,047 | v0QOVSVPtq | Exploring Diverse Generation Paths via Inference-time Stiefel Activation Steering | Language models often default to a narrow set of high-probability outputs, leaving their generation paths homogeneous and prone to mode collapse. Sampling-based strategies inject randomness but still struggle to guarantee diversity across multiple concurrent generation runs. We address this limitation by introducing ST... | null | ['activation steering', 'generation diversity', 'manifold optimization'] | /pdf/4eb957af1cb78f43d0cbbcd5abb382f237f5c400.pdf | optimization | null | ['ICLR.cc/2026/Conference/Submission25047/Authors'] |
KN2RD4fpnH | 25,046 | KN2RD4fpnH | Geometry of Nash Mirror Dynamics: Adaptive $\beta$-Control for Stable and Bias-Robust Self-Improving LLM Agents | Self‑improving agents learn by playing competitive, often non-transitive language games (e.g., generator–solver, proposer–verifier) where training can oscillate or drift toward undesirable behaviours. We study this scenario through the lens of reverse‑KL regularised Nash learning, showing how the regularisation strengt... | null | ['Large Language Models', 'Learning in Games'] | /pdf/e74141d9139731e72e9f048ce45bed1c0775760f.pdf | foundation or frontier models, including LLMs | null | ['ICLR.cc/2026/Conference/Submission25046/Authors'] |
kHqt0ZSbKT | 25,045 | kHqt0ZSbKT | Random Controlled Differential Equations | We introduce a training-efficient framework for time-series learning that combines random features with controlled differential equations (CDEs). In this approach, large randomly parameterized CDEs act as continuous-time reservoirs, mapping input paths to rich representations. Only a linear readout layer is trained, re... | null | ['random features', 'time-series', 'path signatures', 'CDEs', 'RDEs', 'reservoir computing', 'kernels'] | /pdf/c6f10a55b7e848fe113a2498139d97f18f6b691f.pdf | learning on time series and dynamical systems | null | ['ICLR.cc/2026/Conference/Submission25045/Authors'] |
wWkyL8D9xd | 25,044 | wWkyL8D9xd | FastFlow: Accelerating The Generative Flow Matching Models with Bandit Inference | Flow-matching models deliver state-of-the-art fidelity in image and video generation, but the inherent sequential denoising process renders them slower. Existing acceleration methods like distillation, trajectory truncation, and consistency approaches are static, require retraining, and often fail to generalize across ... | Adaptive inference method for accelerating flow matching based visual generation. | ['generative modelling', 'faster inference.'] | /pdf/7ba33f4a01b10cedfd0eb078d58d749ac2ca7924.pdf | generative models | null | ['ICLR.cc/2026/Conference/Submission25044/Authors'] |
Q3t0QBVFSG | 25,042 | Q3t0QBVFSG | Not Just a Flash in Time: Interpreting Long Event Streams through Language | Event cameras operate asynchronously with microsecond-level temporal precision and generate sparse event streams, enabling low-latency visual perception under high dynamic range conditions. However, current multimodal large language models (MLLMs) remain suboptimal when handling such data: they either fail to effective... | We use spatiotemporal compression and two‑stage cross‑modal optimization to condense long event streams, and we build a novel event–text dataset and multi‑task benchmark, boosting descriptive accuracy and semantic understanding. | ['event', 'multimodal learning', 'long sequence', 'language and vision'] | /pdf/d4be2f033e08306ffcebd24eb4d017c3b5370a99.pdf | applications to computer vision, audio, language, and other modalities | /attachment/d970829147853035c5ffaf6b2840905d570003b5.zip | ['ICLR.cc/2026/Conference/Submission25042/Authors'] |
JTnzojFUz7 | 25,040 | JTnzojFUz7 | Mask What Matters: Controllable Text-Guided Masking for Self-Supervised Medical Image Analysis | The scarcity of annotated data in specialized domains such as medical imaging presents significant challenges to training robust vision models. While self-supervised masked image modeling (MIM) offers a promising solution, existing approaches largely rely on random high-ratio masking, leading to inefficiency and poor s... | null | ['Medical Image Analysis', 'Self-Supervised Learning', 'Vision-Language Models'] | /pdf/64f4c681ae020807e699a88ea2cc85306159cb3b.pdf | unsupervised, self-supervised, semi-supervised, and supervised representation learning | /attachment/87144eebdebc650ed9a538b39faafe12d7c435ed.zip | ['ICLR.cc/2026/Conference/Submission25040/Authors'] |
Eaf5emUUd6 | 25,039 | Eaf5emUUd6 | Towards Understanding Feature Learning in Parameter Transfer | Parameter transfer is a central paradigm in transfer learning, enabling knowledge reuse across tasks and domains by sharing model parameters between upstream and downstream models. However, when only a subset of parameters from the upstream model is transferred to the downstream model, there remains a lack of theoretic... | null | ['Parameter transfer', 'feature learning theory', 'transfer learning', 'negative transfer'] | /pdf/bf06465af47d7a5fef3c7c4c8371dc94bc674103.pdf | transfer learning, meta learning, and lifelong learning | null | ['ICLR.cc/2026/Conference/Submission25039/Authors'] |
kTMrlXV1my | 25,038 | kTMrlXV1my | Style Decomposition and Content Preservation for Artistic Style Transfer | Artistic style transfer is a crucial task that aims to transfer the artistic style of a style image to a content image, generating a new image with preserved content and a distinct style. With the advancement of image generation methods, significant progress has been made in artistic style transfer. However, the existi... | null | ['Style Transfer', 'Decomposing', 'Diffusion'] | /pdf/9542a3238556d8ef077a48564b583d9cfd954c77.pdf | applications to computer vision, audio, language, and other modalities | null | ['ICLR.cc/2026/Conference/Submission25038/Authors'] |
e0qqNM7GtY | 25,037 | e0qqNM7GtY | A Theory of Training Parameter-Shared Quantum Neural Networks from a Bayesian Perspective | The objective function landscape of Quantum Neural Networks (QNNs) is both numerically and theoretically demonstrated to be highly non-convex, exhibiting numerous local optima. This raises an important question regarding the efficiency of training QNNs: can the optimization error systematically converge to a target thr... | We rigorously provide the network depth at which parameter-shared quantum neural networks can be trained efficiently, resolving a long-standing open question. | ['Quantum Neural Network', 'Trainability', 'Bayesian Optimization', 'Parameter-Shared', 'Random Matrix Theory'] | /pdf/52193524732380f300a8482ab378d2ce3162132d.pdf | optimization | /attachment/648f51ff0a5baab0a4ecb1026f921438fb5d03b9.pdf | ['ICLR.cc/2026/Conference/Submission25037/Authors'] |
B2Neq64sm6 | 25,036 | B2Neq64sm6 | Direct Advantage Estimation for Scalable and Sample-efficient Deep Reinforcement Learning | Direct Advantage Estimation (DAE) has been shown to improve the sample efficiency of deep reinforcement learning. However, its reliance on full environment observability limits applicability in realistic settings. In the present work, we (i) extend DAE to partially observable domains with minimal modifications, and (ii... | null | ['deep reinforcement learning', 'advantage estimation', 'arcade learning environment'] | /pdf/1f0e495161e9cae6d759f304564595956a54dc5a.pdf | reinforcement learning | /attachment/928848115ee172d80d03b2b51635a16413f948ea.zip | ['ICLR.cc/2026/Conference/Submission25036/Authors'] |
XQlcvkzMuv | 25,035 | XQlcvkzMuv | Split, Not Spilled: Practical Obfuscation-Based Privacy-Preserving Split Learning | Split Learning (SL) partitions a deep neural network between client and server, enabling collaborative training while reducing the client’s computational load. However, it has been shown that the intermediate activations (“smashed data”) of the client’s model, shared with the server, leak sensitive information. Existin... | null | ['Split Learning', 'Discrete Periodic Transform', 'Collaborative Framework'] | /pdf/ab2d19fef5a7a9b775c93f536b35421fb6223cd9.pdf | alignment, fairness, safety, privacy, and societal considerations | null | ['ICLR.cc/2026/Conference/Submission25035/Authors'] |
YkfhTzq3hL | 25,034 | YkfhTzq3hL | Hallucination Benchmark for Speech Foundation Models | Hallucinations in automatic speech recognition (ASR) systems refer to fluent and coherent transcriptions produced by neural ASR models that are completely unrelated to the underlying acoustic input (i.e., the speech signal). While similar to conventional decoding errors in potentially compromising the usability of tran... | This paper introduces a framework that categorizes ASR hallucinations into 4 categories, namely lexical, phonetic, morphological, and semantic hallucinations, to provide more detailed error analysis beyond standard WER. | ['Hallucination', 'Automatic Speech Recognition', 'SpeechLLM', 'Speech Foundation Model', 'Benchmark'] | /pdf/eff6b9578c8a7d76e0306e9b6ccfca58dc401dc3.pdf | applications to computer vision, audio, language, and other modalities | /attachment/be05564f42cfec91179eb82a0f1a144ea003a266.zip | ['ICLR.cc/2026/Conference/Submission25034/Authors'] |
zu18YgtWfK | 25,033 | zu18YgtWfK | Efficient Formulation and Quantum Optimization of Combinatorial Problems through Parametrized Hamiltonians | Combinatorial optimization problems (COPs) represent a promising application domain for quantum computing, yet current quantum optimization approaches treat each problem instance independently, requiring expensive re-optimization for every configuration. In this paper we propose a different paradigm inspired by quantum... | We introduce parameterized Hamiltonians as a framework for combinatorial optimization, enabling new problem types and efficient global optimization via QAOA with implicit differentiation. | ['Quantum optimization', 'Parameterized Hamiltonians', 'Combinatorial optimization', 'Implicit differentiation', 'Quantum Approximate Optimization Algorithm (QAOA)', 'Optimization', 'Physics-inspired machine learning', 'Software frameworks for quantum optimization'] | /pdf/368b55da76837aed33085015f0c7257887ac4373.pdf | optimization | null | ['ICLR.cc/2026/Conference/Submission25033/Authors'] |
kFHg8YIi2M | 25,031 | kFHg8YIi2M | Certifying Graph Neural Networks Against Label and Structure Poisoning | Robust machine learning for graph-structured data has made significant progress against test-time attacks, yet certified robustness to poisoning – where adversaries manipulate the training data – remains largely underexplored. For image data, state-of-the-art poisoning certificates rely on partitioning-and-aggregation ... | We make certifying robustness in graph learning against node-label and structure poisoning work. | ['graph learning', 'robustness', 'robustness certification', 'graph machine learning', 'poisoning', 'provable robustness', 'self-training', 'semi-supervised learning', 'graph neural networks'] | /pdf/70e778bf19ac4c5e01c81dc3fed70d8503fa2862.pdf | learning on graphs and other geometries & topologies | null | ['ICLR.cc/2026/Conference/Submission25031/Authors'] |
9cLPurIZMj | 25,030 | 9cLPurIZMj | Memory, Benchmark & Robots: A Benchmark for Solving Complex Tasks with Reinforcement Learning | Memory is crucial for enabling agents to tackle complex tasks with temporal and spatial dependencies. While many reinforcement learning (RL) algorithms incorporate memory, the field lacks a universal benchmark to assess an agent's memory capabilities across diverse scenarios. This gap is particularly evident in tableto... | A benchmark of 32 memory tasks for tabletop robotic manipulation that tests an RL agent's memory, together with a classification of memory tasks in RL by type of memory usage | ['Memory', 'Benchmark', 'Robots', 'POMDP', 'RL'] | /pdf/56108186c8f3b0b6cfd9080baf5c9db77e5f287c.pdf | applications to robotics, autonomy, planning | /attachment/6d7a5c8d8e2608d8e613e3f3ed5b74c8f6c58e29.zip | ['ICLR.cc/2026/Conference/Submission25030/Authors'] |
JrZMFC6Jgo | 25,027 | JrZMFC6Jgo | THE BLACK–WHITE-BOX OPTIMIZATION NETWORK | We introduce a \textit{Black--White-Box Optimization Network} and its first instance, \textit{Tensor-Train Creator (TTC)}, which couples Ising-style solves, a factorization-machine surrogate, and tensor-train (PROTES) search. Typed couplings, lattice realignment, and warm starts cut oracle calls and time-to-target. On ... | In this paper, we introduce TTC - a derivative-free optimization framework that couples HOFM surrogates, Ising solvers, and Tensor-Train. | ['Derivative-free optimization', 'Combinatorial optimization', 'Higher-order energy', 'HUBO', 'QUBO', 'Higher-Order Factorization Machines (HOFM)', 'Ising seeding', 'Tensor-Train (TT)'] | /pdf/519521e3b849c26ffef02618bec007f1207bbee6.pdf | optimization | null | ['ICLR.cc/2026/Conference/Submission25027/Authors'] |
HfiNG4QCFs | 25,026 | HfiNG4QCFs | Asymmetric Effects of Self-Corrective Learning on Chain-of-Thought Reasoning for Efficient Policy Adaptation | Recent advances in language model (LM)-powered agents have demonstrated the potential to tackle complex embodied tasks by grounding the models’ commonsense world knowledge in the interactive physical environments in which the agents operate. However, these LM-based agents' adaptation to a stream of diverse tasks over t... | null | ['embodied agent', 'task adaptation'] | /pdf/a14d24a6b85dad429d9b69e79458fe35273f3e86.pdf | applications to robotics, autonomy, planning | /attachment/a6ed70894f881c08db1a963b2bea73e44e40fb78.zip | ['ICLR.cc/2026/Conference/Submission25026/Authors'] |
ebbVFo9r4B | 25,025 | ebbVFo9r4B | Object-level self-distillation with bounding-box weak supervision improves vision pretraining | Self-distillation has become a central paradigm for pretraining vision transformers (ViTs). Existing approaches typically operate at the image level and assume that different augmentations of the same image preserve semantic content to be distilled. This premise breaks down in complex scenes with multiple objects with ... | We propose a weakly-supervised pretraining approach for vision foundation models that shifts the self-distillation granularity from whole images to individual objects. | ['object-centric learning', 'vision pretraining', 'weakly-supervised learning'] | /pdf/7d05fa7d57229263ed8f47758c1d1886ffe36648.pdf | unsupervised, self-supervised, semi-supervised, and supervised representation learning | null | ['ICLR.cc/2026/Conference/Submission25025/Authors'] |
ijhhFHvWS6 | 25,024 | ijhhFHvWS6 | Lost in Real-World Scenarios: Concretization Disrupts LLM Logical Reasoning | Although large reasoning models have attracted significant attention, recent studies reveal that even minor variations in input formulation can lead to substantial inconsistencies in reasoning outcomes, underscoring their fragility in real-world scenarios. To systematically investigate this issue, we propose a concreti... | null | ['Large Language Models', 'Reasoning Robustness', 'Input Formulation', 'Logical Reasoning'] | /pdf/46f20600bdcefda4791216bfdb747705c8a6ce5f.pdf | foundation or frontier models, including LLMs | /attachment/71262cf908b5af022ca61e59e3dfebaf0796cca8.zip | ['ICLR.cc/2026/Conference/Submission25024/Authors'] |
n9m13pabbk | 25,021 | n9m13pabbk | FraIR: Fourier Recomposition Adapter for Image Restoration | Restoring high-quality images from degraded inputs is a core challenge in computer vision, especially under diverse or compound distortions. While large-scale all-in-one models offer strong performance, they are computationally expensive and poorly generalize to unseen degradations. Parameter-Efficient Transfer Learnin... | FraIR introduces a Fourier-domain, degradation-aware adapter for efficient transfer learning in image restoration, achieving state-of-the-art performance with minimal parameter overhead and zero inference cost. | ['fourier adapter', 'parameter-efficient transfer learning', 'image restoration', 'degradation-aware gating', 'spectral modulation'] | /pdf/f79d3229bcf905db3112c768e4bee65b3d2ec773.pdf | transfer learning, meta learning, and lifelong learning | null | ['ICLR.cc/2026/Conference/Submission25021/Authors'] |
TqG3g75Ni6 | 25,019 | TqG3g75Ni6 | Prototypical Knowledge Transfer for Multi-Scenario Recommendation with Optimal Transport | Modern APPs often need to provide personalized recommendations across various scenarios, such as the homepage, local page, and live stream on TikTok. User behaviors in these scenarios differ, resulting in diverse data distributions. To effectively handle varying distributions and enhance performance, current Multi-Scen... | null | ['Multi-Scenario Recommendation', 'Optimal Transport'] | /pdf/00a89dacb3c68220f55f2fdea1c80e062e154b51.pdf | unsupervised, self-supervised, semi-supervised, and supervised representation learning | null | ['ICLR.cc/2026/Conference/Submission25019/Authors'] |
ubAlIOmDoy | 25,017 | ubAlIOmDoy | Finding the Thread: Context-Driven Incremental Compression for Multi-Turn Dialogue | Modern conversational agents condition on an ever-growing dialogue history at each turn, incurring redundant re-encoding and attention costs that grow with conversation length. To enhance the efficiency, naive truncation or summarization degrades fidelity, and existing context compressors lack mechanisms for cross-turn... | null | ['multi-turn dialogue', 'context compression'] | /pdf/21051c71061a2f7b76250673f52b2a917289cc5b.pdf | applications to computer vision, audio, language, and other modalities | null | ['ICLR.cc/2026/Conference/Submission25017/Authors'] |
lJKdOYFF5W | 25,014 | lJKdOYFF5W | Unraveling the Complexity of Memory in RL Agents: an Approach for Classification and Evaluation | The incorporation of memory into agents is essential for numerous tasks within the domain of Reinforcement Learning (RL). In particular, memory is paramount for tasks that require the use of past information, adaptation to novel environments, and improved sample efficiency. However, the term ``memory'' encompasses a wi... | A formal description of the memory types of RL agents and a methodology for conducting an experiment to test the memory. | ['RL', 'POMDP', 'Memory', 'Classification'] | /pdf/58c2952410f906b51ed5d2733c5c7a0efe11d332.pdf | reinforcement learning | /attachment/ce7aee9440596a38fc87925eb4e933b0fca3a5d6.zip | ['ICLR.cc/2026/Conference/Submission25014/Authors'] |
PNPF7W6s8n | 25,013 | PNPF7W6s8n | Active Learning for Molecular Conformation Optimization with a Domain-Agnostic Neural Surrogate Oracle | Molecular conformation optimization is crucial to computer-aided drug discovery and materials design, yet conventional force-based minimization with physics oracles (e.g., DFT) is prohibitively expensive. Neural network potentials (NNPs) are capable of accelerating this process but typically require large quantum chemi... | We propose a data-efficient active learning framework for conformational energy minimization with neural network potentials and a domain-agnostic trainable neural surrogate oracle | ['energy minimization', 'conformational optimization', 'geometry optimization', 'graph neural networks', 'neural network potentials', 'active learning'] | /pdf/22432587a7ab04611af71a639b659f90f1ef320b.pdf | applications to physical sciences (physics, chemistry, biology, etc.) | null | ['ICLR.cc/2026/Conference/Submission25013/Authors'] |
Zyy2wbKd8h | 25,012 | Zyy2wbKd8h | VIPO-R1: Cultivating Video Reasoning in MLLMs via Verifier-Guided Iterative Policy Optimization | Applying Reinforcement Learning (RL) to Multimodal Large Language Models (MLLMs) shows significant promise for complex video reasoning. However, popular Reinforcement Fine-Tuning (RFT) methods, such as outcome-based Group Relative Policy Optimization (GRPO), are limited by data preparation bottlenecks (e.g., noise or h... | null | ['video understanding', 'video question answering'] | /pdf/6430a59edddcb9b8d53ba3e7b55fa2694a5f8e01.pdf | applications to computer vision, audio, language, and other modalities | null | ['ICLR.cc/2026/Conference/Submission25012/Authors'] |
zNT60EJgO9 | 25,011 | zNT60EJgO9 | ERS*: A Bounded, Attribution-Agnostic Metric for Explainable Robustness in Image Recognition | Deep vision models can remain accurate under perturbations while shifting their internal reasoning, which is risky for safety-critical use. We introduce ERS*, a bounded metric (in [0,1]) for explainable robustness that jointly scores (i) normalized performance degradation and (ii) explanation stability between clean an... | ERS* is a bounded, attribution-agnostic metric that combines performance degradation and explanation stability to expose when vision models and ensembles stay accurate but reason inconsistently under real-world perturbations. | ['Explainable Robustness Score', 'attribution stability', 'saliency maps', 'Grad-CAM', 'EigenCAM', 'attention rollout', 'LRP', 'RISE', 'ensemble attribution', 'Vision Transformer', 'Swin Transformer', 'ResNet-50', 'traffic sign recognition', 'physical perturbations', 'natural corruptions', 'CIFAR-C', 'ImageNet-C', 'aut... | /pdf/a0349cf5eadf4f470ad01d86b9b1a74610045a38.pdf | interpretability and explainable AI | null | ['ICLR.cc/2026/Conference/Submission25011/Authors'] |
N8L7NEARq2 | 25,010 | N8L7NEARq2 | ContinualCropBank: Object-Level Replay for Semi-Supervised Online Continual Object Detection | Deep learning has achieved remarkable progress in object detection, but most advances rely on static, fully labeled datasets$\textemdash$an unrealistic assumption in dynamic, real-world environments. Continual Learning (CL) aims to overcome this limitation by enabling models to acquire new knowledge without forgetting ... | We address the problem of label-efficient online continual object detection by introducing ContinualCropBank, an object-level replay module that mitigates catastrophic forgetting while improving detection performance under limited supervision. | ['Continual Learning', 'Semi-Supervised Learning', 'Object Detection', 'Online Continual Learning'] | /pdf/70f04f3a5c972abf5d5db35a3bcf9b1345933320.pdf | transfer learning, meta learning, and lifelong learning | /attachment/fdbe81d76a56d78f36e5d05105d0c1d01d871774.zip | ['ICLR.cc/2026/Conference/Submission25010/Authors'] |
5WecBhuCyF | 25,006 | 5WecBhuCyF | Curing the Transitivity Curse: Shortcut Logical Reasoning via A Priori Knowledge Compilation | While large language models (LLMs) have shown remarkable reasoning abilities, they often fail at multi-hop logical reasoning tasks that require chaining inferences, struggling to deduce transitive relations like $(P \to R)$ from $(P \to Q) \land (Q \to R)$. This fundamental limitation, which we term the \textbf{``Trans... | Our work introduces a mechanism that performs A Priori Knowledge Compilation—proactively deriving foundational facts and composing powerful new rules—to enable robust Shortcut Reasoning and cure the Transitivity Curse in LLMs. | ['Logical Reasoning', 'Large Language Models', 'Knowledge Compilation'] | /pdf/8e3f771644e66caffb1439f4806c9d9c2f8d2f98.pdf | neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.) | null | ['ICLR.cc/2026/Conference/Submission25006/Authors'] |
USEpVtH8qV | 25,005 | USEpVtH8qV | JIONE: an Approach for Merging Large Language Models via Teacher–Student Prediction Refinement | Large Language Models (LLMs) have demonstrated remarkable capabilities across reasoning, problem-solving, and natural language understanding tasks such as text classification and multiple-choice question answering. However, relying on a single LLM faces limitations, as models are typically specialized to particular domains or ... | This paper proposes an unconstrained model merging approach that accommodates both homogeneous and heterogeneous multiple LLMs. This is a teacher-student approach in which each query is processed by the student first and refined by the teacher. | ['Large Language Model', 'Large Language Model Merging', 'Teacher-Student Approach', 'Prompt Engineering'] | /pdf/80f2354c7b591b128224dea126366ced537d5a5f.pdf | transfer learning, meta learning, and lifelong learning | null | ['ICLR.cc/2026/Conference/Submission25005/Authors'] |
rK7yOLa15z | 25,003 | rK7yOLa15z | SPECTRA: Spectral Target-Aware Graph Augmentation for Imbalanced Molecular Property Regression | Imbalanced regression is pervasive in molecular property prediction, where the most valuable compounds (e.g., high potency) occupy sparse regions of the label space. Standard Graph Neural Networks (GNNs) optimize average error and underperform on these rare but critical cases, while existing oversampling methods often ... | null | ['Imbalanced Learning', 'Imbalanced Regression', 'Graph-based Learning', 'Graph Representation Learning', 'Molecular Property Prediction'] | /pdf/f3fe71f8c61f0afda5f85d7065f36ef2efab211a.pdf | learning on graphs and other geometries & topologies | null | ['ICLR.cc/2026/Conference/Submission25003/Authors'] |
bm3rbtEMFj | 25,001 | bm3rbtEMFj | ELMUR: External Layer Memory with Update/Rewrite for Long-Horizon RL | Real-world robotic agents must act under partial observability and long horizons, where key cues may appear long before they affect decision making. However, most modern approaches rely solely on instantaneous information, without incorporating insights from the past. Standard recurrent or transformer models struggle w... | ELMUR is a transformer model with layer-local external memory and LRU-based memory updates for long-horizon reasoning in POMDPs | ['RL', 'POMDP', 'Memory', 'Transformer', 'Robotics'] | /pdf/cfd2fdf517449e0e46ee727f55b775b0e2846745.pdf | reinforcement learning | /attachment/42c0b9b6dddf2f4dcff1141421581f361f5f5da6.zip | ['ICLR.cc/2026/Conference/Submission25001/Authors'] |
IJAPVmxQYU | 25,000 | IJAPVmxQYU | Improving Extreme Wind Prediction with Frequency-Informed Learning | Accurate prediction of extreme wind velocities has substantial significance in industry, particularly for the operation management of wind power plants. Although the state-of-the-art data-driven models perform well for general meteorological forecasting, they may exhibit large errors for extreme weather—for example, sy... | null | ['Extreme Weather Forecasting', 'Meteorological Analysis', 'AI for Science'] | /pdf/5412d4187049f4f647d2d332b2764ec331c54941.pdf | applications to physical sciences (physics, chemistry, biology, etc.) | null | ['ICLR.cc/2026/Conference/Submission25000/Authors'] |
tzS9roOTdj | 24,998 | tzS9roOTdj | Reinforcement Learning Fine-Tuning Enhances Activation Intensity and Diversity in the Internal Circuitry of LLMs | Large language models (LLMs) acquire extensive prior knowledge through large-scale pretraining and can be further enhanced via supervised fine-tuning (SFT) or reinforcement learning (RL)–based post-training. A growing body of evidence has shown that RL fine-tuning improves the capability of LLMs beyond what SFT alone a... | This work utilizes edge attribution patching (EAP) to investigate the internal differences of LLMs before and after RL fine-tuning, and uncovers that RL enhances activation intensity and diversity in the internal circuitry of LLMs. | ['Large Language Models; Reinforcement Learning Fine-Tuning; Edge Attribution Patching'] | /pdf/9c081f056bc96764ba5c55afbb00b09fadb6739f.pdf | interpretability and explainable AI | null | ['ICLR.cc/2026/Conference/Submission24998/Authors'] |
XFnrBCAmAQ | 24,996 | XFnrBCAmAQ | Differential Privacy of Hybrid Quantum-Classical Algorithms | Differential privacy has been successfully used to safeguard the privacy of classical algorithms and has more recently been extended to protect the privacy of quantum algorithms. However, in the present era of Noisy Intermediate-Scale Quantum (NISQ) computing, practical applications are limited to hybrid quantum-classi... | null | ['Quantum differential privacy', 'hybrid quantum-classical algorithms', 'noise mechanism'] | /pdf/26688fc61aa244ba69c94feec27a7a68aeb0e10b.pdf | alignment, fairness, safety, privacy, and societal considerations | /attachment/8cd414a22ed185601fc5fb1b79b6521644c02982.zip | ['ICLR.cc/2026/Conference/Submission24996/Authors'] |
IyV1QEc95F | 24,994 | IyV1QEc95F | Model-Aware Tokenizer Transfer | Large Language Models (LLMs) are trained to support an increasing number of languages, yet their predefined tokenizers remain a bottleneck for adapting models to lower-resource or distinct-script languages. Existing tokenizer transfer methods typically rely on semantic heuristics to initialize new embeddings, ignoring ... | This paper introduces Model-Aware Tokenizer Transfer, a method that leverages inter-token communication patterns in attention layers to efficiently adapt pretrained language models to new tokenizers and recover performance across diverse languages. | ['Large Language Models', 'Tokenizer transfer', 'Embedding initialization', 'Attention distillation', 'Model-aware adaptation', 'Multilingual NLP', 'Vocabulary adaptation', 'Low-resource languages', 'Mid-resource languages', 'Model-Aware Tokenizer Transfer', 'Attention Influence Modeling', 'Cross-Tokenizer Distillation... | /pdf/b47c846b9e3cb140ff026eb6dfcffe0937a423e9.pdf | unsupervised, self-supervised, semi-supervised, and supervised representation learning | /attachment/a50063a233ac8f00fb4b70a0e760a29affed9cda.zip | ['ICLR.cc/2026/Conference/Submission24994/Authors'] |
jVKhAfg0LS | 24,993 | jVKhAfg0LS | Adversarial Attacks on Medical Hyperspectral Imaging Exploiting Spectral-Spatial Dependencies and Multiscale Features | Medical hyperspectral imaging (HSI) represents a transformative innovation in diagnosing diseases and planning treatments by capturing detailed spectral and spatial features of tissues. However, the integration of deep learning into medical HSI classification has unveiled critical vulnerabilities to adversarial attacks... | null | ['medical hyperspectral', 'adversarial attack', 'spectral-spatial dependencies', 'multiscale features'] | /pdf/1bcd63b2d0180bd22985f592f2fae55e79f04754.pdf | alignment, fairness, safety, privacy, and societal considerations | /attachment/2e2a81b0195e7b50f576bb6e744f113dcd14babb.zip | ['ICLR.cc/2026/Conference/Submission24993/Authors'] |
JxmjzC6syB | 24,989 | JxmjzC6syB | Benchmarking Stochastic Approximation Algorithms for Fairness-Constrained Training of Deep Neural Networks | The ability to train Deep Neural Networks (DNNs) with constraints is instrumental in improving the fairness of modern machine-learning models. Many algorithms have been analysed in recent years, and yet there is no standard, widely accepted method for the constrained training of DNNs. In this paper, we provide a challe... | We provide a benchmark for comparing stochastic approximation algorithms, based on real-world fairness-constrained learning problems. | ['Fair Machine Learning', 'stochastic approximation', 'Augmented Lagrangian', 'Sequential Quadratic Programming', 'benchmarking'] | /pdf/eebea101b0a319bf45b9eecb82342596c42a9d06.pdf | datasets and benchmarks | /attachment/393f20d1e0652c87966d7409b40825a424783f54.zip | ['ICLR.cc/2026/Conference/Submission24989/Authors'] |
UESTP6dR1K | 24,986 | UESTP6dR1K | Automated Stateful Specialization for Adaptive Agent Systems | Current automated agent design frameworks produce either static workflows that lack adaptability or per-query optimizers that prevent the accumulation of deep, agent-level task expertise. We propose a new direction that reconciles these paradigms: creating stateful teams of specialist agents that accumulate knowledge o... | We introduce a framework that creates persistent, specialist agent teams through an offline lifecycle of discovery and cultivation, and deploys them with an online policy that efficiently adapts the team's structure for novel tasks. | ['LLMs', 'Autonomous Agents', 'Agent Specialization'] | /pdf/e80582ce468d83036273dd5b4ebdc6bd3decc715.pdf | foundation or frontier models, including LLMs | null | ['ICLR.cc/2026/Conference/Submission24986/Authors'] |
yuRO2wZ8su | 24,983 | yuRO2wZ8su | Cross-Lingual Data Scaling for Large Language Models | Large language models (LLMs) achieve consistent performance gains through data scaling, yet low-resource languages remain limited by small and stagnant dataset sizes. To address this limitation, we introduce cross-lingual data scaling, where performance in low-resource languages scales with the dataset size of high-res... | Scaling low-resource language performance with high-resource language data | ['cross-lingual pretraining', 'data scaling', 'low-resource languages'] | /pdf/0c80be481b36da3c5efe475ebf2abee6ae30f04c.pdf | transfer learning, meta learning, and lifelong learning | null | ['ICLR.cc/2026/Conference/Submission24983/Authors'] |
Uh0F0079Lh | 24,982 | Uh0F0079Lh | A Concept Level Energy-Based Framework for Interpreting Black-Box Large Language Model Responses | The widespread adoption of proprietary Large Language Models (LLMs) accessed strictly through closed-access APIs has created a critical challenge for their reliable deployment: a fundamental lack of interpretability. In this work, we propose a model-agnostic, post-hoc interpretation framework to address this. Our appro... | We propose a framework for training a model-agnostic interpreter that identifies influential prompt components for black-box LLM responses by leveraging a global energy-based training objective. | ['Black-box large language models', 'Post-hoc interpretation', 'Energy based models', 'Model-agnostic feature attribution'] | /pdf/8fe1cd0809cb3fc900af2d9f036c3bb1c6025418.pdf | interpretability and explainable AI | /attachment/0c4d9fac4a52f639267b5d3384ffeafa76fafed6.zip | ['ICLR.cc/2026/Conference/Submission24982/Authors'] |
PSW2bVPkVf | 24,981 | PSW2bVPkVf | Probing Memes in LLMs: A Paradigm for the Entangled Evaluation World | Current evaluations of large language models (LLMs) often treat datasets and models in isolation, obscuring phenomena that only emerge from their collective interaction. Items in datasets are reduced to labeled entries, disregarding the multidimensional properties they reveal when examined across model populations. Mod... | null | ['Meme', 'Large Language Model', 'Evaluation', 'Probe', 'Paradigm'] | /pdf/1aa51d3d5b553be1334da2c4a510cb2bd37169c1.pdf | datasets and benchmarks | /attachment/d0f4fc32a792ea8781b6f00a1938c800bbc7b9e2.zip | ['ICLR.cc/2026/Conference/Submission24981/Authors'] |
Ue6QMEDTRV | 24,980 | Ue6QMEDTRV | ExaGPT: Example-Based Machine-Generated Text Detection for Human Interpretability | Detecting texts generated by Large Language Models (LLMs) could cause grave mistakes due to incorrect decisions, such as undermining students' academic dignity. LLM text detection thus needs to ensure the interpretability of the decision, which can help users judge how reliably correct its prediction is. When humans ve... | null | ['Machine-generated Text Detection', 'Human Interpretability'] | /pdf/b046bb70057c47e1abe44be54b0ef3eb6149f760.pdf | alignment, fairness, safety, privacy, and societal considerations | null | ['ICLR.cc/2026/Conference/Submission24980/Authors'] |
hSpA4DAoMk | 24,978 | hSpA4DAoMk | Adaptive Methods Are Preferable in High Privacy Settings: An SDE Perspective | Differential Privacy (DP) is becoming central to large-scale training as privacy regulations tighten. We revisit how DP noise interacts with *adaptivity* in optimization through the lens of *stochastic differential equations*, providing the first SDE-based analysis of private optimizers. Focusing on DP-SGD and DP-SignS... | With SDEs, we show that while DP-SignSGD is better under tight privacy or noisy batches, DP-SGD is better otherwise, and adaptivity needs far less hyperparameter tuning across privacy levels. | ['Stochastic Differential Equations', 'Differential Privacy'] | /pdf/401371433d65ebc4f06fe84432b73962d4e25d36.pdf | optimization | null | ['ICLR.cc/2026/Conference/Submission24978/Authors'] |
Q8HRE2E5wp | 24,977 | Q8HRE2E5wp | Can Recommender Systems Teach Themselves? A Recursive Self-Improving Framework with Fidelity Control | The scarcity of high-quality training data presents a fundamental bottleneck to scaling machine learning models. This challenge is particularly acute in recommendation systems, where extreme sparsity in user interactions leads to rugged optimization landscapes and poor generalization. We propose the Recursive Self-Impr... | null | ['Self-improving; Recommendation System; Data Generation'] | /pdf/3cf9423a98df87df8aeb6580e53c3fa60427fc47.pdf | generative models | null | ['ICLR.cc/2026/Conference/Submission24977/Authors'] |
J8lWv7WOZ5 | 24,975 | J8lWv7WOZ5 | Dyna-ViT: Parameter-Free Dynamic Token Pruning for Efficient Vision Transformers | Vision Transformers (ViTs) achieve state-of-the-art results, yet their quadratic self-attention is inefficient, largely due to redundant processing of low-information background patches. We introduce Dyna-ViT, a simple, parameter-free framework for dynamic token pruning that ranks patches with an unsupervised saliency ... | Dyna-ViT prunes tokens before the encoder using a parameter-free saliency score (top-K patches), keeping a standard ViT backbone while delivering ~20–28% faster training with matched or better accuracy on VOC, CIFAR-100, and Tiny-ImageNet. | ['Vision Transformers (ViT)', 'dynamic token pruning', 'parameter-free saliency', 'sparse token selection', 'efficient attention', 'analytic FLOPs', 'PASCAL VOC', 'CIFAR-100', 'Tiny-ImageNet', 'LIME explainability', 'DynamicViT', 'ToMe'] | /pdf/5c464d97cbb834bdc44d406dd0bb0393277ef424.pdf | applications to computer vision, audio, language, and other modalities | null | ['ICLR.cc/2026/Conference/Submission24975/Authors'] |
IezZyvgdO3 | 24,972 | IezZyvgdO3 | SAGE: Fast, Generalizable and Photorealistic 3D Human Reconstruction from a Single Image | In this paper, we present SAGE, a Large Human Reconstruction Model, that can produce a photorealistic 3D reconstruction of a human from a single image in less than 1 second. To support scalable model training, we first design an effective data generation pipeline to alleviate the shortage of available photorealistic 3D... | We propose a Large Human Reconstruction Model, which can produce a photorealistic 3D reconstruction of a human from a single image in less than 1 second. | ['3D Human Reconstruction; Single Image; Large Human Reconstruction Model'] | /pdf/94a136a9310edac19416728a1d617ca33d65e5a0.pdf | applications to computer vision, audio, language, and other modalities | /attachment/86c2101bf226053f7e7ae7427adfdfcdee21c0da.zip | ['ICLR.cc/2026/Conference/Submission24972/Authors'] |
XFY7kvIFSw | 24,971 | XFY7kvIFSw | MediX-R1: Open Ended Medical Reinforcement Learning | We introduce MediX-R1, an open-ended reinforcement learning (RL) framework for medical multimodal large language models (MLLMs) that enables clinically grounded, free-form answers beyond multiple-choice formats. MediX-R1 fine-tunes a baseline vision–language backbone with Group Relative Policy Optimization (GRPO) and a... | We introduce MediX-R1, an open-ended RL framework that equips medical multimodal LLMs with clinically grounded reasoning and evaluation for reliable free-form answers beyond multiple-choice tasks. | ['Medical MLLMs', 'Reinforcement Learning', 'GRPO', 'Open-ended Reward design', 'Semantic evaluation', 'Open-ended medical reasoning'] | /pdf/5d2be061940e195a32ca4927b344b221f03b15fe.pdf | applications to physical sciences (physics, chemistry, biology, etc.) | null | ['ICLR.cc/2026/Conference/Submission24971/Authors'] |
59OJOgKLzN | 24,970 | 59OJOgKLzN | Rethinking the High-Throughput LLM Inference: An Opportunity for Speculative Decoding | Speculative decoding is a widely adopted method for accelerating autoregressive generation by drafting multiple candidate tokens and verifying them jointly with the target model. While effective in small-batch settings, it has been considered impractical under large-batch inference due to the belief that such regimes a... | null | ['Speculative Decoding', 'Large Language Models', 'High-Throughput Inference'] | /pdf/cc38068eb4474fd3795bdd899efb124f7a5c204c.pdf | foundation or frontier models, including LLMs | /attachment/5dc38b8a0bdb66d17c28b0312572276ad583868c.zip | ['ICLR.cc/2026/Conference/Submission24970/Authors'] |
y6je0oiwEg | 24,969 | y6je0oiwEg | Datatype tagging and prompt alignment: a recipe for boosting LLMs on algorithmic tasks | This paper contributes toward strengthening the bridge between LLMs as programmers and classical ideas in programming languages (PL). Specifically, we show that aligning prompts with *typed programs* enables even small models to reliably emit one-line Python code. We present a simple yet effective recipe consisting of ... | We describe a compact recipe that aligns prompts with a typed program space and reliably emits a single legal Python expression. This helps LLMs align with algorithmic intents more easily and provides a quickfix boost to their algorithmic abilities | ['tokenizers', 'datatype tagging', 'algorithmic alignment', 'LLMs and coding', 'LLMs and arithmetic', 'algebra'] | /pdf/e602014cb454e6dbbbe1e10286eff0c34b3d6e28.pdf | foundation or frontier models, including LLMs | /attachment/629b909f2a7011fc619b8e430ada97509e8df6b5.zip | ['ICLR.cc/2026/Conference/Submission24969/Authors'] |
FVmWGMIoES | 24,966 | FVmWGMIoES | Restoring Trust in Medical LLMs: GNN-Powered Knowledge Graph Reconstruction for Robust Defense | Medical large language models (LLMs) have demonstrated remarkable capabilities in clinical decision support and biomedical question-answering, yet they remain highly vulnerable to adversarial threats such as prompt injection, data poisoning, and parameter tampering. As reported in Nature Medicine (2025), existing defen... | null | ['Medical large language models', 'Robust Defense', 'Data-poisoning Attacks'] | /pdf/25d93b0a279f6b2a757b0b279013792e18d027c9.pdf | alignment, fairness, safety, privacy, and societal considerations | /attachment/73ff035edb9d8783bb56759739fee8f44a7971b9.zip | ['ICLR.cc/2026/Conference/Submission24966/Authors'] |
pW7ORPqwzG | 24,965 | pW7ORPqwzG | Epidemiology of Large Language Models: A Benchmark for Observational Distribution Knowledge | Artificial intelligence (AI) systems hold great promise for advancing various scientific disciplines, and are increasingly used in real-world applications. Despite their remarkable progress, further capabilities are expected in order to achieve more general types of intelligence. A critical distinction in this context ... | null | ['Large Language Models', 'Probabilistic Reasoning'] | /pdf/b9e2734cf7e8c22a9a6363754d8f3b89d84c88bb.pdf | datasets and benchmarks | null | ['ICLR.cc/2026/Conference/Submission24965/Authors'] |
O4rR59WKHL | 24,962 | O4rR59WKHL | Synthesizing Feature Extractors: An Agentic Approach for Algorithm Selection | Feature engineering remains a critical bottleneck in machine learning, often requiring significant manual effort and domain expertise. While end-to-end deep learning models can automate this process by learning latent representations, they do so at the cost of interpretability. We propose a gray-box paradigm for automa... | null | ['constraint solving', 'algorithm selection', 'LLM', 'combinatorial optimization', 'feature extraction'] | /pdf/f7870dd10c490650b3c9fe1dfaf26dabcf9b36ec.pdf | optimization | /attachment/856a135e1ac07553f9025d18b5185342d8ddd0cf.zip | ['ICLR.cc/2026/Conference/Submission24962/Authors'] |
VqnBaeu43F | 24,961 | VqnBaeu43F | From Parameters to Behaviors: Unsupervised Compression of the Policy Space | Despite its recent successes, Deep Reinforcement Learning (DRL) is notoriously sample-inefficient. We argue that this inefficiency stems from the standard practice of optimizing policies directly in the high-dimensional and highly redundant parameter space $\\Theta$. This challenge is greatly compounded in multi-task s... | null | ['reinforcement learning', 'unsupervised reinforcement learning', 'unsupervised representation learning'] | /pdf/3870585c897ac82ab25224802a2d18e5449c09f4.pdf | reinforcement learning | /attachment/12481342ecb228e3021fba6a496cff2e5b535643.zip | ['ICLR.cc/2026/Conference/Submission24961/Authors'] |
JTK6nljnag | 24,960 | JTK6nljnag | Scent of Health (S-O-H): Olfactory Multivariate Time-Series Dataset for Non-Invasive Disease Screening | Exhaled breath analysis has become an advantageous alternative to traditional medical diagnostic methods. Electronic nose (eNose) sensors can enable low-cost, non-invasive disease screening from exhaled breath. Still, progress is limited by small, site-specific datasets and sensor-specific temporal artifacts (e.g., bas... | A multivariate dataset from an eNose sensor for non-invasive disease screening with data from over 1000 unique patients. | ['enose', 'dataset', 'medicine', 'olfactory'] | /pdf/ddf85d4b3d5fec94c5d1951b426d6f775aa9a102.pdf | datasets and benchmarks | null | ['ICLR.cc/2026/Conference/Submission24960/Authors'] |
hnItP9g9Bf | 24,959 | hnItP9g9Bf | A Strategy-Agnostic Framework for Partial Participation in Federated Learning | Partial participation (PP) is a fundamental paradigm in federated learning, where only a fraction of clients can be involved in each communication round. In recent years, a wide range of mechanisms for partial participation have been proposed. However, the effectiveness of a particular technique strongly depends on pro... | null | ['Partial participation', 'Stochastic optimization', 'Convex optimization', 'Non-convex optimization'] | /pdf/86de3e045752aa2ec3e3565da8f1c1ed1d36b766.pdf | optimization | /attachment/f97f2fbe86053a5466704630ceaef032a8b2f918.zip | ['ICLR.cc/2026/Conference/Submission24959/Authors'] |
HDZ2GBwrWo | 24,957 | HDZ2GBwrWo | MoEsturizer: Resource-Efficient MoE Upcycling for Small Language Models | Large language models (LLMs) are typically scaled through billions of parameters and trillions of tokens, making progress largely restricted to organizations with substantial resources. Recent work on Mixture-of-Experts (MoE) upcycling shows that dense pretrained models can be transformed into sparse MoE variants, but ... | 150k samples, one 96GB GPU: upcycling small LMs to sparse MoEs (Experts-Top K: 4-2/8-2) beats dense bases on 9 benchmarks and rivals larger tiers at far lower active parameters; depth scaling or higher top-k adds little. | ['Mixture-of-Experts (MoE)', 'Model upcycling', 'Small language models (SLMs)', 'Resource-constrained training'] | /pdf/f9e30f4befe28c3d17ab2a198c88d423e4928494.pdf | applications to computer vision, audio, language, and other modalities | null | ['ICLR.cc/2026/Conference/Submission24957/Authors'] |
5v0p0Bmp6S | 24,956 | 5v0p0Bmp6S | Learning from Examples and Self-Exploration: A New Paradigm for Dynamic Fusion | Alignment of Large Language Models with human preferences is dominated by two paradigms: Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL), exemplified by methods like Group Relative Policy Optimization. Yet, they face a trade-off challenge: SFT excels at incorporating external knowledge but often fails to f... | null | ['Supervised Fine-Tuning', 'Large Language Models', 'Reinforcement Learning', 'Mathematical Reasoning', 'Dynamic Fusion'] | /pdf/397a5f0e7886a8a057053df12ac9fddc52eaeb89.pdf | reinforcement learning | null | ['ICLR.cc/2026/Conference/Submission24956/Authors'] |
YeWsA0VFZ5 | 24,955 | YeWsA0VFZ5 | LoCoT2V-Bench: A Benchmark for Long-Form and Complex Text-to-Video Generation | Recently text-to-video generation has made impressive progress in producing short, high-quality clips, but evaluating long-form outputs remains a major challenge especially when processing complex prompts. Existing benchmarks mostly rely on simplified prompts and focus on low-level metrics, overlooking fine-grained ali... | LoCoT2V-Bench is a new benchmark for long-form text-to-video generation that uses complex prompts and multi-dimensional metrics. | ['Video Generation Benchmark; Text-to-Video Generation; Long-Form Video Evaluation; Multi-Dimensional Assessment'] | /pdf/5ed408d1f09b5545f3a1a31519d18c7d134cfc20.pdf | datasets and benchmarks | /attachment/d4c5f787a3635c38c0e9337d5b475c0441f266a8.zip | ['ICLR.cc/2026/Conference/Submission24955/Authors'] |
XRf2Uscsa4 | 24,954 | XRf2Uscsa4 | Dual-Path Inertial Odometry with Temporal Attention | We present a dual-path inertial odometry framework that processes the IMU stream through two parallel branches. One branch works directly on raw measurements to preserve high-frequency transients, while the other applies a Savitzky–Golay filter to enforce smoother, Newton-consistent motion and reduce drift. The outputs... | Dual-path IMU odometry fusing raw and SG-filtered signals via temporal attention cuts RONIN error by 10%, improving robustness to devices, sampling rates and backbones (ResNet, TCN, LSTM) with minimal overhead. | ['Dual-path IMU odometry; Temporal attention fusion; Cross-backbone improvement'] | /pdf/efe321146770e37e5bfa3bb54ec88d8b62010613.pdf | learning on time series and dynamical systems | null | ['ICLR.cc/2026/Conference/Submission24954/Authors'] |
3qiCnLf3jf | 24,953 | 3qiCnLf3jf | Best-of-Infinity: Asymptotic Performance of Test-Time Compute | We study best-of-$N$ for large language models (LLMs) where the selection is based on majority voting. In particular, we analyze the limit $N \to \infty$, which we denote as best-of-$\infty$. While this approach achieves impressive performance in the limit, it requires an infinite test-time budget. To address this, we ... | null | ['LLM', 'test-time compute', 'majority voting', 'LLM ensemble'] | /pdf/16ef754fa1d42d4c12ee2c055332802c24e9e87b.pdf | foundation or frontier models, including LLMs | null | ['ICLR.cc/2026/Conference/Submission24953/Authors'] |
pWjehHNGg3 | 24,951 | pWjehHNGg3 | Machine Text Detectors are Membership Inference Attacks | Although membership inference attacks (MIAs) and machine-generated text detection target different goals, identifying training samples and synthetic texts, their methods often exploit similar signals based on a language model’s probability distribution. Despite this shared methodological foundation, the two tasks have ... | null | ['Membership Inference Attack', 'Machine-generated Text Detection'] | /pdf/a6864c7b78666b0e6091d5f0ad378efe7a16116d.pdf | alignment, fairness, safety, privacy, and societal considerations | null | ['ICLR.cc/2026/Conference/Submission24951/Authors'] |
3q3LnQ63Az | 24,950 | 3q3LnQ63Az | Towards Fine-grained Audio Captioning with Multimodal Contextual Fusion | High-quality, large-scale audio captioning is crucial for advancing audio understanding, yet current automated methods often generate captions that lack fine-grained detail and contextual accuracy, primarily due to their reliance on limited unimodal or superficial multimodal information. Drawing inspiration from human ... | null | ['Fine-grained Audio Caption Dataset', 'Large Audio Language Models'] | /pdf/c07a0b06e9c11859cb6dec4a8c43cf44ea2d7603.pdf | datasets and benchmarks | /attachment/f7575694c9eb18c24adc3273e37551d9f0a8c69b.zip | ['ICLR.cc/2026/Conference/Submission24950/Authors'] |
23AHaRy1QO | 24,949 | 23AHaRy1QO | Efficient Fine-tuning with Decomposed Foundation Model | Fine-tuning billion-scale large language models (LLMs) is challenging due to the extremely large model size, particularly in memory-constrained scenarios, even with parameter-efficient fine-tuning (PEFT) and quantization. To address this challenge, we propose a novel method based on the decomposition then fine-tuning (... | null | ['Large Language Model Fine-tuning', 'Foundation Model Decomposition'] | /pdf/312d561d8dcd358407bac5a34453bdc16904d950.pdf | foundation or frontier models, including LLMs | null | ['ICLR.cc/2026/Conference/Submission24949/Authors'] |
B7r8ZkBk4F | 24,948 | B7r8ZkBk4F | Two-Way Garment Transfer: Unified Diffusion Framework for Dressing and Undressing Synthesis | While recent advances in virtual try-on (VTON) have achieved realistic garment transfer to human subjects, its inverse task, virtual try-off (VTOFF), which aims to reconstruct canonical garment templates from dressed humans, remains critically underexplored and lacks systematic investigation. Existing works predominant... | This work propose the first unified framework for joint clothing-centric image synthesis that simultaneously resolves both mask-guided virtual try-on and mask-free virtual try-off. Extensive experiments validate the effectiveness of the model. | ['Diffusion Models', 'Image Generation', 'Virtual Try-On', 'Virtual Try-off'] | /pdf/df1eb507510f6946be38c7286ac71af9d4d1f215.pdf | applications to computer vision, audio, language, and other modalities | /attachment/d40b21d45766621f8b00d8d68526c70eddedc04e.zip | ['ICLR.cc/2026/Conference/Submission24948/Authors'] |
CvICPoKwRf | 24,946 | CvICPoKwRf | GENATATORs: ab initio Gene Annotation With DNA Language Models | Inference of gene structure and location from genome sequences - known as de novo gene annotation - is a fundamental task in biological research. However, sequence grammar encoding gene structure is complex and poorly understood, often requiring costly transcriptomic data for accurate gene annotation. In this work, we ... | null | ['DNA language models', 'genome annotation', 'ab initio', 'long sequence processing', 'recurrent models', 'state space models', 'computational genomics'] | /pdf/70bdbcef1093218113ae466bf00a28ea9c4b973a.pdf | applications to physical sciences (physics, chemistry, biology, etc.) | /attachment/e78e0be3d055f09774390622f54cac488e259607.zip | ['ICLR.cc/2026/Conference/Submission24946/Authors'] |
4YBRDJ5TN3 | 24,945 | 4YBRDJ5TN3 | Exploring Redundancy and Shared Representations for Transformer Models Optimization | Large Language Models (LLMs) deliver state-of-the-art performance but at the cost of extreme computational and energy demands, raising the question of how much of their capacity is truly necessary. This paper explores structural and weight redundancies in Transformer-based architectures, aiming to identify inefficienci... | null | ['Large Language Models', 'Redundancy', 'Weight sharing', 'Model compression', 'Low-rank approximation'] | /pdf/76e4ac384af79d8da464c98e5690cceb91ffd45a.pdf | foundation or frontier models, including LLMs | null | ['ICLR.cc/2026/Conference/Submission24945/Authors'] |
UWGe5PDwjk | 24,944 | UWGe5PDwjk | Decomposing Visual Classification: Assessing Tree-Based Reasoning in VLMs | Vision language models (VLMs) excel at zero-shot visual classification, but their performance on fine-grained tasks and large hierarchical label spaces is understudied. This paper investigates whether structured, tree-based reasoning can enhance VLM performance. We introduce a framework that decomposes classification i... | More structure ≠ better performance. Sometimes the simplest approach (zero-shot prompting) is genuinely superior, and added complexity just creates more opportunities for failure. | ['Vision-Language Models', 'Hierarchical Classification', 'Sensitivity Analysis'] | /pdf/960754ea2115e25ef5d921b6c6bb4e257779255e.pdf | datasets and benchmarks | null | ['ICLR.cc/2026/Conference/Submission24944/Authors'] |
xFdT63wm5e | 24,943 | xFdT63wm5e | Unified Continuous Generative Models for Denoising-based Diffusion | Recent advances in continuous generative models, encompassing multi-step processes such as diffusion and flow matching (typically requiring $8$-$1000$ steps) and few-step methods such as consistency models (typically $1$-$8$ steps), have yielded impressive generative performance. However, existing work often treats... | null | ['generative modeling', 'denoising diffusion', 'consistency model', 'image generation'] | /pdf/be5df6cd8475f363a14a0f34a1f6d89629985e6d.pdf | generative models | /attachment/9e42aa3c5ead91dd0b69435d56f490167dc541b7.zip | ['ICLR.cc/2026/Conference/Submission24943/Authors'] |
4H8xZA4zuj | 24,942 | 4H8xZA4zuj | On the Interaction of Batch Noise, Adaptivity, and Compression, under $(L_0,L_1)$-Smoothness: An SDE Approach | Understanding the dynamics of distributed stochastic optimization requires accounting for several major factors that affect convergence, such as gradient noise, communication compression, and the use of adaptive update rules. While each factor has been studied in isolation, their joint effect under realistic assumption... | We develop an SDE-based framework for DCSGD and DSignSGD, showing DCSGD needs noise- and compression-dependent normalization for stability, while DSignSGD remains robust and convergent even under heavy-tailed noise. | ['Stochastic Differential Equations', '$(L_0', 'L_1)$-Smoothness', 'Distributed Learning', 'Adaptivity'] | /pdf/7097654e3053818b66e4550423e0ffb1af0d5d1d.pdf | optimization | null | ['ICLR.cc/2026/Conference/Submission24942/Authors'] |
pnw3FGpqzF | 24,941 | pnw3FGpqzF | Knowledge-Enhanced Tabular Data Generation | Tabular data generation methods aim to synthesize artificial samples by learning the distribution of training data. However, most existing tabular data generation methods are purely data-driven. They perform poorly when the training samples are insufficient or when there exists a distribution shift between training a... | null | ['Tabular data generation'] | /pdf/9daa4802f466c09f5273a48aac19d663258053bf.pdf | generative models | null | ['ICLR.cc/2026/Conference/Submission24941/Authors'] |
QnHENtIAKL | 24,939 | QnHENtIAKL | Adaptive kernel selection for Stein Variational Gradient Descent | A central challenge in Bayesian inference is efficiently approximating posterior distributions. Stein Variational Gradient Descent (SVGD) is a popular variational inference method which transports a set of particles to approximate a target distribution. The SVGD dynamics are governed by a reproducing kernel Hilbert spa... | null | ['adaptive kernel selection', 'Stein Variational Gradient Descent', 'kernelized Stein discrepancy'] | /pdf/031d520ccbd83369b192af1bde8cfec036b49b05.pdf | probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.) | /attachment/78873d6fec904d88f3bb7e01852288740d924873.zip | ['ICLR.cc/2026/Conference/Submission24939/Authors'] |
crKJJ4Ej60 | 24,938 | crKJJ4Ej60 | Copy-Paste to Mitigate Large Language Model Hallucinations | While Retrieval-Augmented Generation (RAG) enables large language models (LLMs) to generate contextually grounded responses, contextual faithfulness remains challenging as LLMs may not consistently trust provided context, leading to hallucinations that undermine reliability. We observe an inverse correlation between re... | We propose CopyPasteLLM that trains models to simply copy from context, achieving 12.2-24.5% accuracy improvements with only 365 training samples (1/50th of baseline) and revealing how copy-paste recalibrates parametric knowledge. | ['RAG Hallucination', 'Contextual Faithfulness', 'Model Interpretability', 'Large Language Model', 'Knowledge Conflict'] | /pdf/942c621d35e45ff23d09823ec1f1015dc180dcef.pdf | foundation or frontier models, including LLMs | null | ['ICLR.cc/2026/Conference/Submission24938/Authors'] |
1qLZsyJN2t | 24,937 | 1qLZsyJN2t | The Information Game: Active Inference as Bilevel Optimization and a Game-Theoretic Benchmark for LLM Inquiry | Large language models (LLMs) increasingly operate in settings where they must gather information rather than simply recall facts. We model this task as a multi-street game of incomplete information casting each round of information gathering as a bilevel optimization: an inner variational Bayesian step that updates be... | We frame question answering as bilevel optimization and use that to benchmark frontier LLMs on their efficiency at reducing uncertainty through question asking; we find these LLMs still lag an information-theoretic oracle | ['active inference', 'bilevel optimization', 'question asking', 'query optimality', 'inference', 'LLMs'] | /pdf/d906aedf92345936f636241eb400a1db1efdf48d.pdf | foundation or frontier models, including LLMs | /attachment/eb639218e64dadd32da00fed52d513b28b430fd5.zip | ['ICLR.cc/2026/Conference/Submission24937/Authors'] |
S8bmkHXqgT | 24,934 | S8bmkHXqgT | Interpretable Preference Elicitation: Aligning User Intent with Controllable Long-tailed Learning | Long-tailed recognition remains a significant challenge, where models often struggle with tail class performance and adaptability to diverse user preferences. While recent controllable paradigms leveraging hypernetworks allow numerical specification of head-tail trade-offs, defining these multi-dimensional preference v... | null | ['Long-tail learning'] | /pdf/e6a44a2afe03f21badbbada67bd8967aa184aa9c.pdf | unsupervised, self-supervised, semi-supervised, and supervised representation learning | /attachment/14af7c0b5b46143aa2f31aa87f65fa65d9f7cb5a.zip | ['ICLR.cc/2026/Conference/Submission24934/Authors'] |
CPajDOuA3h | 24,933 | CPajDOuA3h | RL-Obfuscation: Can Language Models Learn to Evade Latent-Space Monitors? | Latent-space monitors aim to detect undesirable behaviours in Large Language Models by leveraging their internal representations rather than relying solely on black-box outputs. These methods have shown promise in identifying behaviours such as deception and unsafe completions. However, these monitors may themselves be... | null | ['Probes', 'AI Safety', 'model internals', 'capability evaluation', 'interpretability', 'whitebox control'] | /pdf/42c58c75076c57e37df3dd81b619eca7e58d42c9.pdf | alignment, fairness, safety, privacy, and societal considerations | null | ['ICLR.cc/2026/Conference/Submission24933/Authors'] |
3FOfBcEEy1 | 24,931 | 3FOfBcEEy1 | Action-Conditioned Transformers for Decentralized Multi-Agent World Models | Multi-agent reinforcement learning (MARL) has achieved strong results on large-scale decision making, yet most methods are model-free, limiting sample efficiency and stability under non-stationary teammates. Model-based reinforcement learning (MBRL) can reduce data usage, but planning and search scale poorly with joint... | A decentralized transformer world model for multi-agent RL that couples Perceiver global context with action-conditioned contrastive prediction, yielding coherent long-horizon rollouts and stronger teammate coordination. | ['Multi-Agent Reinforcement Learning', 'Reinforcement Learning', 'Contrastive Learning', 'World Model'] | /pdf/0bc3851bf4aa3d658dcd44a2ea51ca0fe4fbe038.pdf | reinforcement learning | /attachment/82710fce9d3fc1704abc1dc8c6904f765bd3a0ab.zip | ['ICLR.cc/2026/Conference/Submission24931/Authors'] |
bqaClExo4A | 24,929 | bqaClExo4A | From Guanyin, UFOs to Paradise: Capturing Cultural Variation in Dream Interpretation | Humans have long sought to uncover the mystery of dreams, from divine signs for predicting fortune and future to psychology framing them as reflections of the subconscious. This curiosity extends to large language models (LLMs), where commercial LLMs, e.g., OpenAI and DeepSeek, exhibit preliminary dream interpretation ab... | null | ['bilingual dream interpretation', 'cross-cultural alignment'] | /pdf/c7da198d015236c4649293205eb5621d486b2c9a.pdf | datasets and benchmarks | null | ['ICLR.cc/2026/Conference/Submission24929/Authors'] |
gCkRzjVT7m | 24,927 | gCkRzjVT7m | Parameter-Efficient Fine-Tuning of LLMs with Mixture of Space Experts | Large language models (LLMs) have achieved remarkable progress, with Parameter-Efficient Fine-Tuning (PEFT) emerging as a key technique for downstream task adaptation. However, existing PEFT methods mainly operate in Euclidean space, fundamentally limiting their capacity to capture complex geometric structures inherent... | null | ['Large Language Models', 'Non-Euclidean Space', 'Parameter-Efficient Fine-tuning', 'Mixture of Experts'] | /pdf/645bceaf9c34fcd107fdb894814bf44cffb4e344.pdf | foundation or frontier models, including LLMs | null | ['ICLR.cc/2026/Conference/Submission24927/Authors'] |
yTzFfHOyyU | 24,926 | yTzFfHOyyU | Noise-Guided Transport for Imitation Learning | We consider imitation learning in the low-data regime, where only a limited number of expert demonstrations are available. In this setting, methods that rely on large-scale pretraining or high-capacity architectures can be difficult to apply, and efficiency with respect to demonstration data becomes critical. We introd... | Noise-Guided Transport (NGT) is a lightweight off-policy imitation learning method for low-data settings that frames imitation as an optimal transport problem solved adversarially. | ['Reinforcement Learning', 'Imitation Learning', 'Optimal Transport'] | /pdf/49e8dec3c62f05697607971675455aa24aa3f266.pdf | reinforcement learning | /attachment/28b67c54fe2e1fa1f69bd34b3ecd7932a59907d0.zip | ['ICLR.cc/2026/Conference/Submission24926/Authors'] |
MtdNbFQp5O | 24,924 | MtdNbFQp5O | Single LLM Debate, MoLaCE: Mixture of Latent Concept Experts Against Confirmation Bias | Large language models (LLMs) are highly vulnerable to input confirmation bias. When a prompt implies a preferred answer, models often reinforce that bias rather than explore alternatives. This phenomenon remains underexplored, yet it is already harmful in base models and poses an even greater risk in multi-agent debat... | null | ['LLM', 'Question Answering', 'Bias'] | /pdf/ee5a1f085efa5995fe9dbcd8328b9c0452bc2514.pdf | foundation or frontier models, including LLMs | null | ['ICLR.cc/2026/Conference/Submission24924/Authors'] |
2qS5fes4RL | 24,923 | 2qS5fes4RL | EchoRAG: A Cognitive Memory-Inspired Framework for RAG with Semantic Gist | Retrieval-Augmented Generation (RAG), a pivotal technology connecting external knowledge with large language models, has been widely applied in various knowledge-intensive tasks. However, due to the inherent discrete representation of textual information and retrieval paradigms in current mainstream RAG systems, there ... | null | ['Retrieval-augmented generation', 'Large language models'] | /pdf/4e8f209159adc29afb710e418e26221cf7b92b33.pdf | applications to computer vision, audio, language, and other modalities | /attachment/1fd57277eddfbef2220db500445f8fd7ec046af3.zip | ['ICLR.cc/2026/Conference/Submission24923/Authors'] |
LRpJ5sYgcy | 24,922 | LRpJ5sYgcy | BayesShift: Evolving Domain Generalization via Hamiltonian Monte Carlo | Evolving Domain Generalization (EDG) addresses learning scenarios where the data distribution evolves over time, a setting crucial for real-world applications under varying environmental conditions. Recently, structure-aware variational models have shown promise by disentangling static and variant information, but thei... | We propose a full Bayesian framework that parameterizes a latent structure-aware autoencoder to capture static features, distribution drift, and categorical shifts, leveraging Hamiltonian Monte Carlo to approximate the posterior over latent variables | ['Evolving Domain Generalization', 'Hamiltonian Monte Carlo', 'Variational Autoencoder'] | /pdf/be5487dd2334950f8cdef6b65e19ea865dc87974.pdf | transfer learning, meta learning, and lifelong learning | /attachment/94f28a1615084008707a86064cca2689ee50e413.zip | ['ICLR.cc/2026/Conference/Submission24922/Authors'] |
oKmnyMNLGT | 24,920 | oKmnyMNLGT | Scaling Language Model Reliability via Determinantal Point Process Prompt Sampling | Language models achieve stronger performance when given multiple opportunities to solve a task, as in best-of-$N$ inference. However, naive approaches to scaling at test time—such as high-temperature sampling or random prompt ensembling—suffer from correlated failures, where many attempts repeat the same mistakes. We a... | null | ['Language Model Reliability', 'Prompt Sampling', 'Determinantal Point Process'] | /pdf/7c428664b0f8a53f0dd998e39c78551093cc7d72.pdf | unsupervised, self-supervised, semi-supervised, and supervised representation learning | null | ['ICLR.cc/2026/Conference/Submission24920/Authors'] |
y1OWj26FCo | 24,916 | y1OWj26FCo | Programming by Backprop: Learning Behaviour from Symbolic Descriptions | Large language models (LLMs) are typically trained to acquire behaviours from demonstrations or experience, yet much of their training data consists of symbolic descriptions: instructions, rules, and strategies that specify procedures without examples. We investigate whether LLMs can learn to execute such behaviours di... | LLMs can learn to execute procedures that are described symbolically in their training data, but only with specific finetuning curricula. | ['Large Language Models', 'Abstraction', 'Procedural Knowledge'] | /pdf/0dfdbde6c5f44af2f328e183bda747836949283f.pdf | foundation or frontier models, including LLMs | null | ['ICLR.cc/2026/Conference/Submission24916/Authors'] |
t3FSGlOcsG | 24,915 | t3FSGlOcsG | FedHyMoe: Hypernetwork-Driven Mixture-of-Experts for Federated Domain Generalization | Federated Learning (FL) enables collaborative model training without sharing raw data, but most existing solutions implicitly assume that each client’s data originate from a single homogeneous domain. In practice, domain shift is pervasive: clients gather data from diverse sources, domains are heterogeneously distribut... | FedHyMoe uses hypernetworks with client embeddings to synthesize Mixture-of-Experts adapters, enabling robust, efficient, and privacy-preserving domain generalization in federated learning under heterogeneity and partial participation. | ['Federated Learning', 'Domain Generalization', 'Hypernetworks', 'Mixture-of-Experts', 'Privacy-Preserving Learning', 'Cross-Domain Adaptation'] | /pdf/f2243f77eacb3fb6ff575638a8a73e87426f75ba.pdf | other topics in machine learning (i.e., none of the above) | null | ['ICLR.cc/2026/Conference/Submission24915/Authors'] |
dYaIotpCiK | 24,913 | dYaIotpCiK | Self-Guided Plan Extraction for Instruction-Following Tasks with Goal-Conditional Reinforcement Learning | We introduce a framework for instruction-following tasks. Unlike prior methods that rely on predefined subtasks, our approach enables a language model to generate and refine high-level plans through a self-learning mechanism, reducing the need for manual dataset annotation. The method involves iterative co-training: an... | A self-improving framework couples language-model plan generation with reinforcement learning feedback to achieve robust, generalizable instruction following without predefined subtasks. | ['Instruction Following', 'Reinforcement Learning', 'Multimodal RL'] | /pdf/d88db8cd1ca1c6f2aa39e74cc65237ced4cde352.pdf | applications to robotics, autonomy, planning | null | ['ICLR.cc/2026/Conference/Submission24913/Authors'] |
f43lpq1Q8i | 24,912 | f43lpq1Q8i | When Validity Isn't Enough: Reliability Gaps in Molecular Generation and KRAS Case Study | Molecule generation remains a core challenge in computational chemistry. Practical use of generative models is complicated by strict chemical, structural, and biological constraints: candidate compounds must satisfy physicochemical bounds, avoid reactive or toxic substructures, be synthesizable, and plausibly bind a ta... | null | ['Molecule Generation', 'Generative Models', 'KRAS', 'Benchmark'] | /pdf/98e09c2d0c1aaf8716b2aa0002ed353623b602ff.pdf | datasets and benchmarks | /attachment/836d7f40afd35369c91e74860a15bbb416e13469.zip | ['ICLR.cc/2026/Conference/Submission24912/Authors'] |
XKLPlnfZzM | 24,911 | XKLPlnfZzM | Learning to Deaggregate: Large-scale Trajectory Generation with Spatial Priors | Generating realistic large-scale trajectories is essential for applications in urban mobility and transportation, yet current generative models either do not offer any controllability or rely on strong sample-specific conditioning. We introduce the Temporal Deaggregation Diffusion Model (TDDM), a hierarchical framework... | null | ['trajectory generation', 'deaggregation', 'spatial priors', 'urban mobility', 'diffusion models'] | /pdf/45dfd52680bd33c4ae1547c0110e64cbc9ce177f.pdf | generative models | null | ['ICLR.cc/2026/Conference/Submission24911/Authors'] |
CPxZClPMiy | 24,910 | CPxZClPMiy | Aria: an Agent for Retrieval and Iterative Auto-Formalization via Dependency Graph | Accurate auto-formalization of theorem statements is essential for advancing automated discovery and verification of research-level mathematics, yet remains a major bottleneck for LLMs due to hallucinations, semantic mismatches, and their inability to synthesize new definitions. To tackle these issues, we present Aria ... | null | ['Lean 4', 'Autoformalization', 'LLM', 'Graph-of-Thought', 'Retrieval Augmented Generation'] | /pdf/e212662185b551e06441435260d5f375f2bc6aec.pdf | foundation or frontier models, including LLMs | null | ['ICLR.cc/2026/Conference/Submission24910/Authors'] |
Tf29oMgErW | 24,909 | Tf29oMgErW | ReynoldsFlow: Spatiotemporal Flow Representations for Video Learning | Representation learning for videos has largely relied on spatiotemporal modules embedded in deep architectures, which, while effective, often require heavy computation and heuristic design. Existing approaches, such as 3D convolutional modules or optical flow networks, may also overlook changes in illumination, scale v... | We propose ReynoldsFlow, a physics-inspired spatiotemporal flow representation that is lightweight, interpretable, and robust to photometric and structural variations for efficient video representation learning. | ['Video Representation Learning', 'Spatiotemporal Modeling', 'Physics-Inspired Flow', 'Helmholtz Decomposition', 'Reynolds Transport Theorem'] | /pdf/f684429d37a7e3c2489e07985f6eda3f13e9c2d7.pdf | unsupervised, self-supervised, semi-supervised, and supervised representation learning | null | ['ICLR.cc/2026/Conference/Submission24909/Authors'] |
YsVQBe0HNA | 24,907 | YsVQBe0HNA | BatonVoice: An Operationalist Framework for Enhancing Controllable Speech Synthesis with Linguistic Intelligence from LLMs | The rise of Large Language Models (LLMs) is reshaping multimodal models, with speech synthesis being a prominent application. However, existing approaches often underutilize the linguistic intelligence of these models, typically failing to leverage their powerful instruction-following capabilities. This limitation hind... | null | ['LLM', 'TTS'] | /pdf/3da98520a275f06fcd211b524f04bea1fe84f9e6.pdf | foundation or frontier models, including LLMs | /attachment/b7975f0643fdb45924b946567981e470a3238593.zip | ['ICLR.cc/2026/Conference/Submission24907/Authors'] |
DVnK3ZgG9D | 24,905 | DVnK3ZgG9D | Empowering Channel-of-Mobile-Experts with Informative Hybrid-Capabilities Reasoning | Mobile Agents can autonomously execute user instructions, which requires hybrid-capabilities reasoning, including screen summary, subtask planning, action decision and action function. However, existing agents struggle to achieve both decoupled enhancement and balanced integration of these capabilities. While Mixture-o... | We propose Channel-of-Mobile-Experts (CoME) to enhance hybrid-capabilities reasoning on mobile task automation, via information-gain-driven DPO | ['Mobile Agent', 'Hybrid-Capabilities Reasoning'] | /pdf/3c5e3666d13bfcfb91f58f4e4fd6ec14e0487d5b.pdf | applications to computer vision, audio, language, and other modalities | null | ['ICLR.cc/2026/Conference/Submission24905/Authors'] |
nTWZCXrnvs | 24,902 | nTWZCXrnvs | FedDefuse: Mitigating Strategic Model Poisoning for Federated Learning via Divide-and-Compute Driven Composite Behavioral Analysis | Federated Learning (FL) enables collaborative model training across distributed clients without sharing local data, but it is highly vulnerable to strategic model poisoning, where adversaries dominate participation rounds and may selectively launch arbitrary attacks under non-i.i.d. data. Existing defenses, often relyi... | null | ['Federated learning', 'model poisoning attack', 'non-i.i.d. data', 'composite behavioral pattern'] | /pdf/171c619e6dfb35a965c3d0bef10b80c735a17bb5.pdf | other topics in machine learning (i.e., none of the above) | null | ['ICLR.cc/2026/Conference/Submission24902/Authors'] |
Lx4MhiIzhH | 24,901 | Lx4MhiIzhH | Automated Overrefusal Prompt Generation and Repair with Delta Debugging | While safety alignment and guardrails help large language models (LLMs) avoid harmful outputs, they also introduce the risk of overrefusal—unwarranted rejection of benign queries that only appear risky. We introduce DDOR (Delta Debugging for OverRefusal), a fully automated, causally grounded framework that generates in... | We introduce DDOR, an automated, causally grounded framework that detects and reduces LLM overrefusal by extracting interpretable refusal triggers. | ['overrefusal', 'llm', 'delta debugging', 'safety alignment', 'prompt repair'] | /pdf/0fc3a586c5a02de384274c9181b65457b35666c2.pdf | alignment, fairness, safety, privacy, and societal considerations | /attachment/690429d5f9bdc24de861fb00aeb1deaeeaa7f638.zip | ['ICLR.cc/2026/Conference/Submission24901/Authors'] |